Sample records for human visual detection

  1. Research on metallic material defect detection based on bionic sensing of human visual properties

    NASA Astrophysics Data System (ADS)

    Zhang, Pei Jiang; Cheng, Tao

    2018-05-01

    The human visual system can quickly lock onto areas of interest in a complex natural environment and focus on them. Exploiting this attention mechanism and simulating human visual imaging characteristics, this paper proposes a bionic-sensing visual inspection model for detecting defects of metallic materials in the mechanical field. First, biologically motivated low-level visual saliency features are extracted, and expert markings of defects are used as the intermediate features of simulated visual perception. An SVM is then trained on the resulting high-level features of metal-material visual defects. Finally, by weighting the contribution of each feature level, a defect detection model for metallic materials that simulates human visual characteristics is obtained.
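
    A minimal sketch of how such a pipeline could be assembled is shown below, assuming hand-crafted low-level saliency features (center-surround contrast, edge energy), expert-marked defect patches as training labels, and an RBF-SVM classifier; the feature set and helper names are illustrative assumptions, not the authors' implementation.

    ```python
    # Illustrative sketch only: saliency-style features + an SVM defect classifier
    # in the spirit of the abstract above. Features and names are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel
    from sklearn.svm import SVC

    def lowlevel_saliency_features(patch):
        """Biologically inspired low-level features for one grayscale patch."""
        patch = patch.astype(float)
        center = gaussian_filter(patch, sigma=1.0)
        surround = gaussian_filter(patch, sigma=4.0)
        contrast = center - surround                  # center-surround (DoG) contrast
        gx, gy = sobel(patch, axis=1), sobel(patch, axis=0)
        edge_energy = np.hypot(gx, gy)                # gradient/edge energy
        return np.array([contrast.mean(), contrast.std(),
                         edge_energy.mean(), edge_energy.std(),
                         patch.mean(), patch.std()])

    def train_defect_svm(patches, labels):
        """patches: list of 2-D arrays; labels: 1 = expert-marked defect, 0 = sound."""
        X = np.stack([lowlevel_saliency_features(p) for p in patches])
        return SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)

    # usage: clf = train_defect_svm(train_patches, train_labels)
    #        is_defect = clf.predict(lowlevel_saliency_features(new_patch)[None, :])
    ```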

  2. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions.

    PubMed

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.

  3. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions

    PubMed Central

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions. PMID:27783686

  4. Multilevel depth and image fusion for human activity detection.

    PubMed

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  5. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy.

    PubMed

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W Tecumseh

    2012-07-19

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups.

  6. Production and perception rules underlying visual patterns: effects of symmetry and hierarchy

    PubMed Central

    Westphal-Fitch, Gesche; Huber, Ludwig; Gómez, Juan Carlos; Fitch, W. Tecumseh

    2012-01-01

    Formal language theory has been extended to two-dimensional patterns, but little is known about two-dimensional pattern perception. We first examined spontaneous two-dimensional visual pattern production by humans, gathered using a novel touch screen approach. Both spontaneous creative production and subsequent aesthetic ratings show that humans prefer ordered, symmetrical patterns over random patterns. We then further explored pattern-parsing abilities in different human groups, and compared them with pigeons. We generated visual plane patterns based on rules varying in complexity. All human groups tested, including children and individuals diagnosed with autism spectrum disorder (ASD), were able to detect violations of all production rules tested. Our ASD participants detected pattern violations with the same speed and accuracy as matched controls. Children's ability to detect violations of a relatively complex rotational rule correlated with age, whereas their ability to detect violations of a simple translational rule did not. By contrast, even with extensive training, pigeons were unable to detect orientation-based structural violations, suggesting that, unlike humans, they did not learn the underlying structural rules. Visual two-dimensional patterns offer a promising new formally-grounded way to investigate pattern production and perception in general, widely applicable across species and age groups. PMID:22688636

  7. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067

  8. Subthalamic nucleus detects unnatural android movement.

    PubMed

    Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi

    2017-12-19

    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.

  9. The visual analysis of emotional actions.

    PubMed

    Chouchourelou, Arieta; Matsuka, Toshihiko; Harber, Kent; Shiffrar, Maggie

    2006-01-01

    Is the visual analysis of human actions modulated by the emotional content of those actions? This question is motivated by a consideration of the neuroanatomical connections between visual and emotional areas. Specifically, the superior temporal sulcus (STS), known to play a critical role in the visual detection of action, is extensively interconnected with the amygdala, a center for emotion processing. To the extent that amygdala activity influences STS activity, one would expect to find systematic differences in the visual detection of emotional actions. A series of psychophysical studies tested this prediction. Experiment 1 identified point-light walker movies that convincingly depicted five different emotional states: happiness, sadness, neutral, anger, and fear. In Experiment 2, participants performed a walker detection task with these movies. Detection performance was systematically modulated by the emotional content of the gaits. Participants demonstrated the greatest visual sensitivity to angry walkers. The results of Experiment 3 suggest that local velocity cues to anger may account for high false alarm rates to the presence of angry gaits. These results support the hypothesis that the visual analysis of human action depends upon emotion processes.

  10. Comparison of visual sensitivity to human and object motion in autism spectrum disorder.

    PubMed

    Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie

    2010-08-01

    Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.

  11. Spatial interactions reveal inhibitory cortical networks in human amblyopia.

    PubMed

    Wong, Erwin H; Levi, Dennis M; McGraw, Paul V

    2005-10-01

    Humans with amblyopia have a well-documented loss of sensitivity for first-order, or luminance defined, visual information. Recent studies show that they also display a specific loss of sensitivity for second-order, or contrast defined, visual information; a type of image structure encoded by neurons found predominantly in visual area A18/V2. In the present study, we investigate whether amblyopia disrupts the normal architecture of spatial interactions in V2 by determining the contrast detection threshold of a second-order target in the presence of second-order flanking stimuli. Adjacent flanks facilitated second-order detectability in normal observers. However, in marked contrast, they suppressed detection in each eye of the majority of amblyopic observers. Furthermore, strabismic observers with no loss of visual acuity show a similar pattern of detection suppression. We speculate that amblyopia results in predominantly inhibitory cortical interactions between second-order neurons.

  12. Organic light emitting board for dynamic interactive display

    PubMed Central

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-01-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications. PMID:28406151

  13. Organic light emitting board for dynamic interactive display

    NASA Astrophysics Data System (ADS)

    Kim, Eui Hyuk; Cho, Sung Hwan; Lee, Ju Han; Jeong, Beomjin; Kim, Richard Hahnkee; Yu, Seunggun; Lee, Tae-Woo; Shim, Wooyoung; Park, Cheolmin

    2017-04-01

    Interactive displays involve the interfacing of a stimuli-responsive sensor with a visual human-readable response. Here, we describe a polymeric electroluminescence-based stimuli-responsive display method that simultaneously detects external stimuli and visualizes the stimulant object. This organic light-emitting board is capable of both sensing and direct visualization of a variety of conductive information. Simultaneous sensing and visualization of the conductive substance is achieved when the conductive object is coupled with the light emissive material layer on application of alternating current. A variety of conductive materials can be detected regardless of their work functions, and thus information written by a conductive pen is clearly visualized, as is a human fingerprint with natural conductivity. Furthermore, we demonstrate that integration of the organic light-emitting board with a fluidic channel readily allows for dynamic monitoring of metallic liquid flow through the channel, which may be suitable for biological detection and imaging applications.

  14. Colour and luminance contrasts predict the human detection of natural stimuli in complex visual environments.

    PubMed

    White, Thomas E; Rojas, Bibiana; Mappes, Johanna; Rautiala, Petri; Kemp, Darrell J

    2017-09-01

    Much of what we know about human colour perception has come from psychophysical studies conducted in tightly-controlled laboratory settings. An enduring challenge, however, lies in extrapolating this knowledge to the noisy conditions that characterize our actual visual experience. Here we combine statistical models of visual perception with empirical data to explore how chromatic (hue/saturation) and achromatic (luminant) information underpins the detection and classification of stimuli in a complex forest environment. The data best support a simple linear model of stimulus detection as an additive function of both luminance and saturation contrast. The strength of each predictor is modest yet consistent across gross variation in viewing conditions, which accords with expectation based upon general primate psychophysics. Our findings implicate simple visual cues in the guidance of perception amidst natural noise, and highlight the potential for informing human vision via a fusion between psychophysical modelling and real-world behaviour. © 2017 The Author(s).
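
    For illustration, the additive detection model described above can be fit as a two-predictor regression over trials; the logistic link and variable names below are assumptions, since the abstract specifies only an additive combination of luminance and saturation contrast.

    ```python
    # Minimal sketch of the additive detection model described above:
    # P(detect) as a function of luminance contrast + saturation contrast.
    # The logistic link and names are assumptions for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_detection_model(lum_contrast, sat_contrast, detected):
        """Each argument is a 1-D array over trials; detected is 0/1."""
        X = np.column_stack([lum_contrast, sat_contrast])
        model = LogisticRegression()
        model.fit(X, detected)
        return model  # model.coef_ gives the weight of each contrast cue

    # usage: m = fit_detection_model(dL, dS, hits)
    #        p = m.predict_proba(np.column_stack([dL_new, dS_new]))[:, 1]
    ```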

  15. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. A drawback of active contours concerning their initialization is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
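
    The computational principle the model is derived from, active-contour evolution via a level-set method, can be sketched as a generic edge-stopped curvature flow with a balloon term; the formulation below is a textbook illustration, not the author's neural implementation.

    ```python
    # Minimal level-set contour evolution sketch: edge-stopped curvature flow
    # with a balloon term. Generic formulation, assumed for illustration.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def edge_stopping(image, sigma=2.0):
        """g(I) close to 0 on strong edges, close to 1 in flat regions."""
        gy, gx = np.gradient(gaussian_filter(image.astype(float), sigma))
        return 1.0 / (1.0 + gx**2 + gy**2)

    def evolve_level_set(phi, g, steps=200, dt=0.2, balloon=1.0, eps=1e-8):
        for _ in range(steps):
            py, px = np.gradient(phi)
            norm = np.sqrt(px**2 + py**2) + eps
            # curvature = divergence of the normalized gradient field
            nyy, _ = np.gradient(py / norm)
            _, nxx = np.gradient(px / norm)
            curvature = nxx + nyy
            phi = phi + dt * g * norm * (curvature + balloon)
        return phi  # the zero level set of phi outlines detected objects

    # usage: phi0 = initial signed-distance function (e.g. a small circle)
    #        phi = evolve_level_set(phi0, edge_stopping(img))
    ```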

  16. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) mainly affects peripheral vision first. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572

  17. Studying the lower limit of human vision with a single-photon source

    NASA Astrophysics Data System (ADS)

    Holmes, Rebecca; Christensen, Bradley; Street, Whitney; Wang, Ranxiao; Kwiat, Paul

    2015-05-01

    Humans can detect a visual stimulus of just a few photons. Exactly how few is not known--psychological and physiological research have suggested that the detection threshold may be as low as one photon, but the question has never been directly tested. Using a source of heralded single photons based on spontaneous parametric downconversion, we can directly characterize the lower limit of vision. This system can also be used to study temporal and spatial integration in the visual system, and to study visual attention with EEG. We may eventually even be able to investigate how human observers perceive quantum effects such as superposition and entanglement. Our progress and some preliminary results will be discussed.

  18. Application of local binary pattern and human visual Fibonacci texture features for classification different medical images

    NASA Astrophysics Data System (ADS)

    Sanghavi, Foram; Agaian, Sos

    2017-05-01

    The goals of this paper are to (a) test a nuclei-based computer-aided cancer detection system that uses human-visual-system-based features on histopathology images, and (b) compare the results of the proposed system with Local Binary Pattern and modified Fibonacci-p pattern systems. System performance is evaluated using parameters such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human-visual-based system, compared to 87.42% and 94.70% observed for Local Binary Patterns and the modified Fibonacci-p patterns, respectively.
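
    For reference, a sketch of the Local Binary Pattern baseline mentioned above, using a uniform-LBP histogram as the texture descriptor; the SVM classifier is an assumed choice, and the modified Fibonacci-p and human-visual-based descriptors are not reproduced here.

    ```python
    # Illustrative LBP texture-feature baseline; parameters are assumptions.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(gray, P=8, R=1):
        """Uniform LBP histogram as a texture descriptor for one grayscale image."""
        codes = local_binary_pattern(gray, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
        return hist  # P + 2 bins for 'uniform' LBP codes

    # usage: X = np.stack([lbp_histogram(im) for im in images])
    #        clf = SVC().fit(X, labels)   # cancer vs. non-cancer labels
    ```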

  19. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

    We evaluate whether 3D search requires model observers that take into account peripheral human visual processing (foveated models) to predict human observer performance. We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small and bright sphere), while the other one was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter model, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality for search tasks in 3D imaging modalities such as digital breast tomosynthesis or computed tomography.
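
    Below is a sketch of a basic (non-foveated) channelized Hotelling observer, the class of model observer the proposed 3D foveated variant extends; the channel matrix, array shapes, and names are assumptions, and the foveated weighting over eccentricity is omitted.

    ```python
    # Minimal channelized Hotelling observer detectability computation.
    # Channel choice (e.g. Gabor or Laguerre-Gauss) is left to the caller.
    import numpy as np

    def cho_dprime(signal_imgs, noise_imgs, channels):
        """signal_imgs, noise_imgs: (n, npix) flattened images; channels: (npix, k)."""
        vs = signal_imgs @ channels          # channel responses, signal-present
        vn = noise_imgs @ channels           # channel responses, signal-absent
        dv = vs.mean(0) - vn.mean(0)
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        w = np.linalg.solve(S, dv)           # Hotelling template in channel space
        ts, tn = vs @ w, vn @ w              # scalar test statistics per trial
        return (ts.mean() - tn.mean()) / np.sqrt(0.5 * (ts.var() + tn.var()))

    # usage: d = cho_dprime(Xsig, Xnoise, channel_matrix)
    ```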

  20. Is attentional prioritisation of infant faces unique in humans?: Comparative demonstrations by modified dot-probe task in monkeys.

    PubMed

    Koda, Hiroki; Sato, Anna; Kato, Akemi

    2013-09-01

    Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation to newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys using a target-detection task based on dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in subject monkeys, our results, unlike those for humans, showed no evidence of an attentional prioritisation for newborn faces by monkeys. Our demonstrations show the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. This suggests that attentional capture by newborn faces is not common to macaques, but it is unclear if nursing experiences influence their perception and recognition of infantile appraisal stimuli. We need additional comparative studies to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Learning rational temporal eye movement strategies.

    PubMed

    Hoppe, David; Rothkopf, Constantin A

    2016-07-19

    During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate with behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded actor model the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient in learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.

  2. Feedforward and recurrent processing in scene segmentation: electroencephalography and functional magnetic resonance imaging.

    PubMed

    Scholte, H Steven; Jolij, Jacob; Fahrenfort, Johannes J; Lamme, Victor A F

    2008-11-01

    In texture segregation, an example of scene segmentation, we can discern two different processes: texture boundary detection and subsequent surface segregation [Lamme, V. A. F., Rodriguez-Rodriguez, V., & Spekreijse, H. Separate processing dynamics for texture elements, boundaries and surfaces in primary visual cortex of the macaque monkey. Cerebral Cortex, 9, 406-413, 1999]. Neural correlates of texture boundary detection have been found in monkey V1 [Sillito, A. M., Grieve, K. L., Jones, H. E., Cudeiro, J., & Davis, J. Visual cortical mechanisms detecting focal orientation discontinuities. Nature, 378, 492-496, 1995; Grosof, D. H., Shapley, R. M., & Hawken, M. J. Macaque-V1 neurons can signal illusory contours. Nature, 365, 550-552, 1993], but whether surface segregation occurs in monkey V1 [Rossi, A. F., Desimone, R., & Ungerleider, L. G. Contextual modulation in primary visual cortex of macaques. Journal of Neuroscience, 21, 1698-1709, 2001; Lamme, V. A. F. The neurophysiology of figure ground segregation in primary visual-cortex. Journal of Neuroscience, 15, 1605-1615, 1995], and whether boundary detection or surface segregation signals can also be measured in human V1, is more controversial [Kastner, S., De Weerd, P., & Ungerleider, L. G. Texture segregation in the human visual cortex: A functional MRI study. Journal of Neurophysiology, 83, 2453-2457, 2000]. Here we present electroencephalography (EEG) and functional magnetic resonance imaging data that have been recorded with a paradigm that makes it possible to differentiate between boundary detection and scene segmentation in humans. In this way, we were able to show with EEG that neural correlates of texture boundary detection are first present in the early visual cortex around 92 msec and then spread toward the parietal and temporal lobes. Correlates of surface segregation first appear in temporal areas (around 112 msec) and from there appear to spread to parietal, and back to occipital areas. After 208 msec, correlates of surface segregation and boundary detection also appear in more frontal areas. Blood oxygenation level-dependent magnetic resonance imaging results show correlates of boundary detection and surface segregation in all early visual areas including V1. We conclude that texture boundaries are detected in a feedforward fashion and are represented at increasing latencies in higher visual areas. Surface segregation, on the other hand, is represented in "reverse hierarchical" fashion and seems to arise from feedback signals toward early visual areas such as V1.

  3. Privileged Detection of Conspecifics: Evidence from Inversion Effects during Continuous Flash Suppression

    ERIC Educational Resources Information Center

    Stein, Timo; Sterzer, Philipp; Peelen, Marius V.

    2012-01-01

    The rapid visual detection of other people in our environment is an important first step in social cognition. Here we provide evidence for selective sensitivity of the human visual system to upright depictions of conspecifics. In a series of seven experiments, we assessed the impact of stimulus inversion on the detection of person silhouettes,…

  4. Changes in Women’s Facial Skin Color over the Ovulatory Cycle are Not Detectable by the Human Visual System

    PubMed Central

    Burriss, Robert P.; Troscianko, Jolyon; Lovell, P. George; Fulford, Anthony J. C.; Stevens, Martin; Quigley, Rachael; Payne, Jenny; Saxton, Tamsin K.; Rowland, Hannah M.

    2015-01-01

    Human ovulation is not advertised, as it is in several primate species, by conspicuous sexual swellings. However, there is increasing evidence that the attractiveness of women’s body odor, voice, and facial appearance peak during the fertile phase of their ovulatory cycle. Cycle effects on facial attractiveness may be underpinned by changes in facial skin color, but it is not clear if skin color varies cyclically in humans or if any changes are detectable. To test these questions we photographed women daily for at least one cycle. Changes in facial skin redness and luminance were then quantified by mapping the digital images to human long, medium, and shortwave visual receptors. We find cyclic variation in skin redness, but not luminance. Redness decreases rapidly after menstrual onset, increases in the days before ovulation, and remains high through the luteal phase. However, we also show that this variation is unlikely to be detectable by the human visual system. We conclude that changes in skin color are not responsible for the effects of the ovulatory cycle on women’s attractiveness. PMID:26134671

  5. Changes in Women's Facial Skin Color over the Ovulatory Cycle are Not Detectable by the Human Visual System.

    PubMed

    Burriss, Robert P; Troscianko, Jolyon; Lovell, P George; Fulford, Anthony J C; Stevens, Martin; Quigley, Rachael; Payne, Jenny; Saxton, Tamsin K; Rowland, Hannah M

    2015-01-01

    Human ovulation is not advertised, as it is in several primate species, by conspicuous sexual swellings. However, there is increasing evidence that the attractiveness of women's body odor, voice, and facial appearance peak during the fertile phase of their ovulatory cycle. Cycle effects on facial attractiveness may be underpinned by changes in facial skin color, but it is not clear if skin color varies cyclically in humans or if any changes are detectable. To test these questions we photographed women daily for at least one cycle. Changes in facial skin redness and luminance were then quantified by mapping the digital images to human long, medium, and shortwave visual receptors. We find cyclic variation in skin redness, but not luminance. Redness decreases rapidly after menstrual onset, increases in the days before ovulation, and remains high through the luteal phase. However, we also show that this variation is unlikely to be detectable by the human visual system. We conclude that changes in skin color are not responsible for the effects of the ovulatory cycle on women's attractiveness.

  6. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    Index excerpt (no abstract available): 5.1004 Auditory detection in the presence of visual stimulation; 5.1005 Tactual detection and discrimination in the presence of accessory stimulation; 5.1006 Tactile versus auditory localization of sound; 5.1007 Spatial localization in the presence of inter…

  7. Visual Processing of Object Velocity and Acceleration

    DTIC Science & Technology

    1994-02-04

    Excerpt of cited abstracts: A failure of motion deblurring in the human visual system. Investigative Ophthalmology and Visual Science (Suppl.), 34, 1230. Watamaniuk, S.N.J., & McKee, S.P. Why is a trajectory more detectable in noise than correlated signal dots? Investigative Ophthalmology and Visual Science (Suppl.), 34, 1364.

  8. CHARACTERIZATION OF THE EFFECTS OF INHALED PERCHLOROETHYLENE ON SUSTAINED ATTENTION IN RATS PERFORMING A VISUAL SIGNAL DETECTION TASK

    EPA Science Inventory

    The aliphatic hydrocarbon perchloroethylene (PCE) has been associated with neurobehavioral dysfunction including reduced attention in humans. The current study sought to assess the effects of inhaled PCE on sustained attention in rats performing a visual signal detection task (S...

  9. The Earliest Electrophysiological Correlate of Visual Awareness?

    ERIC Educational Resources Information Center

    Koivisto, Mika; Lahteenmaki, Mikko; Sorensen, Thomas Alrik; Vangkilde, Signe; Overgaard, Morten; Revonsuo, Antti

    2008-01-01

    To examine the neural correlates and timing of human visual awareness, we recorded event-related potentials (ERPs) in two experiments while the observers were detecting a grey dot that was presented near subjective threshold. ERPs were averaged for conscious detections of the stimulus (hits) and nondetections (misses) separately. Our results…

  10. Testing visual short-term memory of pigeons (Columba livia) and a rhesus monkey (Macaca mulatta) with a location change detection task.

    PubMed

    Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A

    2013-09-01

    Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.

  11. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man

    PubMed Central

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M.; Van Opstal, A. J.

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain. PMID:29238295

  12. Audio-Visual Integration in a Redundant Target Paradigm: A Comparison between Rhesus Macaque and Man.

    PubMed

    Bremen, Peter; Massoudi, Rooholla; Van Wanrooij, Marc M; Van Opstal, A J

    2017-01-01

    The mechanisms underlying multi-sensory interactions are still poorly understood despite considerable progress made since the first neurophysiological recordings of multi-sensory neurons. While the majority of single-cell neurophysiology has been performed in anesthetized or passive-awake laboratory animals, the vast majority of behavioral data stems from studies with human subjects. Interpretation of neurophysiological data implicitly assumes that laboratory animals exhibit perceptual phenomena comparable or identical to those observed in human subjects. To explicitly test this underlying assumption, we here characterized how two rhesus macaques and four humans detect changes in intensity of auditory, visual, and audio-visual stimuli. These intensity changes consisted of a gradual envelope modulation for the sound, and a luminance step for the LED. Subjects had to detect any perceived intensity change as fast as possible. By comparing the monkeys' results with those obtained from the human subjects we found that (1) unimodal reaction times differed across modality, acoustic modulation frequency, and species, (2) the largest facilitation of reaction times with the audio-visual stimuli was observed when stimulus onset asynchronies were such that the unimodal reactions would occur at the same time (response, rather than physical synchrony), and (3) the largest audio-visual reaction-time facilitation was observed when unimodal auditory stimuli were difficult to detect, i.e., at slow unimodal reaction times. We conclude that despite marked unimodal heterogeneity, similar multisensory rules applied to both species. Single-cell neurophysiology in the rhesus macaque may therefore yield valuable insights into the mechanisms governing audio-visual integration that may be informative of the processes taking place in the human brain.

  13. Traffic Sign Detection Based on Biologically Visual Mechanism

    NASA Astrophysics Data System (ADS)

    Hu, X.; Zhu, X.; Li, D.

    2012-07-01

    TSR (Traffic sign recognition) is an important problem in ITS (intelligent traffic systems), and it is receiving more and more attention for driver-assistance systems, unmanned vehicles, etc. TSR consists of two steps, detection and recognition, and this paper describes a new traffic sign detection method. The design of traffic signs complies with the visual attention mechanism of humans, so it is reasonable to use a visual attention mechanism to detect traffic signs. In our method, the whole scene is first analyzed by a visual attention model to acquire the areas where traffic signs might be placed. These candidate areas are then analyzed according to the shape characteristics of traffic signs to detect them. In traffic sign detection experiments, the results show that the proposed method is more effective and robust than other existing saliency detection methods.
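
    A hedged sketch of the two-stage idea described above: a saliency map proposes candidate regions, which would then be screened by sign shape. The simple color-opponency saliency and thresholds below are illustrative stand-ins for the authors' attention model.

    ```python
    # Illustrative saliency-based candidate proposal; parameters are assumptions.
    import numpy as np
    from scipy.ndimage import gaussian_filter, label, find_objects

    def sign_candidates(rgb, thresh=0.5, min_area=100):
        r, g, b = [rgb[..., i].astype(float) for i in range(3)]
        intensity = (r + g + b) / 3.0 + 1e-6
        # red and blue opponency channels: traffic signs are strongly red/blue
        saliency = np.maximum(r - (g + b) / 2, b - (r + g) / 2) / intensity
        saliency = gaussian_filter(saliency, 3)
        saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-6)
        labeled, _ = label(saliency > thresh)
        boxes = [s for s in find_objects(labeled)
                 if (s[0].stop - s[0].start) * (s[1].stop - s[1].start) >= min_area]
        return boxes  # each box would then be tested for circular/triangular shape

    # usage: boxes = sign_candidates(frame)
    ```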

  14. Human visual system-based smoking event detection

    NASA Astrophysics Data System (ADS)

    Odetallah, Amjad D.; Agaian, Sos S.

    2012-06-01

    Human action analysis (e.g., smoking, eating, and phoning) is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety of urban areas, public parks, airplanes, hospitals, schools, and others. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for detecting human smoking events, or small smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events involving uncertain actions and cigarettes of various sizes, colors, and shapes.
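
    A sketch of how the per-frame cues described above (motion from background subtraction, skin color, and low-saturation smoke-like regions) might be combined; the OpenCV parameters and HSV skin range are illustrative assumptions, not the authors' settings.

    ```python
    # Illustrative per-frame cue extraction; thresholds are assumptions.
    import cv2
    import numpy as np

    bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    def smoking_cues(frame_bgr):
        motion = bg.apply(frame_bgr)                          # moving foreground mask
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough HSV skin mask
        low_sat = (hsv[..., 1] < 60).astype(np.uint8) * 255   # smoke is low-saturation
        smoke_like = cv2.bitwise_and(motion, low_sat)         # moving, grayish regions
        return motion, skin, smoke_like

    # A smoking event could then be flagged when smoke_like pixels persist near
    # a skin/face region over several consecutive frames.
    ```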

  15. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    PubMed Central

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2008-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755

  16. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are real-time (RT) anomaly detection, RT moving debris detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  17. Detection experiments with humans implicate visual predation as a driver of colour polymorphism dynamics in pygmy grasshoppers

    PubMed Central

    2013-01-01

    Background: Animal colour patterns offer good model systems for studies of biodiversity and evolution of local adaptations. An increasingly popular approach to study the role of selection for camouflage for evolutionary trajectories of animal colour patterns is to present images of prey on paper or computer screens to human ‘predators'. Yet, few attempts have been made to confirm that rates of detection by humans can predict patterns of selection and evolutionary modifications of prey colour patterns in nature. In this study, we first analyzed encounters between human ‘predators' and images of natural black, grey and striped colour morphs of the polymorphic Tetrix subulata pygmy grasshoppers presented on background images of unburnt, intermediate or completely burnt natural habitats. Next, we compared detection rates with estimates of capture probabilities and survival of free-ranging grasshoppers, and with estimates of relative morph frequencies in natural populations. Results: The proportion of grasshoppers that were detected and time to detection depended on both the colour pattern of the prey and on the type of visual background. Grasshoppers were detected more often and faster on unburnt backgrounds than on 50% and 100% burnt backgrounds. Striped prey were detected less often than grey or black prey on unburnt backgrounds; grey prey were detected more often than black or striped prey on 50% burnt backgrounds; and black prey were detected less often than grey prey on 100% burnt backgrounds. Rates of detection mirrored previously reported rates of capture by humans of free-ranging grasshoppers, as well as morph specific survival in the wild. Rates of detection were also correlated with frequencies of striped, black and grey morphs in samples of T. subulata from natural populations that occupied the three habitat types used for the detection experiment. Conclusions: Our findings demonstrate that crypsis is background-dependent, and implicate visual predation as an important driver of evolutionary modifications of colour polymorphism in pygmy grasshoppers. Our study provides the clearest evidence to date that using humans as ‘predators' in detection experiments may provide reliable information on the protective values of prey colour patterns and of natural selection and microevolution of camouflage in the wild. PMID:23639215

  18. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    PubMed

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. The Comparison of Visual Working Memory Representations with Perceptual Inputs

    ERIC Educational Resources Information Center

    Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew; Luck, Steven J.

    2009-01-01

    The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. In this study, the authors tested the hypothesis that differences between the memory of a stimulus array and the perception of a…

  20. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.
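
    A compact sketch of such a front end under simplifying assumptions: a bank of Gabor sensors varying in frequency and orientation whose peak responses feed a max-rule decision, standing in for the ideal Bayesian classifier under uncertainty; the frequency set and bandwidth are assumptions, and the eccentricity-dependent sensor scaling is omitted.

    ```python
    # Illustrative linear-sensor front end with a max-rule detection stage.
    import numpy as np
    from scipy.signal import fftconvolve
    from skimage.filters import gabor_kernel

    def sensor_responses(image, freqs=(2, 4, 8, 16), n_orient=4):
        """Returns the peak absolute response of each sensor to the image."""
        img = image.astype(float) - image.mean()
        responses = []
        for f in freqs:                       # roughly cycles per image width
            for i in range(n_orient):
                k = gabor_kernel(frequency=f / image.shape[1],
                                 theta=np.pi * i / n_orient, bandwidth=1.0)
                r = fftconvolve(img, np.real(k), mode="same")
                responses.append(np.abs(r).max())
        return np.array(responses)

    # detection: respond "target present" if the maximum sensor response exceeds
    # a criterion estimated from target-absent trials.
    ```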

  1. Decoding Reveals Plasticity in V3A as a Result of Motion Perceptual Learning

    PubMed Central

    Shibata, Kazuhisa; Chang, Li-Hung; Kim, Dongho; Náñez, José E.; Kamitani, Yukiyasu; Watanabe, Takeo; Sasaki, Yuka

    2012-01-01

    Visual perceptual learning (VPL) is defined as visual performance improvement after visual experiences. VPL is often highly specific for a visual feature presented during training. Such specificity is observed in behavioral tuning function changes with the highest improvement centered on the trained feature and was originally thought to be evidence for changes in the early visual system associated with VPL. However, results of neurophysiological studies have been highly controversial concerning whether the plasticity underlying VPL occurs within the visual cortex. The controversy may be partially due to the lack of observation of neural tuning function changes in multiple visual areas in association with VPL. Here using human subjects we systematically compared behavioral tuning function changes after global motion detection training with decoded tuning function changes for 8 visual areas using pattern classification analysis on functional magnetic resonance imaging (fMRI) signals. We found that the behavioral tuning function changes were extremely highly correlated to decoded tuning function changes only in V3A, which is known to be highly responsive to global motion with human subjects. We conclude that VPL of a global motion detection task involves plasticity in a specific visual cortical area. PMID:22952849

  2. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations required for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
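
    The heuristic itself is simple enough to state in a few lines of code. The sketch below assumes a discrete grid over the visual field, a posterior map over target location, and a retinotopic d' map; it implements the described rule (fixate the maximum of the posterior convolved with the squared detectability map) with illustrative toy inputs, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def next_fixation(posterior, detectability_map):
    """ELM-style heuristic: fixate the maximum of the posterior filtered by the
    squared retinotopic detectability map (d'^2), as described in the abstract."""
    dprime_sq = detectability_map ** 2
    expected_info = fftconvolve(posterior, dprime_sq, mode="same")
    return np.unravel_index(np.argmax(expected_info), posterior.shape)

# illustrative use with a toy 64x64 posterior and an assumed foveated d' map
grid = np.indices((64, 64))
ecc = np.hypot(grid[0] - 31.5, grid[1] - 31.5)
dmap = 3.0 * np.exp(-ecc / 10.0)            # d' falls off with eccentricity (assumed shape)
posterior = np.full((64, 64), 1.0 / 4096)   # flat prior over target location
print(next_fixation(posterior, dmap))
```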

  3. Infrared dim and small target detecting and tracking method inspired by Human Visual System

    NASA Astrophysics Data System (ADS)

    Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian

    2014-01-01

    Detecting and tracking dim, small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and precise infrared-imaging guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track such targets. In general, the HVS involves at least three mechanisms: contrast sensitivity, visual attention, and eye movement. However, most existing algorithms simulate only one of these mechanisms, which limits their performance. A novel method that combines all three HVS mechanisms is proposed in this paper. First, a group of Difference-of-Gaussians (DoG) filters, which simulate the contrast mechanism, is used to filter the input image. Second, visual attention, simulated by a Gaussian window, is applied at a point near the target (the attention point) to further enhance the dim, small target. Finally, a Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point in the next frame, simulating human eye movement. Experimental results on infrared images with different types of background demonstrate the efficiency and accuracy of the proposed method for detecting and tracking dim, small targets.
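
    A minimal sketch of the three-mechanism pipeline described above is given below, assuming a grayscale float image per frame; the DoG scales, attention-window width, and PID gains are illustrative placeholders rather than the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_enhance(frame, sigmas=((1, 2), (2, 4), (3, 6))):
    """Contrast mechanism: a small bank of Difference-of-Gaussians filters; keep the max response."""
    responses = [gaussian_filter(frame, s1) - gaussian_filter(frame, s2) for s1, s2 in sigmas]
    return np.maximum.reduce(responses)

def attend(saliency, attention_point, sigma=15.0):
    """Visual attention: weight the DoG map with a Gaussian window centred on the attention point,
    then return the location of the strongest weighted response."""
    y, x = np.indices(saliency.shape)
    window = np.exp(-((y - attention_point[0])**2 + (x - attention_point[1])**2) / (2 * sigma**2))
    return np.unravel_index(np.argmax(saliency * window), saliency.shape)

class PIDPredictor:
    """Eye-movement mechanism: a PID update that predicts the next frame's attention point
    from the error between the previous prediction and the detected target position."""
    def __init__(self, kp=0.6, ki=0.05, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = np.zeros(2)
        self.prev_error = np.zeros(2)
        self.prediction = None

    def update(self, detected):
        detected = np.asarray(detected, dtype=float)
        if self.prediction is None:
            self.prediction = detected
            return tuple(detected.astype(int))
        error = detected - self.prediction
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.prediction = detected + self.kp * error + self.ki * self.integral + self.kd * derivative
        return tuple(self.prediction.astype(int))

# per-frame wiring (sketch): saliency = dog_enhance(frame); target = attend(saliency, last_point);
# last_point = predictor.update(target)
```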

  4. Visual mismatch negativity indicates automatic, task-independent detection of artistic image composition in abstract artworks.

    PubMed

    Menzel, Claudia; Kovács, Gyula; Amado, Catarina; Hayn-Leichsenring, Gregor U; Redies, Christoph

    2018-05-06

    In complex abstract art, image composition (i.e., the artist's deliberate arrangement of pictorial elements) is an important aesthetic feature. We investigated whether the human brain detects image composition in abstract artworks automatically (i.e., independently of the experimental task). To this aim, we studied whether a group of 20 original artworks elicited a visual mismatch negativity when contrasted with a group of 20 images that were composed of the same pictorial elements as the originals, but in shuffled arrangements, which destroy artistic composition. We used a passive oddball paradigm with parallel electroencephalogram recordings to investigate the detection of image type-specific properties. We observed significant deviant-standard differences for the shuffled and original images, respectively. Furthermore, for both types of images, differences in amplitudes correlated with the behavioral ratings of the images. In conclusion, we show that the human brain can detect composition-related image properties in visual artworks in an automatic fashion. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Bioelectronic nose and its application to smell visualization.

    PubMed

    Ko, Hwi Jin; Park, Tai Hyun

    2016-01-01

    There have been many attempts to visualize smell using various techniques in order to express odors objectively, because information obtained from the human sense of smell is highly subjective. So far, well-trained experts such as perfumers, complex and large-scale equipment such as GC-MS, and electronic noses have played major roles in objectively detecting and recognizing odors. Recently, an optoelectronic nose was developed for this purpose, but limitations regarding sensitivity and the number of smells that can be visualized persist. Since the elucidation of the olfactory mechanism, much research has been devoted to developing sensing devices that mimic the human olfactory system. Engineered olfactory cells were constructed to mimic the human olfactory system, and their use for smell visualization has been attempted with various methods such as calcium imaging, CRE reporter assays, BRET, and membrane potential assays; however, it is difficult to control the condition of the cells consistently, and low odorant concentrations cannot be detected. More recently, the bioelectronic nose was developed and has improved considerably along with advances in nano-biotechnology. The bioelectronic nose consists of two parts: a primary transducer and a secondary transducer. Biological materials as the primary transducer improve the selectivity of the sensor, and nanomaterials as the secondary transducer increase its sensitivity. In particular, bioelectronic noses that combine nanomaterials with human olfactory receptors, or with nanovesicles derived from engineered olfactory cells, could in principle detect almost all of the smells recognized by humans, because an engineered olfactory cell can express any human olfactory receptor and thus mimic the human olfactory system. The bioelectronic nose will therefore be a potent tool for smell visualization, provided two technologies are completed. First, a multi-channel array-sensing system must be applied to integrate all of the olfactory receptors into a single chip that mimics the performance of the human nose. Second, signal-processing techniques for the multi-channel system must be established together with the conversion of those signals into visual images. With this latest sensing technology, the realization of a practical smell-visualization technology is expected in the near future.

  6. Auditory Detection of the Human Brainstem Auditory Evoked Response.

    ERIC Educational Resources Information Center

    Kidd, Gerald, Jr.; And Others

    1993-01-01

    This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…

  7. Neural Dynamics Underlying Target Detection in the Human Brain

    PubMed Central

    Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.

    2014-01-01

    Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944

  8. Bionic Vision-Based Intelligent Power Line Inspection System

    PubMed Central

    Ma, Yunpeng; He, Feijia; Xu, Jinxin

    2017-01-01

    Detecting the threats that external obstacles pose to power lines can ensure the stability of the power system. Inspired by the attention mechanism and binocular vision of the human visual system, an intelligent power line inspection system is presented in this paper. The visual attention mechanism is used to detect and track power lines in image sequences according to their shape information, and a binocular vision model is used to calculate the 3D coordinates of obstacles and power lines. To improve the real-time performance and accuracy of the system, we propose a new matching strategy based on the traditional SURF algorithm. The experimental results show that the system automatically and accurately locates obstacles around power lines, works in complex backgrounds, and produced no missed detections under the conditions tested. PMID:28203269
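
    The binocular stage reduces, in the simplest rectified-camera case, to standard stereo triangulation. The sketch below is that textbook computation (Z = f·B/d), not the paper's exact pipeline; the focal length, baseline, and principal point values are illustrative assumptions.

```python
import numpy as np

def stereo_to_3d(u_left, v, disparity, focal_px, baseline_m, cx, cy):
    """Rectified-stereo triangulation: depth Z = f * B / d, then back-project the
    pixel into camera coordinates (X, Y, Z in metres)."""
    d = np.asarray(disparity, dtype=float)
    Z = focal_px * baseline_m / d                 # assumes positive disparity
    X = (np.asarray(u_left) - cx) * Z / focal_px
    Y = (np.asarray(v) - cy) * Z / focal_px
    return np.stack([X, Y, Z], axis=-1)

# illustrative: a matched point at pixel (620, 380) with 24 px disparity,
# an assumed ~1667 px focal length and a 0.2 m baseline
print(stereo_to_3d(620, 380, 24, focal_px=1667.0, baseline_m=0.2, cx=640, cy=360))
```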

  9. Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location

    PubMed Central

    Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene

    2017-01-01

    Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005

  10. Blindsight and Unconscious Vision: What They Teach Us about the Human Visual System

    PubMed Central

    Ajina, Sara; Bridge, Holly

    2017-01-01

    Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system. PMID:27777337

  11. Qualitative similarities in the visual short-term memory of pigeons and people.

    PubMed

    Gibson, Brett; Wasserman, Edward; Luck, Steven J

    2011-10-01

    Visual short-term memory plays a key role in guiding behavior, and individual differences in visual short-term memory capacity are strongly predictive of higher cognitive abilities. To provide a broader evolutionary context for understanding this memory system, we directly compared the behavior of pigeons and humans on a change detection task. Although pigeons had a lower storage capacity and a higher lapse rate than humans, both species stored multiple items in short-term memory and conformed to the same basic performance model. Thus, despite their very different evolutionary histories and neural architectures, pigeons and humans have functionally similar visual short-term memory systems, suggesting that the functional properties of visual short-term memory are subject to similar selective pressures across these distant species.

  12. The "social" and "interpersonal" body in spatial cognition. The role of agency and interagency.

    PubMed

    Crivelli, Davide; Balconi, Michela

    2015-09-01

    In order to interact effectively, we need to represent our actions as produced by human beings. According to direct-access theories, the first steps of visual information processing offer an informed, direct grasp of the situation, especially when social and interpersonal components are involved. Detection of biological systems may be the gateway to such processes and may influence initial stages of perception, fostering adaptive social behaviour. To investigate early neural correlates of human agency detection in ecological situations of high or low social impact, we compared scenes showing a human versus an artificial agent interacting with a human agent. Twenty volunteers participated in the study. They were asked to observe dynamic visual stimuli showing realistic interactions while event-related potentials (ERPs) were recorded. Each stimulus depicted an arm executing a gesture addressed to a human agent. Visual features of the arm were manipulated: in half of the trials it was real; in the other trials it was deprived of some details and transformed into a statue-like arm. EEG morphological analysis revealed an early negative deflection peaking at about 155 ms. Peak amplitude data were analysed with repeated-measures ANOVAs. The peak was larger in the left inferior posterior region when the gesturing arm was human. This early negative deflection, the N150, which differed between the human and artificial conditions, is presumably associated with human agency detection in highly interpersonal contexts.

  13. Camouflage and visual perception

    PubMed Central

    Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt

    2008-01-01

    How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671

  14. Detectable Warnings : Detectability by Individuals with Visual Impairments, and Safety and Negotiability on Slopes for Persons with Physical Impairment

    DOT National Transportation Integrated Search

    1994-09-01

    This report presents the results of research on human performance on detectable warning surfaces. The first portion of the report presents an evaluation of the underfoot detectability of nine detectable warning surfaces for persons having varied phys...

  15. Manipulation of Pre-Target Activity on the Right Frontal Eye Field Enhances Conscious Visual Perception in Humans

    PubMed Central

    Chanes, Lorena; Chica, Ana B.; Quentin, Romain; Valero-Cabré, Antoni

    2012-01-01

    The right Frontal Eye Field (FEF) is a region of the human brain, which has been consistently involved in visuo-spatial attention and access to consciousness. Nonetheless, the extent of this cortical site’s ability to influence specific aspects of visual performance remains debated. We hereby manipulated pre-target activity on the right FEF and explored its influence on the detection and categorization of low-contrast near-threshold visual stimuli. Our data show that pre-target frontal neurostimulation has the potential when used alone to induce enhancements of conscious visual detection. More interestingly, when FEF stimulation was combined with visuo-spatial cues, improvements remained present only for trials in which the cue correctly predicted the location of the subsequent target. Our data provide evidence for the causal role of the right FEF pre-target activity in the modulation of human conscious vision and reveal the dependence of such neurostimulatory effects on the state of activity set up by cue validity in the dorsal attentional orienting network. PMID:22615759

  16. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
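
    For reference, the accuracy of a max-rule signal-detection observer in an M-alternative localization task has a standard one-dimensional integral form, which the sketch below evaluates numerically. It illustrates the style of analytic accuracy prediction the abstract refers to, under the assumption of equal-variance Gaussian responses; it is not the authors' specific three-parameter guided-search formulation.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def localization_accuracy(d_prime, n_locations):
    """Max-rule (M-alternative) localization accuracy for equal-variance Gaussian responses:
    P(correct) = integral of phi(x - d') * Phi(x)**(M - 1) dx."""
    x = np.linspace(-10.0, 10.0, 4001)
    integrand = norm.pdf(x - d_prime) * norm.cdf(x) ** (n_locations - 1)
    return trapezoid(integrand, x)

# accuracy falls with the number of possible target locations at fixed d'
for m in (2, 4, 8, 16):
    print(m, round(localization_accuracy(1.5, m), 3))
```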

  17. Dangerous animals capture and maintain attention in humans.

    PubMed

    Yorzinski, Jessica L; Penkunas, Michael J; Platt, Michael L; Coss, Richard G

    2014-05-28

    Predation is a major source of natural selection on primates and may have shaped attentional processes that allow primates to rapidly detect dangerous animals. Because ancestral humans were subjected to predation, a process that continues at very low frequencies, we examined the visual processes by which men and women detect dangerous animals (snakes and lions). We recorded the eye movements of participants as they detected images of a dangerous animal (target) among arrays of nondangerous animals (distractors) as well as detected images of a nondangerous animal (target) among arrays of dangerous animals (distractors). We found that participants were quicker to locate targets when the targets were dangerous animals compared with nondangerous animals, even when spatial frequency and luminance were controlled. The participants were slower to locate nondangerous targets because they spent more time looking at dangerous distractors, a process known as delayed disengagement, and looked at a larger number of dangerous distractors. These results indicate that dangerous animals capture and maintain attention in humans, suggesting that historical predation has shaped some facets of visual orienting and its underlying neural architecture in modern humans.

  18. Development of a Guide-Dog Robot: Leading and Recognizing a Visually-Handicapped Person using a LRF

    NASA Astrophysics Data System (ADS)

    Saegusa, Shozo; Yasuda, Yuya; Uratani, Yoshitaka; Tanaka, Eiichirou; Makino, Toshiaki; Chang, Jen-Yuan (James)

    A conceptual Guide-Dog Robot prototype to lead and recognize a visually-handicapped person is developed and discussed in this paper. Key design features of the robot include a movable platform, a human-machine interface, and the ability to avoid obstacles. A novel algorithm enabling the robot to recognize its follower's locomotion as well as to detect the center of the corridor is proposed and implemented in the robot's human-machine interface. It is demonstrated that, using the proposed leading and detection algorithm along with a rapid-scanning laser range finder (LRF) sensor, the robot can successfully and effectively lead a person walking along a corridor without running into obstacles such as trash boxes or adjacent pedestrians. The position and trajectory of the robot leading a person through a common corridor environment are measured by an independent LRF observer. The measured data suggest that the proposed algorithms enable the robot to detect the center of the corridor and the position of its follower correctly.

  19. Embodiments of Human Identity: Detecting and Interpreting Hidden Narratives in Twentieth-Century Design History.

    ERIC Educational Resources Information Center

    Williamson, Jack

    1995-01-01

    Argues that the practice and influence of design history can benefit from new forms of visual and chronological analysis. Identifies and discusses a unique phenomenon, the "historical visual narrative." Examines special instances of this phenomenon in twentieth-century design and visual culture, which are tied to the theme of the…

  20. Spatial Probability Dynamically Modulates Visual Target Detection in Chickens

    PubMed Central

    Sridharan, Devarajan; Ramamurthy, Deepa L.; Knudsen, Eric I.

    2013-01-01

    The natural world contains a rich and ever-changing landscape of sensory information. To survive, an organism must be able to flexibly and rapidly locate the most relevant sources of information at any time. Humans and non-human primates exploit regularities in the spatial distribution of relevant stimuli (targets) to improve detection at locations of high target probability. Is the ability to flexibly modify behavior based on visual experience unique to primates? Chickens (Gallus domesticus) were trained on a multiple alternative Go/NoGo task to detect a small, briefly-flashed dot (target) in each of the quadrants of the visual field. When targets were presented with equal probability (25%) in each quadrant, chickens exhibited a distinct advantage for detecting targets at lower, relative to upper, hemifield locations. Increasing the probability of presentation in the upper hemifield locations (to 80%) dramatically improved detection performance at these locations to be on par with lower hemifield performance. Finally, detection performance in the upper hemifield changed on a rapid timescale, improving with successive target detections, and declining with successive detections at the diagonally opposite location in the lower hemifield. These data indicate the action of a process that in chickens, as in primates, flexibly and dynamically modulates detection performance based on the spatial probabilities of sensory stimuli as well as on recent performance history. PMID:23734188

  1. Man-systems evaluation of moving base vehicle simulation motion cues. [human acceleration perception involving visual feedback

    NASA Technical Reports Server (NTRS)

    Kirkpatrick, M.; Brye, R. G.

    1974-01-01

    A motion cue investigation program is reported that deals with human factors aspects of high-fidelity vehicle simulation. General data on non-visual motion thresholds and specific threshold values are established for use as washout parameters in vehicle simulation. A general-purpose simulator is used to test the contradictory-cue hypothesis that acceleration sensitivity is reduced during a vehicle control task involving visual feedback. The simulator provides varying acceleration levels. The method of forced choice is based on the theory of signal detectability.

  2. Transcranial Random Noise Stimulation of Visual Cortex: Stochastic Resonance Enhances Central Mechanisms of Perception.

    PubMed

    van der Groen, Onno; Wenderoth, Nicole

    2016-05-11

    Random noise enhances the detectability of weak signals in nonlinear systems, a phenomenon known as stochastic resonance (SR). Though counterintuitive at first, SR has been demonstrated in a variety of naturally occurring processes, including human perception, where it has been shown that adding noise directly to weak visual, tactile, or auditory stimuli enhances detection performance. These results indicate that random noise can push subthreshold receptor potentials across the transfer threshold, causing action potentials in an otherwise silent afference. Despite the wealth of evidence demonstrating SR for noise added to a stimulus, relatively few studies have explored whether or not noise added directly to cortical networks enhances sensory detection. Here we administered transcranial random noise stimulation (tRNS; 100-640 Hz zero-mean Gaussian white noise) to the occipital region of human participants. For increasing tRNS intensities (ranging from 0 to 1.5 mA), the detection accuracy of a visual stimulus changed according to an inverted-U-shaped function, typical of the SR phenomenon. When the optimal level of noise was added to visual cortex, detection performance improved significantly relative to a zero-noise condition (9.7 ± 4.6%) and to a similar extent as optimal noise added to the visual stimuli (11.2 ± 4.7%). Our results demonstrate that adding noise to cortical networks can improve human behavior and that tRNS is an appropriate tool to exploit this mechanism. Our findings suggest that neural processing at the network level exhibits nonlinear system properties that are sensitive to the stochastic resonance phenomenon and highlight the usefulness of tRNS as a tool to modulate human behavior. Since tRNS can be applied to all cortical areas, exploiting the SR phenomenon is not restricted to the perceptual domain, but can be used for other functions that depend on nonlinear neural dynamics (e.g., decision making, task switching, response inhibition, and many other processes). This will open new avenues for using tRNS to investigate brain function and enhance the behavior of healthy individuals or patients. Copyright © 2016 the authors.
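
    The inverted-U signature of stochastic resonance can be reproduced with a toy threshold detector, as in the hedged sketch below: a fixed subthreshold signal plus zero-mean Gaussian noise is "detected" whenever it crosses a hard threshold, and the hit rate net of noise-only crossings peaks at an intermediate noise level. The parameters are arbitrary, and the model is purely illustrative of the SR phenomenon, not of tRNS itself.

```python
import numpy as np

def detection_rate(signal_amplitude, noise_sd, threshold=1.0, n_trials=20000, rng=None):
    """Toy threshold detector: an input is 'detected' on a trial when
    signal + zero-mean Gaussian noise crosses the transfer threshold."""
    rng = rng or np.random.default_rng(0)
    samples = signal_amplitude + rng.normal(0.0, noise_sd, n_trials)
    return np.mean(samples > threshold)

# sweep the noise level for a subthreshold signal (0.8 < threshold 1.0):
# hit - false alarm rises, peaks, then falls -- the inverted-U typical of SR
for sd in (0.0, 0.1, 0.3, 0.6, 1.0, 2.0):
    hit = detection_rate(0.8, sd)
    fa = detection_rate(0.0, sd)           # "noise-only" trials
    print(f"noise sd={sd:.1f}: hit={hit:.2f}  false alarm={fa:.2f}  hit-fa={hit - fa:.2f}")
```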

  3. Perceptual Learning Selectively Refines Orientation Representations in Early Visual Cortex

    PubMed Central

    Jehee, Janneke F.M.; Ling, Sam; Swisher, Jascha D.; van Bergen, Ruben S.; Tong, Frank

    2013-01-01

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily one-hour training sessions. Training on average led to a two-fold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1–V4) using signal detection measures, both pre- and post-training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2–V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information. PMID:23175828

  4. Perceptual learning selectively refines orientation representations in early visual cortex.

    PubMed

    Jehee, Janneke F M; Ling, Sam; Swisher, Jascha D; van Bergen, Ruben S; Tong, Frank

    2012-11-21

    Although practice has long been known to improve perceptual performance, the neural basis of this improvement in humans remains unclear. Using fMRI in conjunction with a novel signal detection-based analysis, we show that extensive practice selectively enhances the neural representation of trained orientations in the human visual cortex. Twelve observers practiced discriminating small changes in the orientation of a laterally presented grating over 20 or more daily 1 h training sessions. Training on average led to a twofold improvement in discrimination sensitivity, specific to the trained orientation and the trained location, with minimal improvement found for untrained orthogonal orientations or for orientations presented in the untrained hemifield. We measured the strength of orientation-selective responses in individual voxels in early visual areas (V1-V4) using signal detection measures, both before and after training. Although the overall amplitude of the BOLD response was no greater after training, practice nonetheless specifically enhanced the neural representation of the trained orientation at the trained location. This training-specific enhancement of orientation-selective responses was observed in the primary visual cortex (V1) as well as higher extrastriate visual areas V2-V4, and moreover, reliably predicted individual differences in the behavioral effects of perceptual learning. These results demonstrate that extensive training can lead to targeted functional reorganization of the human visual cortex, refining the cortical representation of behaviorally relevant information.

  5. Late maturation of visual spatial integration in humans

    PubMed Central

    Kovács, Ilona; Kozma, Petra; Fehér, Ákos; Benedek, György

    1999-01-01

    Visual development is thought to be completed at an early age. We suggest that the maturation of the visual brain is not homogeneous: functions with greater need for early availability, such as visuomotor control, mature earlier, and the development of other visual functions may extend well into childhood. We found significant improvement in children between 5 and 14 years in visual spatial integration by using a contour-detection task. The data show that long-range spatial interactions—subserving the integration of orientational information across the visual field—span a shorter spatial range in children than in adults. Performance in the task improves in a cue-specific manner with practice, which indicates the participation of fairly low-level perceptual mechanisms. We interpret our findings in terms of a protracted development of ventral visual-stream function in humans. PMID:10518600

  6. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    PubMed

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
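
    The abstract does not spell out the surprise computation, but the measure used in this line of work (Itti and Baldi's Bayesian surprise) is the KL divergence between posterior and prior beliefs about local feature statistics. The sketch below shows that computation for a single Gaussian belief channel; the observation noise, prior, and toy luminance stream are illustrative, and the published model applies such updates across many feature channels of a saliency architecture.

```python
import numpy as np

def gaussian_surprise(prior_mean, prior_var, observation, obs_var):
    """Bayesian surprise for one Gaussian belief channel: update the belief with the new
    observation and return the KL divergence between posterior and prior (in nats)."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + observation / obs_var)
    kl = 0.5 * (np.log(prior_var / post_var)
                + (post_var + (post_mean - prior_mean) ** 2) / prior_var - 1.0)
    return post_mean, post_var, kl

# a stream of similar frames followed by an abrupt change produces a spike in surprise
belief = (0.0, 1.0)
for luminance in (0.10, 0.12, 0.09, 0.11, 0.90, 0.88):
    m, v, s = gaussian_surprise(*belief, luminance, obs_var=0.05)
    belief = (m, v)
    print(f"obs={luminance:.2f}  surprise={s:.3f}")
```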

  7. Carbon nanotube-based labels for highly sensitive colorimetric and aggregation-based visual detection of nucleic acids

    NASA Astrophysics Data System (ADS)

    Lee, Ai Cheng; Ye, Jian-Shan; Ngin Tan, Swee; Poenar, Daniel P.; Sheu, Fwu-Shan; Kiat Heng, Chew; Meng Lim, Tit

    2007-11-01

    A novel carbon nanotube (CNT)-derived label capable of dramatic signal amplification for nucleic acid detection and direct visual detection of target hybridization has been developed. Highly sensitive colorimetric detection of human acute lymphocytic leukemia (ALL)-related oncogene sequences, amplified by the novel CNT-based label, was demonstrated. Atomic force microscope (AFM) images confirmed that a monolayer of horseradish peroxidase and detection probe molecules was immobilized along the carboxylated CNT carrier. The resulting CNT labels significantly enhanced the nucleic acid assay sensitivity by at least 1000 times compared to that of conventional labels used in enzyme-linked oligosorbent assay (ELOSA). An excellent detection limit of 1 × 10⁻¹² M (60 × 10⁻¹⁸ mol in 60 µl) and a dynamic range of target concentration spanning four orders of magnitude were achieved. Hybridizations using these labels were coupled to a concentration-dependent formation of visible dark aggregates. Targets can thus be detected simply by visual inspection, eliminating the need for expensive and sophisticated detection systems. The approach holds promise for ultrasensitive, low-cost visual inspection and colorimetric nucleic acid detection in point-of-care and early disease diagnostic applications.

  8. Complete scanpaths analysis toolbox.

    PubMed

    Augustyniak, Piotr; Mikrut, Zbigniew

    2006-01-01

    This paper presents a complete open software environment for the control, data processing, and assessment of visual experiments. Visual experiments are widely used in research on the physiology of human perception, and the results are applicable to various visual-information-based man-machine interfaces, human-emulated automatic visual systems, and scanpath-based learning of perceptual habits. The toolbox is designed for the Matlab platform and supports an infrared reflection-based eyetracker in calibration and scanpath-analysis modes. Toolbox procedures are organized in three layers: the lower one, communicating with the eyetracker output file; the middle one, detecting scanpath events on a physiological background; and the upper one, consisting of experiment schedule scripts, statistics, and summaries. Several examples of visual experiments carried out with the presented toolbox complete the paper.
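
    The middle, event-detection layer of such a toolbox typically reduces raw gaze samples to fixations and saccades. The sketch below shows one common approach, velocity-threshold (I-VT) fixation detection, assuming NumPy arrays of gaze coordinates in degrees and timestamps in seconds; it is a generic illustration, not necessarily the algorithm used in the presented toolbox, and the 30 deg/s and 80 ms parameters are conventional placeholders.

```python
import numpy as np

def detect_fixations(x_deg, y_deg, t_s, velocity_threshold=30.0, min_duration=0.08):
    """Velocity-threshold (I-VT) event detection: samples whose angular velocity stays
    below the threshold are grouped into fixations lasting at least min_duration seconds.
    Returns (start_time, end_time, mean_x, mean_y) tuples."""
    vx = np.gradient(x_deg, t_s)
    vy = np.gradient(y_deg, t_s)
    slow = np.hypot(vx, vy) < velocity_threshold
    fixations, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if t_s[i - 1] - t_s[start] >= min_duration:
                fixations.append((t_s[start], t_s[i - 1],
                                  x_deg[start:i].mean(), y_deg[start:i].mean()))
            start = None
    if start is not None and t_s[-1] - t_s[start] >= min_duration:
        fixations.append((t_s[start], t_s[-1], x_deg[start:].mean(), y_deg[start:].mean()))
    return fixations
```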

  9. Perceptual learning increases the strength of the earliest signals in visual cortex.

    PubMed

    Bao, Min; Yang, Lin; Rios, Cristina; He, Bin; Engel, Stephen A

    2010-11-10

    Training improves performance on most visual tasks. Such perceptual learning can modify how information is read out from, and represented in, later visual areas, but effects on early visual cortex are controversial. In particular, it remains unknown whether learning can reshape neural response properties in early visual areas independent from feedback arising in later cortical areas. Here, we tested whether learning can modify feedforward signals in early visual cortex as measured by the human electroencephalogram. Fourteen subjects were trained for >24 d to detect a diagonal grating pattern in one quadrant of the visual field. Training improved performance, reducing the contrast needed for reliable detection, and also reliably increased the amplitude of the earliest component of the visual evoked potential, the C1. Control orientations and locations showed smaller effects of training. Because the C1 arises rapidly and has a source in early visual cortex, our results suggest that learning can increase early visual area response through local receptive field changes without feedback from later areas.

  10. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although subjective assessment by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. It is therefore of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics is computed and combined into a single feature vector that represents the stereoscopic image in terms of visual comfort. In the second stage, this feature vector is mapped to a single visual comfort score using a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
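
    A hedged sketch of the two-stage pipeline follows: saliency-weighted disparity statistics as the comfort-aware features, and a random forest mapping the feature vector to a comfort score. The specific statistics, the toy training data, and the hyperparameters are illustrative; only the overall structure mirrors the description above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def saliency_weighted_disparity_features(disparity, saliency):
    """Collapse a disparity map into a small feature vector, weighting each pixel by
    its visual saliency (the feature set here is illustrative, not the paper's exact one)."""
    w = (saliency / (saliency.sum() + 1e-12)).ravel()
    d = disparity.ravel()
    mean = np.sum(w * d)
    std = np.sqrt(np.sum(w * (d - mean) ** 2))
    order = np.argsort(d)                       # weighted percentiles via the weighted CDF
    cdf = np.cumsum(w[order])
    p5, p95 = np.interp([0.05, 0.95], cdf, d[order])
    return np.array([mean, std, p5, p95, p95 - p5])

# second stage: map feature vectors to subjective comfort scores (toy data shown)
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                    # stand-in feature vectors
y = rng.uniform(1, 5, size=40)                  # stand-in mean opinion scores
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```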

  11. A signal detection model predicts the effects of set size on visual search accuracy for feature, conjunction, triple conjunction, and disjunction displays

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.

    2000-01-01

    Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
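
    The decision rule described above is straightforward to simulate. The sketch below is a simplified Monte-Carlo version in which every distractor differs from the target along every relevant dimension; the published model additionally handles conjunction displays in which distractors share features with the target by assigning each distractor its own mean vector, which this toy omits.

```python
import numpy as np

def search_accuracy(set_size, d_primes, n_trials=50000, rng=None):
    """Max-rule model: each display element produces a noisy value per relevant feature
    dimension, values are summed across dimensions, and the observer chooses the element
    with the maximum combined value. Element 0 is the target and receives a mean offset
    of d' on each relevant dimension."""
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(size=(n_trials, set_size, len(d_primes)))
    noise[:, 0, :] += np.asarray(d_primes)
    decision_var = noise.sum(axis=2)              # combine across feature dimensions
    return np.mean(decision_var.argmax(axis=1) == 0)

# accuracy declines with set size; one strong dimension vs. two weaker dimensions
for n in (4, 8, 16):
    print(n, round(search_accuracy(n, [2.0]), 3), round(search_accuracy(n, [1.0, 1.0]), 3))
```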

  12. Rapid Detection of Visually Provocative Animals by Preschool Children and Adults

    ERIC Educational Resources Information Center

    Penkunas, Michael J.; Coss, Richard G.

    2013-01-01

    The ability to detect dangerous animals rapidly in complex landscapes has been historically important during human evolution. Previous research has shown that snake images are more readily detected than images of benign animals. To provide a stringent test of superior snake detection in preschool children and adults, Experiment 1 consisted of two…

  13. Model Cortical Association Fields Account for the Time Course and Dependence on Target Complexity of Human Contour Perception

    PubMed Central

    Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.

    2011-01-01

    Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562

  14. Eye Detection and Tracking for Intelligent Human Computer Interaction

    DTIC Science & Technology

    2006-02-01

    P. Meer and I. Weiss, "Smoothed Differentiation Filters for Images," Journal of Visual Communication and Image Representation, 3(1):58-72, 1992.

  15. The Influence of Texture Symmetry in Marker Pointing: Experimenting with Humans and Algorithms

    NASA Astrophysics Data System (ADS)

    Cardaci, M.; Tabacchi, M. E.

    2012-12-01

    Symmetry plays a fundamental role in helping the visual system organize environmental stimuli and detect visual patterns of natural and artificial objects. Various kinds of symmetry exist, and we discuss how the internal symmetry of textures influences the choice of direction in visual tasks. Two experiments are presented: the first, with human subjects, deals with the effect of textures on preferences for a pointing direction. The second emulates the performance obtained in the first through the use of an algorithm based on a physical metaphor. Results from both experiments are presented and discussed.

  16. Neurotechnology for intelligence analysts

    NASA Astrophysics Data System (ADS)

    Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.

    2006-05-01

    Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. [1] described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.

  17. Ultrasensitive Visual Detection of HIV DNA Biomarkers via a Multi-amplification Nanoplatform.

    PubMed

    Long, Yuyin; Zhou, Cuisong; Wang, Congmin; Cai, Honglian; Yin, Cuiyun; Yang, Qiufang; Xiao, Dan

    2016-04-01

    Methodologies to detect disease biomarkers at ultralow concentrations can potentially improve the standard of living. A facile and label-free multi-amplification strategy is proposed for the ultrasensitive visual detection of HIV DNA biomarkers in real physiological media. This multi-amplification strategy not only exhibits a significantly low detection limit, down to 4.8 pM, but also provides a label-free, cost-effective and facile technique for visualizing a few molecules of nucleic acid analyte with the naked eye. Importantly, the biosensor is capable of discriminating a single-base mismatch at concentrations lower than 5.0 nM in human serum samples. Moreover, the visual sensing platform exhibits excellent specificity, acceptable reusability and long-term stability. All these advantages can be attributed to the nanofibrous sensing platform, which 1) has a high surface-area-to-volume ratio provided by the electrospun nanofibrous membrane, and 2) combines glucose oxidase (GOx) biocatalysis, DNAzyme-catalyzed colorimetric reaction and catalytic hairpin assembly (CHA) recycling amplification. This multi-amplification nanoplatform promises label-free, visual single-base mismatch DNA monitoring with high sensitivity and specificity, suggesting wide applications ranging from virus detection to genetic disease diagnosis.

  18. Cholinergic, But Not Dopaminergic or Noradrenergic, Enhancement Sharpens Visual Spatial Perception in Humans

    PubMed Central

    Wallace, Deanna L.

    2017-01-01

    The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568

  19. An objective method for measuring face detection thresholds using the sweep steady-state visual evoked response

    PubMed Central

    Ales, Justin M.; Farzin, Faraz; Rossion, Bruno; Norcia, Anthony M.

    2012-01-01

    We introduce a sensitive method for measuring face detection thresholds rapidly, objectively, and independently of low-level visual cues. The method is based on the swept parameter steady-state visual evoked potential (ssVEP), in which a stimulus is presented at a specific temporal frequency while parametrically varying (“sweeping”) the detectability of the stimulus. Here, the visibility of a face image was increased by progressive derandomization of the phase spectra of the image in a series of equally spaced steps. Alternations between face and fully randomized images at a constant rate (3/s) elicit a robust first harmonic response at 3 Hz specific to the structure of the face. High-density EEG was recorded from 10 human adult participants, who were asked to respond with a button-press as soon as they detected a face. The majority of participants produced an evoked response at the first harmonic (3 Hz) that emerged abruptly between 30% and 35% phase-coherence of the face, which was most prominent on right occipito-temporal sites. Thresholds for face detection were estimated reliably in single participants from 15 trials, or on each of the 15 individual face trials. The ssVEP-derived thresholds correlated with the concurrently measured perceptual face detection thresholds. This first application of the sweep VEP approach to high-level vision provides a sensitive and objective method that could be used to measure and compare visual perception thresholds for various object shapes and levels of categorization in different human populations, including infants and individuals with developmental delay. PMID:23024355
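
    The core signal measure is the amplitude of the EEG response at the 3 Hz face-alternation frequency, tracked across sweep steps. The sketch below computes that first-harmonic amplitude from one step's EEG segment and then estimates a threshold as the first coherence step whose amplitude clearly exceeds the low-coherence baseline; the threshold rule is a crude stand-in for the authors' sweep-function fitting, and all parameters are illustrative.

```python
import numpy as np

def first_harmonic_amplitude(eeg, sample_rate_hz, stim_rate_hz=3.0):
    """Amplitude of the response at the face-alternation frequency (3 Hz in the abstract),
    taken from the discrete Fourier transform of one sweep step's EEG segment."""
    n = len(eeg)
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate_hz)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2.0 / n
    return spectrum[np.argmin(np.abs(freqs - stim_rate_hz))]

def sweep_threshold(step_amplitudes, coherence_levels, k=3.0):
    """Crude threshold estimate: the first phase-coherence step whose 3 Hz amplitude exceeds
    k times the median of the early (presumably sub-threshold) steps."""
    step_amplitudes = np.asarray(step_amplitudes, dtype=float)
    baseline = np.median(step_amplitudes[: max(2, len(step_amplitudes) // 3)])
    above = np.nonzero(step_amplitudes > k * baseline)[0]
    return coherence_levels[above[0]] if above.size else None
```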

  20. Rethinking Visual Analytics for Streaming Data Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crouser, R. Jordan; Franklin, Lyndsey; Cook, Kris

    In the age of data science, the use of interactive information visualization techniques has become increasingly ubiquitous. From online scientific journals to the New York Times graphics desk, the utility of interactive visualization for both storytelling and analysis has become ever more apparent. As these techniques have become more readily accessible, the appeal of combining interactive visualization with computational analysis continues to grow. Arising out of a need for scalable, human-driven analysis, the primary objective of visual analytics systems is to capitalize on the complementary strengths of human and machine analysis, using interactive visualization as a medium for communication between the two. These systems leverage developments from the fields of information visualization, computer graphics, machine learning, and human-computer interaction to support insight generation in areas where purely computational analyses fall short. Over the past decade, visual analytics systems have generated remarkable advances in many historically challenging analytical contexts. These include areas such as modeling political systems [Crouser et al. 2012], detecting financial fraud [Chang et al. 2008], and cybersecurity [Harrison et al. 2012]. In each of these contexts, domain expertise and human intuition is a necessary component of the analysis. This intuition is essential to building trust in the analytical products, as well as supporting the translation of evidence into actionable insight. In addition, each of these examples also highlights the need for scalable analysis. In each case, it is infeasible for a human analyst to manually assess the raw information unaided, and the communication overhead to divide the task between a large number of analysts makes simple parallelism intractable. Regardless of the domain, visual analytics tools strive to optimize the allocation of human analytical resources, and to streamline the sensemaking process on data that is massive, complex, incomplete, and uncertain in scenarios requiring human judgment.

  1. Vision and foraging in cormorants: more like herons than hawks?

    PubMed

    White, Craig R; Day, Norman; Butler, Patrick J; Martin, Graham R

    2007-07-25

    Great cormorants (Phalacrocorax carbo L.) show the highest known foraging yield for a marine predator and they are often perceived to be in conflict with human economic interests. They are generally regarded as visually-guided, pursuit-dive foragers, so it would be expected that cormorants have excellent vision much like aerial predators, such as hawks which detect and pursue prey from a distance. Indeed cormorant eyes appear to show some specific adaptations to the amphibious life style. They are reported to have a highly pliable lens and powerful intraocular muscles which are thought to accommodate for the loss of corneal refractive power that accompanies immersion and ensures a well focussed image on the retina. However, nothing is known of the visual performance of these birds and how this might influence their prey capture technique. We measured the aquatic visual acuity of great cormorants under a range of viewing conditions (illuminance, target contrast, viewing distance) and found it to be unexpectedly poor. Cormorant visual acuity under a range of viewing conditions is in fact comparable to unaided humans under water, and very inferior to that of aerial predators. We present a prey detectability model based upon the known acuity of cormorants at different illuminances, target contrasts and viewing distances. This shows that cormorants are able to detect individual prey only at close range (less than 1 m). We conclude that cormorants are not the aquatic equivalent of hawks. Their efficient hunting involves the use of specialised foraging techniques which employ brief short-distance pursuit and/or rapid neck extension to capture prey that is visually detected or flushed only at short range. This technique appears to be driven proximately by the cormorant's limited visual capacities, and is analogous to the foraging techniques employed by herons.

  2. Pulvinar neurons reveal neurobiological evidence of past selection for rapid detection of snakes.

    PubMed

    Van Le, Quan; Isbell, Lynne A; Matsumoto, Jumpei; Nguyen, Minh; Hori, Etsuro; Maior, Rafael S; Tomaz, Carlos; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-11-19

    Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but, surprisingly, not from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest, fastest responses, and the responses were not reduced by low spatial filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates' heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage.

  3. Pulvinar neurons reveal neurobiological evidence of past selection for rapid detection of snakes

    PubMed Central

    Van Le, Quan; Isbell, Lynne A.; Matsumoto, Jumpei; Nguyen, Minh; Hori, Etsuro; Maior, Rafael S.; Tomaz, Carlos; Tran, Anh Hai; Ono, Taketoshi; Nishijo, Hisao

    2013-01-01

    Snakes and their relationships with humans and other primates have attracted broad attention from multiple fields of study, but, surprisingly, not from neuroscience, despite the involvement of the visual system and strong behavioral and physiological evidence that humans and other primates can detect snakes faster than innocuous objects. Here, we report the existence of neurons in the primate medial and dorsolateral pulvinar that respond selectively to visual images of snakes. Compared with three other categories of stimuli (monkey faces, monkey hands, and geometrical shapes), snakes elicited the strongest, fastest responses, and the responses were not reduced by low spatial filtering. These findings integrate neuroscience with evolutionary biology, anthropology, psychology, herpetology, and primatology by identifying a neurobiological basis for primates’ heightened visual sensitivity to snakes, and adding a crucial component to the growing evolutionary perspective that snakes have long shaped our primate lineage. PMID:24167268

  4. Spatial and temporal coherence in perceptual binding

    PubMed Central

    Blake, Randolph; Yang, Yuede

    1997-01-01

    Component visual features of objects are registered by distributed patterns of activity among neurons comprising multiple pathways and visual areas. How these distributed patterns of activity give rise to unified representations of objects remains unresolved, although one recent, controversial view posits temporal coherence of neural activity as a binding agent. Motivated by the possible role of temporal coherence in feature binding, we devised a novel psychophysical task that requires the detection of temporal coherence among features comprising complex visual images. Results show that human observers can more easily detect synchronized patterns of temporal contrast modulation within hybrid visual images composed of two components when those components are drawn from the same original picture. Evidently, time-varying changes within spatially coherent features produce more salient neural signals. PMID:9192701

  5. Japanese monkeys (Macaca fuscata) quickly detect snakes but not spiders: Evolutionary origins of fear-relevant animals.

    PubMed

    Kawai, Nobuyuki; Koda, Hiroki

    2016-08-01

    Humans quickly detect the presence of evolutionary threats through visual perception. Many theorists have considered humans to be predisposed to respond to both snakes and spiders as evolutionarily fear-relevant stimuli. Evidence supports that human adults, children, and snake-naive monkeys all detect pictures of snakes among pictures of flowers more quickly than vice versa, but recent neurophysiological and behavioral studies suggest that spiders may, in fact, be processed similarly to nonthreat animals. The evidence of quick detection and rapid fear learning by primates is limited to snakes, and no such evidence exists for spiders, suggesting qualitative differences between fear of snakes and fear of spiders. Here, we show that snake-naive Japanese monkeys detect a single snake picture among 8 nonthreat animal pictures (koala) more quickly than vice versa; however, no such difference in detection was observed between spiders and pleasant animals. These robust differences between snakes and spiders are the most convincing evidence that the primate visual system is predisposed to pay attention to snakes but not spiders. These findings suggest that attentional bias toward snakes has an evolutionary basis but that bias toward spiders is more due to top-down, conceptually driven effects of emotion on attention capture. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Metacognitive Confidence Increases with, but Does Not Determine, Visual Perceptual Learning.

    PubMed

    Zizlsperger, Leopold; Kümmel, Florian; Haarmeier, Thomas

    2016-01-01

    While perceptual learning increases objective sensitivity, the effects on the constant interaction of the process of perception and its metacognitive evaluation have been rarely investigated. Visual perception has been described as a process of probabilistic inference featuring metacognitive evaluations of choice certainty. For visual motion perception in healthy, naive human subjects here we show that perceptual sensitivity and confidence in it increased with training. The metacognitive sensitivity-estimated from certainty ratings by a bias-free signal detection theoretic approach-in contrast, did not. Concomitant 3Hz transcranial alternating current stimulation (tACS) was applied in compliance with previous findings on effective high-low cross-frequency coupling subserving signal detection. While perceptual accuracy and confidence in it improved with training, there were no statistically significant tACS effects. Neither metacognitive sensitivity in distinguishing between their own correct and incorrect stimulus classifications, nor decision confidence itself determined the subjects' visual perceptual learning. Improvements of objective performance and the metacognitive confidence in it were rather determined by the perceptual sensitivity at the outset of the experiment. Post-decision certainty in visual perceptual learning was neither independent of objective performance, nor requisite for changes in sensitivity, but rather covaried with objective performance. The exact functional role of metacognitive confidence in human visual perception has yet to be determined.

  7. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    PubMed

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  8. Visual detection of Ebola virus using reverse transcription loop-mediated isothermal amplification combined with nucleic acid strip detection.

    PubMed

    Xu, Changping; Wang, Hualei; Jin, Hongli; Feng, Na; Zheng, Xuexing; Cao, Zengguo; Li, Ling; Wang, Jianzhong; Yan, Feihu; Wang, Lina; Chi, Hang; Gai, Weiwei; Wang, Chong; Zhao, Yongkun; Feng, Yan; Wang, Tiecheng; Gao, Yuwei; Lu, Yiyu; Yang, Songtao; Xia, Xianzhu

    2016-05-01

    Ebola virus (species Zaire ebolavirus) (EBOV) is highly virulent in humans. The largest recorded outbreak of Ebola hemorrhagic fever in West Africa to date was caused by EBOV. Therefore, it is necessary to develop a detection method for this virus that can be easily distributed and implemented. In the current study, we developed a visual assay that can detect EBOV-associated nucleic acids. This assay combines reverse transcription loop-mediated isothermal amplification and nucleic acid strip detection (RT-LAMP-NAD). Nucleic acid amplification can be achieved in a one-step process at a constant temperature (58 °C, 35 min), and the amplified products can be visualized within 2-5 min using a nucleic acid strip detection device. The assay is capable of detecting 30 copies of artificial EBOV glycoprotein (GP) RNA and RNA encoding EBOV GP from 10² TCID50 recombinant viral particles per ml with high specificity. Overall, the RT-LAMP-NAD method is simple and has high sensitivity and specificity; therefore, it is especially suitable for the rapid detection of EBOV in African regions.

  9. Pigeon visual short-term memory directly compared to primates.

    PubMed

    Wright, Anthony A; Elmore, L Caitlin

    2016-02-01

    Three pigeons were trained to remember arrays of 2-6 colored squares and detect which of two squares had changed color to test their visual short-term memory. Procedures (e.g., stimuli, displays, viewing times, delays) were similar to those used to test monkeys and humans. Following extensive training, pigeons performed slightly better than similarly trained monkeys, but both animal species were considerably less accurate than humans with the same array sizes (2, 4 and 6 items). Pigeons and monkeys showed calculated memory capacities of one item or less, whereas humans showed a memory capacity of 2.5 items. Despite the differences in calculated memory capacities, the pigeons' memory results, like those from monkeys and humans, were all well characterized by an inverse power-law function fit to d' values for the five display sizes. This characterization provides a simple, straightforward summary of the fundamental processing of visual short-term memory (how visual short-term memory declines with memory load) that emphasizes species similarities based upon similar functional relationships. By closely matching pigeon testing parameters to those of monkeys and humans, these similar functional relationships suggest similar underlying processes of visual short-term memory in pigeons, monkeys and humans. Copyright © 2015 Elsevier B.V. All rights reserved.
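
    The abstract above reports that d′ for all three species declines with memory load according to an inverse power law. The sketch below shows how such a function can be fitted; the d′ values are hypothetical placeholders, not data from the study, and scipy is assumed to be available.

```python
# Minimal sketch: fitting an inverse power law d' = a * N**(-b) to
# hypothetical visual short-term memory data (values are illustrative,
# not taken from the study above).
import numpy as np
from scipy.optimize import curve_fit

display_sizes = np.array([2, 3, 4, 5, 6], dtype=float)   # memory load N
d_prime = np.array([2.1, 1.6, 1.3, 1.1, 0.95])           # hypothetical d' values

def inverse_power_law(n, a, b):
    return a * n ** (-b)

params, _ = curve_fit(inverse_power_law, display_sizes, d_prime, p0=[2.0, 0.5])
a, b = params
print(f"fitted d' = {a:.2f} * N^(-{b:.2f})")
```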

  10. Age and Visual Information Processing.

    ERIC Educational Resources Information Center

    Gummerman, Kent; And Others

    This paper reports on three studies concerned with aspects of human visual information processing. Study I was an effort to measure the duration of iconic storage using a partial report method in children ranging in age from 6 to 13 years. Study II was designed to detect age related changes in the rate of processing (perceptually encoding) letters…

  11. Endogenous modulation of human visual cortex activity improves perception at twilight.

    PubMed

    Cordani, Lorenzo; Tagliazucchi, Enzo; Vetter, Céline; Hassemer, Christian; Roenneberg, Till; Stehle, Jörg H; Kell, Christian A

    2018-04-10

    Perception, particularly in the visual domain, is drastically influenced by rhythmic changes in ambient lighting conditions. Anticipation of daylight changes by the circadian system is critical for survival. However, the neural bases of time-of-day-dependent modulation in human perception are not yet understood. We used fMRI to study brain dynamics during resting-state and close-to-threshold visual perception repeatedly at six times of the day. Here we report that resting-state signal variance drops endogenously at times coinciding with dawn and dusk, notably in sensory cortices only. In parallel, perception-related signal variance in visual cortices decreases and correlates negatively with detection performance, identifying an anticipatory mechanism that compensates for the deteriorated visual signal quality at dawn and dusk. Generally, our findings imply that decreases in spontaneous neural activity improve close-to-threshold perception.

  12. The role of extra-foveal processing in 3D imaging

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

    2017-03-01

    The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity, together with observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).
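
    To make the reasoning concrete, the toy sketch below assumes a commonly used falloff of detectability with retinal eccentricity, d′(e) = d′(0)/(1 + e/e2), and gives the small signal a steeper falloff than the large one. The functional form and all numbers are illustrative assumptions, not measurements or models from the study; they merely show how a foveal advantage can reverse once performance is averaged over the periphery.

```python
# Toy sketch: detectability of each signal is assumed to fall off with
# retinal eccentricity as d'(e) = d'(0) / (1 + e / e2). The small signal is
# given a steeper falloff (smaller e2) than the large signal; all numbers
# are illustrative, not measurements from the study.
import numpy as np

def d_prime(eccentricity_deg, d0, e2):
    return d0 / (1.0 + eccentricity_deg / e2)

eccentricities = np.linspace(0.0, 15.0, 50)              # degrees of visual angle
small_signal = d_prime(eccentricities, d0=3.5, e2=1.0)   # microcalcification-like
large_signal = d_prime(eccentricities, d0=2.5, e2=8.0)   # mass-like

# At the fovea the small signal wins; averaged over the periphery it can lose.
print("at the fovea:      small > large ->", small_signal[0] > large_signal[0])
print("averaged over 15°: small > large ->", small_signal.mean() > large_signal.mean())
```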

  13. Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade

    2013-05-01

    As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input streams that they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn't require extended listening - as visual "snapshots" are useful but auditory sounds only exist over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service-oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses information infrastructure and data representation concerns required with their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.
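
    As a minimal illustration of the sonification idea (not the SOA framework proposed in the paper), the sketch below maps a numeric data stream to tone pitch and renders it as a WAV file using only the Python standard library; the value-to-frequency mapping and all parameters are assumptions chosen for clarity.

```python
# Minimal sketch of data sonification: map a numeric data stream to tone
# pitch and render it as a WAV file. The mapping (value -> frequency) and
# all parameters are illustrative; this is not the framework described above.
import math
import struct
import wave

data_stream = [0.1, 0.4, 0.9, 0.3, 0.7, 0.2]   # hypothetical normalized data values
sample_rate = 44100
tone_seconds = 0.25

def value_to_frequency(v, f_min=220.0, f_max=880.0):
    """Linearly map a value in [0, 1] to a pitch between f_min and f_max (Hz)."""
    return f_min + v * (f_max - f_min)

frames = bytearray()
for value in data_stream:
    freq = value_to_frequency(value)
    for i in range(int(sample_rate * tone_seconds)):
        sample = 0.5 * math.sin(2 * math.pi * freq * i / sample_rate)
        frames += struct.pack('<h', int(sample * 32767))   # 16-bit PCM sample

with wave.open('sonified.wav', 'wb') as wav:
    wav.setnchannels(1)        # mono
    wav.setsampwidth(2)        # 16-bit samples
    wav.setframerate(sample_rate)
    wav.writeframes(bytes(frames))
```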

  14. Visual signal detection in structured backgrounds. II. Effects of contrast gain control, background variations, and white noise

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Studies of visual detection of a signal superimposed on one of two identical backgrounds show performance degradation when the background has high contrast and is similar in spatial frequency and/or orientation to the signal. To account for this finding, models include a contrast gain control mechanism that pools activity across spatial frequency, orientation and space to inhibit (divisively) the response of the receptor sensitive to the signal. In tasks in which the observer has to detect a known signal added to one of M different backgrounds due to added visual noise, the main sources of degradation are the stochastic noise in the image and the suboptimal visual processing. We investigate how these two sources of degradation (contrast gain control and variations in the background) interact in a task in which the signal is embedded in one of M locations in a complex spatially varying background (structured background). We use backgrounds extracted from patient digital medical images. To isolate effects of the fixed deterministic background (the contrast gain control) from the effects of the background variations, we conduct detection experiments with three different background conditions: (1) uniform background, (2) a repeated sample of structured background, and (3) different samples of structured background. Results show that human visual detection degrades from the uniform background condition to the repeated background condition and degrades even further in the different backgrounds condition. These results suggest that both the contrast gain control mechanism and the background random variations degrade human performance in detection of a signal in a complex, spatially varying background. A filter model and added white noise are used to generate estimates of sampling efficiencies, an equivalent internal noise, an equivalent contrast-gain-control-induced noise, and an equivalent noise due to the variations in the structured background.
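
    The divisive contrast gain control stage described above can be sketched as follows; the exponents, semisaturation constant, and pooling weights here are illustrative assumptions, not the fitted parameters of the model in the study.

```python
# Minimal sketch of a divisive contrast gain control stage: each channel's
# excitatory response is divided by activity pooled over neighboring channels.
# Exponents, semisaturation constant, and pooling weights are illustrative
# assumptions, not the fitted parameters of the model described above.
import numpy as np

def contrast_gain_control(channel_responses, pooling_weights, sigma=0.1, p=2.0, q=2.0):
    """Return normalized responses R_i = E_i**p / (sigma**q + sum_j w_ij * E_j**q)."""
    excitation = channel_responses ** p
    pooled = pooling_weights @ (channel_responses ** q)   # divisive pool per channel
    return excitation / (sigma ** q + pooled)

rng = np.random.default_rng(0)
responses = rng.uniform(0.0, 1.0, size=8)                 # linear channel outputs
weights = np.full((8, 8), 1.0 / 8)                        # uniform pooling across channels
print(contrast_gain_control(responses, weights))
```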

  15. Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.

    PubMed

    Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli

    2018-06-08

    Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Visual-search model observer for assessing mass detection in CT

    NASA Astrophysics Data System (ADS)

    Karbaschi, Zohreh; Gifford, Howard C.

    2017-03-01

    Our aim is to devise model observers (MOs) to evaluate acquisition protocols in medical imaging. To optimize protocols for human observers, an MO must reliably interpret images containing quantum and anatomical noise under aliasing conditions. In this study of sampling parameters for simulated lung CT, the lesion-detection performance of human observers was compared with that of visual-search (VS) observers, a channelized nonprewhitening (CNPW) observer, and a channelized Hotelling (CH) observer. Scans of a mathematical torso phantom modeled single-slice parallel-hole CT with varying numbers of detector pixels and angular projections. Circular lung lesions had a fixed radius. Two-dimensional FBP reconstructions were performed. A localization ROC study was conducted with the VS, CNPW and human observers, while the CH observer was applied in a location-known ROC study. Changing the sampling parameters had a negligible effect on the CNPW and CH observers, whereas several VS observers demonstrated a sensitivity to sampling artifacts that was in agreement with how the humans performed.
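
    A minimal sketch of the channelized non-prewhitening decision variable is given below: the image and the expected signal are projected onto a small channel set and their channel outputs are correlated, with no prewhitening by the channel covariance. The dense difference-of-Gaussians channels and all parameters are illustrative assumptions, not the channel set used in the study.

```python
# Minimal sketch of a channelized non-prewhitening (CNPW) decision variable:
# project the image and the expected signal onto a small set of channels and
# correlate the channel outputs. The difference-of-Gaussians channels used
# here are an illustrative assumption, not the channel set of the study.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_channels(shape, sigmas=(1, 2, 4, 8)):
    """Build difference-of-Gaussians channel templates centered in the image."""
    impulse = np.zeros(shape)
    impulse[shape[0] // 2, shape[1] // 2] = 1.0
    channels = []
    for s in sigmas:
        channels.append((gaussian_filter(impulse, s) - gaussian_filter(impulse, 2 * s)).ravel())
    return np.stack(channels)                     # shape: (n_channels, n_pixels)

def cnpw_statistic(image, signal, channels):
    """lambda = (U s)^T (U g): no prewhitening by the channel covariance."""
    v_image = channels @ image.ravel()
    v_signal = channels @ signal.ravel()
    return float(v_signal @ v_image)

shape = (64, 64)
channels = dog_channels(shape)
signal = np.zeros(shape)
signal[28:36, 28:36] = 1.0                        # hypothetical lesion profile
noise = np.random.default_rng(1).normal(0, 0.5, shape)
print(cnpw_statistic(signal + noise, signal, channels))   # signal-present image
print(cnpw_statistic(noise, signal, channels))            # signal-absent image
```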

  17. Impaired visual recognition of biological motion in schizophrenia.

    PubMed

    Kim, Jejoong; Doop, Mikisha L; Blake, Randolph; Park, Sohee

    2005-09-15

    Motion perception deficits have been suggested to be an important feature of schizophrenia but the behavioral consequences of such deficits are unknown. Biological motion refers to the movements generated by living beings. The human visual system rapidly and effortlessly detects and extracts socially relevant information from biological motion. A deficit in biological motion perception may have significant consequences for detecting and interpreting social information. Schizophrenia patients and matched healthy controls were tested on two visual tasks: recognition of human activity portrayed in point-light animations (biological motion task) and a perceptual control task involving detection of a grouped figure against the background noise (global-form task). Both tasks required detection of a global form against background noise but only the biological motion task required the extraction of motion-related information. Schizophrenia patients performed as well as the controls in the global-form task, but were significantly impaired on the biological motion task. In addition, deficits in biological motion perception correlated with impaired social functioning as measured by the Zigler social competence scale [Zigler, E., Levine, J. (1981). Premorbid competence in schizophrenia: what is being measured? Journal of Consulting and Clinical Psychology, 49, 96-105.]. The deficit in biological motion processing, which may be related to the previously documented deficit in global motion processing, could contribute to abnormal social functioning in schizophrenia.

  18. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces.

    PubMed

    Zachariou, Valentinos; Nikas, Christine V; Safiullah, Zaid N; Gotts, Stephen J; Ungerleider, Leslie G

    2017-08-01

    Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces. Published by Oxford University Press 2016.

  19. Visualization of prostatic nerves by polarization-sensitive optical coherence tomography

    PubMed Central

    Yoon, Yeoreum; Jeon, Seung Hwan; Park, Yong Hyun; Jang, Won Hyuk; Lee, Ji Youl; Kim, Ki Hean

    2016-01-01

    Preservation of prostatic nerves is critical to recovery of a man’s sexual potency after radical prostatectomy. A real-time imaging method of prostatic nerves will be helpful for nerve-sparing radical prostatectomy (NSRP). Polarization-sensitive optical coherence tomography (PS-OCT), which provides both structural and birefringent information of tissue, was applied for detection of prostatic nerves in both rat and human prostate specimens, ex vivo. PS-OCT imaging of rat prostate specimens visualized highly scattering and birefringent fibrous structures superficially, and these birefringent structures were confirmed to be nerves by histology or multiphoton microscopy (MPM). PS-OCT could easily distinguish these birefringent structures from surrounding other tissue compartments such as prostatic glands and fats. PS-OCT imaging of human prostatectomy specimens visualized two different birefringent structures, appearing fibrous and sheet-like. The fibrous ones were confirmed to be nerves by histology, and the sheet-like ones were considered to be fascias surrounding the human prostate. PS-OCT imaging of human prostatectomy specimens along the perimeter showed spatial variation in the amount of birefringent fibrous structures which was consistent with anatomy. These results demonstrate the feasibility of PS-OCT for detection of prostatic nerves, and this study will provide a basis for intraoperative use of PS-OCT. PMID:27699090

  20. Visual gravitational motion and the vestibular system in humans

    PubMed Central

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-01-01

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761

  1. Visual gravitational motion and the vestibular system in humans.

    PubMed

    Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka

    2013-12-26

    The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity.

  2. Urinary oxytocin positively correlates with performance in facial visual search in unmarried males, without specific reaction to infant face.

    PubMed

    Saito, Atsuko; Hamada, Hiroki; Kikusui, Takefumi; Mogi, Kazutaka; Nagasawa, Miho; Mitsui, Shohei; Higuchi, Takashi; Hasegawa, Toshikazu; Hiraki, Kazuo

    2014-01-01

    The neuropeptide oxytocin plays a central role in prosocial and parental behavior in non-human mammals as well as humans. It has been suggested that oxytocin may affect visual processing of infant faces and emotional reaction to infants. Healthy male volunteers (N = 13) were tested for their ability to detect infant or adult faces among adult or infant faces (facial visual search task). Urine samples were collected from all participants before the study to measure the concentration of oxytocin. Urinary oxytocin positively correlated with performance in the facial visual search task. However, task performance and its correlation with oxytocin concentration did not differ between infant faces and adult faces. Our data suggests that endogenous oxytocin is related to facial visual cognition, but does not promote infant-specific responses in unmarried men who are not fathers.

  3. Triggerfish uses chromaticity and lightness for object segregation

    PubMed Central

    2017-01-01

    Humans group components of visual patterns according to their colour, and perceive colours separately from shape. This property of human visual perception is the basis behind the Ishihara test for colour deficiency, where an observer is asked to detect a pattern made up of dots of similar colour with variable lightness against a background of dots made from different colour(s) and lightness. To find out if fish use colour for object segregation in a similar manner to humans, we used stimuli inspired by the Ishihara test. Triggerfish (Rhinecanthus aculeatus) were trained to detect a cross constructed from similarly coloured dots against various backgrounds. Fish detected this cross even when it was camouflaged using either achromatic or chromatic noise, but fish relied more on chromatic cues for shape segregation. It remains unknown whether fish may switch to rely primarily on achromatic cues in scenarios where target objects have higher achromatic contrast and lower chromatic contrast. Fish were also able to generalize between stimuli of different colours, suggesting that colour and shape are processed by fish independently. PMID:29308267

  4. Modeling Human Visual Perception for Target Detection in Military Simulations

    DTIC Science & Technology

    2009-06-01

    …incorrectly, is a subject for future research. Possibly, one could exploit the Recognition-by-Components theory of Biederman (1987) and decompose the… Biederman, I. (1987). Recognition-by-components: A theory of human image understanding. Psychological Review, 94, 115-147.

  5. A cross-priming amplification assay coupled with vertical flow visualization for detection of Vibrio parahaemolyticus.

    PubMed

    Xu, Deshun; Wu, Xiaofang; Han, Jiankang; Chen, Liping; Ji, Lei; Yan, Wei; Shen, Yuehua

    2015-12-01

    Vibrio parahaemolyticus is a marine seafood-borne pathogen that causes gastrointestinal disorders in humans. In this study, we developed a cross-priming amplification (CPA) assay coupled with vertical flow (VF) visualization for rapid and sensitive detection of V. parahaemolyticus. This assay correctly detected all target strains (n = 13) and none of the non-target strains (n = 27). Small concentrations of V. parahaemolyticus (1.8 CFU/mL for pure cultures and 18 CFU/g for reconstituted samples) were detected within 1 h. CPA-VF can be applied at a large scale and can be used to detect V. parahaemolyticus strains rapidly in seafood and environmental samples, being especially useful in the field. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Establishing the fundamentals for an elephant early warning and monitoring system.

    PubMed

    Zeppelzauer, Matthias; Stoeger, Angela S

    2015-09-04

    The decline of habitat for elephants due to expanding human activity is a serious conservation problem. This has continuously escalated the human-elephant conflict in Africa and Asia. Elephants make extensive use of powerful infrasonic calls (rumbles) that travel distances of up to several kilometers. This makes elephants well-suited for acoustic monitoring because it enables detecting elephants even if they are out of sight. In sight, their distinct visual appearance makes them a good candidate for visual monitoring. We provide an integrated overview of our interdisciplinary project that established the scientific fundamentals for a future early warning and monitoring system for humans who regularly experience serious conflict with elephants. We first draw the big picture of an early warning and monitoring system, then review the developed solutions for automatic acoustic and visual detection, discuss specific challenges and present open future work necessary to build a robust and reliable early warning and monitoring system that is able to operate in situ. We present a method for the automated detection of elephant rumbles that is robust to the diverse noise sources present in situ. We evaluated the method on an extensive set of audio data recorded under natural field conditions. Results show that the proposed method outperforms existing approaches and accurately detects elephant rumbles. Our visual detection method shows that tracking elephants in wildlife videos (of different sizes and postures) is feasible and particularly robust at near distances. From our project results we draw a number of conclusions that are discussed and summarized. We clearly identified the most critical challenges and necessary improvements of the proposed detection methods and conclude that our findings have the potential to form the basis for a future automated early warning system for elephants. We discuss challenges that need to be solved and summarize open topics in the context of a future early warning and monitoring system. We conclude that a long-term evaluation of the presented methods in situ using real-time prototypes is the most important next step to transfer the developed methods into practical implementation.

  7. Non-destructive detection and quantification of blueberry bruising using near-infrared (NIR) hyperspectral reflectance imaging

    USDA-ARS?s Scientific Manuscript database

    Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive and time-consuming. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. The spe...

  8. Sub-surface defect detection by using active thermography and advanced image edge detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Wang, Gaochao

    2017-05-01

    Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomalies of industrial equipment. One of the recent research trends in using active thermography is to automate the process of detecting hidden defects. As of today, human effort is still required to adjust the temperature intensity of the thermo-camera in order to visually observe the difference in cooling rates caused by a normal target as compared to that caused by a sub-surface crack inside the target. To avoid tedious human visual inspection and minimize human-induced error, this paper reports the design of an automatic method that is capable of detecting subsurface defects. The method uses the techniques of active thermography, edge detection in machine vision, and a smart algorithm. An infrared thermo-camera was used to capture a series of temporal pictures after slightly heating up the inspected target with flash lamps. The Canny edge detector was then employed to automatically extract defect-related images from the captured pictures. The captured temporal pictures were preprocessed by the Canny edge detector, and then a smart algorithm was used to reconstruct the whole sequence of image signals. During these processes, noise and irrelevant backgrounds present in the pictures were removed. Consequently, the contrast of the edges of defective areas was highlighted. The designed automatic method was verified on real pipe specimens that contain sub-surface cracks. After applying this smart method, the edges of cracks can be revealed visually without the need for manual adjustment of the thermo-camera settings. With the help of this automatic method, the tedious process of manually adjusting the colour contrast and pixel intensity in order to reveal defects can be avoided.
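
    A minimal sketch of the edge-detection step is shown below, assuming OpenCV is available: a Canny detector is applied to each frame of a thermal sequence and the detected edges are accumulated. The file pattern, blur kernel, and hysteresis thresholds are placeholders, and the authors' full smart algorithm (noise removal and sequence reconstruction) is not reproduced.

```python
# Minimal sketch: apply a Canny edge detector to each frame of a thermal
# image sequence and accumulate the detected edges. The file pattern and the
# Canny thresholds are placeholders; this is not the full smart algorithm
# (noise/background removal, reconstruction) described in the abstract.
import glob
import cv2
import numpy as np

frame_paths = sorted(glob.glob("thermal_frames/*.png"))   # hypothetical captured sequence
accumulated_edges = None

for path in frame_paths:
    frame = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(frame, (5, 5), 0)           # suppress sensor noise
    edges = cv2.Canny(blurred, 50, 150)                    # hysteresis thresholds (assumed)
    accumulated_edges = edges if accumulated_edges is None else cv2.bitwise_or(accumulated_edges, edges)

if accumulated_edges is not None:
    cv2.imwrite("accumulated_edges.png", accumulated_edges)
```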

  9. Multichannel optical mapping: investigation of depth information

    NASA Astrophysics Data System (ADS)

    Sase, Ichiro; Eda, Hideo; Seiyama, Akitoshi; Tanabe, Hiroki C.; Takatsuki, Akira; Yanagida, Toshio

    2001-06-01

    Near infrared (NIR) light has become a powerful tool for non-invasive imaging of human brain activity. Many systems have been developed to capture the changes in regional brain blood flow and hemoglobin oxygenation, which occur in the human cortex in response to neural activity. We have developed a multi-channel reflectance imaging system, which can be used as a "mapping device" and also as a "multi-channel spectrophotometer". In the present study, we visualized changes in the hemodynamics of the human occipital region in multiple ways. (1) Stimulating the left and right primary visual cortex independently by showing sector-shaped checkerboards sequentially over the contralateral visual field resulted in corresponding changes in the hemodynamics observed by "mapping" measurement. (2) Simultaneous measurement of functional MRI and NIR (changes in total hemoglobin) during visual stimulation showed good spatial and temporal correlation. (3) Placing multiple channels densely over the occipital region demonstrated spatial patterns more precisely, and depth information was also acquired by placing each pair of illumination and detection fibers at various distances. These results indicate that this optical method can provide data for 3D analysis of human brain functions.

  10. Corollary discharge contributes to perceived eye location in monkeys

    PubMed Central

    Cavanaugh, James; FitzGibbon, Edmond J.; Wurtz, Robert H.

    2013-01-01

    Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do. PMID:23986562

  11. Corollary discharge contributes to perceived eye location in monkeys.

    PubMed

    Joiner, Wilsaan M; Cavanaugh, James; FitzGibbon, Edmond J; Wurtz, Robert H

    2013-11-01

    Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.

  12. Vision and Foraging in Cormorants: More like Herons than Hawks?

    PubMed Central

    White, Craig R.; Day, Norman; Butler, Patrick J.; Martin, Graham R.

    2007-01-01

    Background Great cormorants (Phalacrocorax carbo L.) show the highest known foraging yield for a marine predator and they are often perceived to be in conflict with human economic interests. They are generally regarded as visually-guided, pursuit-dive foragers, so it would be expected that cormorants have excellent vision much like aerial predators, such as hawks which detect and pursue prey from a distance. Indeed cormorant eyes appear to show some specific adaptations to the amphibious life style. They are reported to have a highly pliable lens and powerful intraocular muscles which are thought to accommodate for the loss of corneal refractive power that accompanies immersion and ensures a well focussed image on the retina. However, nothing is known of the visual performance of these birds and how this might influence their prey capture technique. Methodology/Principal Findings We measured the aquatic visual acuity of great cormorants under a range of viewing conditions (illuminance, target contrast, viewing distance) and found it to be unexpectedly poor. Cormorant visual acuity under a range of viewing conditions is in fact comparable to unaided humans under water, and very inferior to that of aerial predators. We present a prey detectability model based upon the known acuity of cormorants at different illuminances, target contrasts and viewing distances. This shows that cormorants are able to detect individual prey only at close range (less than 1 m). Conclusions/Significance We conclude that cormorants are not the aquatic equivalent of hawks. Their efficient hunting involves the use of specialised foraging techniques which employ brief short-distance pursuit and/or rapid neck extension to capture prey that is visually detected or flushed only at short range. This technique appears to be driven proximately by the cormorant's limited visual capacities, and is analogous to the foraging techniques employed by herons. PMID:17653266

  13. Detection of fecal contamination on beef meat surfaces using handheld fluorescence imaging device (HFID)

    USDA-ARS?s Scientific Manuscript database

    Current meat inspection in slaughter plants, for food safety and quality attributes including potential fecal contamination, is conducted through visual examination by human inspectors. A handheld fluorescence-based imaging device (HFID) was developed to be an assistive tool for human inspectors by ...

  14. Infrared dim target detection based on visual attention

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Lv, Guofang; Xu, Lizhong

    2012-11-01

    Accurate and fast detection of dim infrared (IR) targets is very important for precise infrared guidance, early warning, video surveillance, etc. Based on human visual attention mechanisms, an automatic detection algorithm for dim infrared targets is presented. After analyzing the characteristics of dim infrared target images, the method first designs Difference of Gaussians (DoG) filters to compute a saliency map. Then the salient regions in which potential targets may exist are extracted by searching through the saliency map with a control mechanism of winner-take-all (WTA) competition and inhibition-of-return (IOR). Finally, these regions are screened using the characteristics of dim IR targets, so the true targets are detected and spurious objects are rejected. Experiments are performed on real-life IR images, and the results show that the proposed method has satisfactory detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.
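
    A minimal sketch of the saliency stage is given below, assuming scipy: a Difference of Gaussians filter produces the saliency map, and candidate regions are extracted by repeated winner-take-all selection with inhibition-of-return. Filter scales, the suppression radius, and the number of candidates are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of the saliency stage described above: a difference-of-Gaussians
# (DoG) filter produces a saliency map, and candidate target regions are picked
# by repeated winner-take-all selection with inhibition-of-return. Filter scales,
# suppression radius, and number of candidates are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(image, sigma_center=1.0, sigma_surround=4.0):
    """Center-surround response: small dim targets stand out from the background."""
    return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

def select_candidates(saliency, n_candidates=3, ior_radius=5):
    """Winner-take-all selection with inhibition-of-return around each winner."""
    saliency = saliency.copy()
    rows, cols = np.indices(saliency.shape)
    winners = []
    for _ in range(n_candidates):
        r, c = np.unravel_index(np.argmax(saliency), saliency.shape)
        winners.append((r, c))
        saliency[(rows - r) ** 2 + (cols - c) ** 2 <= ior_radius ** 2] = -np.inf
    return winners

ir_frame = np.random.default_rng(2).normal(0.0, 1.0, (128, 128))
ir_frame[40, 90] += 8.0                           # hypothetical dim point target
print(select_candidates(dog_saliency(ir_frame)))
```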

  15. Image-based fall detection and classification of a user with a walking support system

    NASA Astrophysics Data System (ADS)

    Taghvaei, Sajjad; Kosuge, Kazuhiro

    2017-10-01

    The classification of visual human action is important in the development of systems that interact with humans. This study investigates an image-based classification of the human state while using a walking support system to improve the safety and dependability of these systems. We categorize the possible human behavior while utilizing a walker robot into eight states (i.e., sitting, standing, walking, and five falling types), and propose two different methods, namely, normal distribution and hidden Markov models (HMMs), to detect and recognize these states. The visual feature for the state classification is the centroid position of the upper body, which is extracted from the user's depth images. The first method shows that the centroid position follows a normal distribution while walking, which can be adopted to detect any non-walking state. The second method implements HMMs to detect and recognize these states. We then measure and compare the performance of both methods. The classification results are employed to control the motion of a passive-type walker (called "RT Walker") by activating its brakes in non-walking states. Thus, the system can be used for sit/stand support and fall prevention. The experiments are performed with four subjects, including an experienced physiotherapist. Results show that the algorithm can be adapted to the new user's motion pattern within 40 s, with a fall detection rate of 96.25% and state classification rate of 81.0%. The proposed method can be implemented to other abnormality detection/classification applications that employ depth image-sensing devices.
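
    A minimal sketch of the first (normal-distribution) method, as described above, might look like the following: a Gaussian is fitted to the upper-body centroid observed during walking, and frames whose Mahalanobis distance exceeds a threshold are flagged as a non-walking (possible fall) state. The training data and the threshold are hypothetical.

```python
# Minimal sketch of the normal-distribution method: fit a Gaussian to the
# upper-body centroid position observed during walking, then flag frames whose
# Mahalanobis distance exceeds a threshold as a non-walking (possible fall) state.
# The training data and the threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
walking_centroids = rng.normal([0.0, 1.1], [0.03, 0.02], size=(500, 2))  # (x, height) in meters

mean = walking_centroids.mean(axis=0)
cov = np.cov(walking_centroids, rowvar=False)
cov_inv = np.linalg.inv(cov)

def is_non_walking(centroid, threshold=3.0):
    """True when the centroid is an outlier under the fitted walking distribution."""
    d = centroid - mean
    mahalanobis = np.sqrt(d @ cov_inv @ d)
    return mahalanobis > threshold

print(is_non_walking(np.array([0.01, 1.10])))   # typical walking posture -> False
print(is_non_walking(np.array([0.40, 0.45])))   # centroid dropped/shifted -> True (possible fall)
```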

  16. The role of vision in auditory distance perception.

    PubMed

    Calcagno, Esteban R; Abregú, Ezequiel L; Eguía, Manuel C; Vergara, Ramiro

    2012-01-01

    In humans, multisensory interaction is an important strategy for improving the detection of stimuli of different natures and reducing the variability of responses. It is known that the presence of visual information affects auditory perception in the horizontal plane (azimuth), but there is little research on the influence of vision on auditory distance perception. In general, the data obtained from these studies are contradictory and do not completely define the way in which visual cues affect the apparent distance of a sound source. Here, psychophysical experiments on auditory distance perception in humans are performed, including and excluding visual cues. The results show that the apparent distance from the source is affected by the presence of visual information and that subjects can store in their memory a representation of the environment that later improves the perception of distance.

  17. Face Pareidolia in the Rhesus Monkey.

    PubMed

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  18. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image with a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further create robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, it indicates the plausibility of utilizing visual tracking data to identify targets.
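
    The inference step can be illustrated with a heavily simplified, one-dimensional Viterbi decode over a sequence of dictionary-encoded patch labels; the paper's 2D-Viterbi, HOG features, and learned dictionary are not reproduced, and the two states and all probabilities below are invented for illustration.

```python
# Heavily simplified sketch of the inference step: a standard 1-D Viterbi decode
# over a sequence of dictionary-encoded patch labels, with two hidden states
# ("background", "salient"). The 2-D Viterbi, HOG features, and learned dictionary
# of the paper are not reproduced; all matrices here are illustrative assumptions.
import numpy as np

states = ["background", "salient"]
start_prob = np.array([0.8, 0.2])
transition = np.array([[0.9, 0.1],            # background -> {background, salient}
                       [0.3, 0.7]])           # salient    -> {background, salient}
emission = np.array([[0.7, 0.2, 0.1],         # P(code | background)
                     [0.1, 0.3, 0.6]])        # P(code | salient)

def viterbi(observations):
    """Return the most likely hidden-state path for the observed code sequence."""
    n_states, T = len(states), len(observations)
    log_delta = np.log(start_prob) + np.log(emission[:, observations[0]])
    backpointer = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = log_delta[:, None] + np.log(transition)   # scores[from, to]
        backpointer[t] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(emission[:, observations[t]])
    path = [int(log_delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(backpointer[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 0, 2, 2, 1, 0]))            # hypothetical patch codes along a scanline
```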

  19. Efficient human face detection in infancy.

    PubMed

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  20. Contrast-enhanced ultrasonography vs B-mode ultrasound for visualization of intima-media thickness and detection of plaques in human carotid arteries.

    PubMed

    Shah, Benoy N; Chahal, Navtej S; Kooner, Jaspal S; Senior, Roxy

    2017-05-01

    Carotid intima-media thickness (IMT) and plaque are recognized markers of increased risk for cerebrovascular events. Accurate visualization of the IMT and plaques is dependent upon image quality. Ultrasound contrast agents improve image quality during echocardiography; this study assessed whether contrast-enhanced ultrasound (CEUS) improves carotid IMT visualization and plaque detection in an asymptomatic population. Individuals free from known cardiovascular disease, enrolled in a community study, underwent B-mode and CEUS carotid imaging. Each carotid artery was divided into 10 segments (far and near walls of the proximal, mid and distal segments of the common carotid artery, the carotid bulb, and internal carotid artery). Visualization of the IMT complex and plaque assessments was made during both B-mode and CEUS imaging for all enrolled subjects, a total of 175 individuals (mean age 65±9 years). Visualization of the IMT was significantly improved during CEUS compared with B-mode imaging, in both near and far walls of the carotid arteries (% IMT visualization during B-mode vs CEUS imaging: 61% vs 94% and 66% vs 95% for right and left carotid arteries, respectively, P<.001 for both). Additionally, a greater number of plaques were detected during CEUS imaging compared with B-mode imaging (367 plaques vs 350 plaques, P=.02). Contrast-enhanced ultrasound improves visualization of the intima-media complex, in both near and far walls, of the common and internal carotid arteries and permits greater detection of carotid plaques. Further studies are required to determine whether there is incremental clinical and prognostic benefit related to superior plaque detection by CEUS. © 2017, Wiley Periodicals, Inc.

  1. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  2. Visual and sensitive fluorescent sensing for ultratrace mercury ions by perovskite quantum dots.

    PubMed

    Lu, Li-Qiang; Tan, Tian; Tian, Xi-Ke; Li, Yong; Deng, Pan

    2017-09-15

    Mercury ion sensing is an important issue for human health and environmental safety. A novel fluorescence nanosensor was designed for rapid visual detection of ultratrace mercury ions (Hg2+) using CH3NH3PbBr3 perovskite quantum dots (QDs), based on a surface ion-exchange mechanism. The synthesized CH3NH3PbBr3 QDs emit intense green fluorescence with a high quantum yield of 50.28% and can be applied to Hg2+ sensing with a detection limit of 0.124 nM (24.87 ppt) over the range 0-100 nM. Furthermore, interfering metal ions have no influence on the fluorescence intensity of the QDs, showing that the perovskite QDs possess high selectivity and sensitivity for Hg2+ detection. The sensing mechanism was also investigated by XPS and EDX studies, showing that Pb2+ on the surface of the perovskite QDs is partially replaced by Hg2+. A spot plate test shows that the perovskite QDs can also be used for visual detection of Hg2+. Our research indicates that perovskite QDs are promising candidates for the visual fluorescence detection of environmental micropollutants. Copyright © 2017 Elsevier B.V. All rights reserved.
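
    For orientation, detection limits like the 0.124 nM figure quoted above are commonly estimated from a calibration curve as LOD = 3·σ(blank)/|slope|. The sketch below shows that arithmetic on entirely hypothetical numbers; it is not the paper's data or procedure.

    # Illustrative LOD calculation (hypothetical calibration data, not the paper's).
    import numpy as np

    conc_nM = np.array([0, 20, 40, 60, 80, 100])                 # Hg2+ standards (assumed)
    intensity = np.array([1.00, 0.82, 0.65, 0.47, 0.30, 0.12])   # normalized fluorescence (assumed)

    slope, intercept = np.polyfit(conc_nM, intensity, 1)         # linear calibration fit
    sigma_blank = 0.0015                                         # assumed blank standard deviation
    lod = 3 * sigma_blank / abs(slope)
    print(f"slope = {slope:.4f} per nM, LOD ~ {lod:.2f} nM")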

  3. Enhanced visual performance in obsessive compulsive personality disorder.

    PubMed

    Ansari, Zohreh; Fadardi, Javad Salehi

    2016-12-01

    Visual performance is considered a commanding modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were tested with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II); 18 of them (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification, while the controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task assessing two dimensions of visual performance (i.e., visual acuity: detecting the location of a change, its complexity, and its size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals thus appear to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  4. Developing and evaluating a target-background similarity metric for camouflage detection.

    PubMed

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could potentially serve as a camouflage assessment tool. In this study, we quantify the relationship between camouflage similarity indices and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms, analyzing the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that the method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results.
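
    Because this record centers on the Universal Image Quality Index, a minimal single-window implementation of the published UIQI formula (Wang & Bovik, 2002) is sketched below; in practice the index is averaged over sliding windows, and its use here for comparing a target patch with its local background is an illustration, not the authors' exact pipeline.

    # Minimal single-window Universal Image Quality Index.
    import numpy as np

    def uiqi(x, y, eps=1e-12):
        """UIQI between two equally sized patches x and y."""
        x = x.astype(float).ravel()
        y = y.astype(float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        # combines correlation, luminance similarity, and contrast similarity
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)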

  5. Immunological multimetal deposition for rapid visualization of sweat fingerprints.

    PubMed

    He, Yayun; Xu, Linru; Zhu, Yu; Wei, Qianhui; Zhang, Meiqin; Su, Bin

    2014-11-10

    A simple method termed immunological multimetal deposition (iMMD) was developed for rapid visualization of sweat fingerprints with the naked eye, by combining conventional MMD with the immunoassay technique. In this approach, antibody-conjugated gold nanoparticles (AuNPs) were used to specifically interact with the corresponding antigens in the fingerprint residue. The AuNPs serve as nucleation sites for autometallographic deposition of silver particles from the silver staining solution, generating a dark ridge pattern for visual detection. Using fingerprints inked with human immunoglobulin G (hIgG), we obtained the optimal formulation of iMMD, which was then successfully applied to visualize sweat fingerprints through the detection of two secreted polypeptides, epidermal growth factor and lysozyme. In comparison with conventional MMD, iMMD is faster and provides additional information beyond identification alone. Moreover, iMMD is facile and does not require expensive instruments. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Signal amplification of FISH for automated detection using image cytometry.

    PubMed

    Truong, K; Boenders, J; Maciorowski, Z; Vielh, P; Dutrillaux, B; Malfoy, B; Bourgeois, C A

    1997-05-01

    The purpose of this study was to improve the detection of FISH signals so that spot counting by a fully automated image cytometer would be comparable to counts obtained visually under the microscope. Two systems of spot scoring, visual and automated counting, were investigated in parallel on stimulated human lymphocytes processed with FISH using a biotinylated centromeric probe for chromosome 3. Signal characteristics were first analyzed on images recorded with a charge-coupled device (CCD) camera. The number of spots per nucleus was scored visually on these recorded images and automatically with a DISCOVERY image analyzer. Several fluorochromes, amplification systems, and pretreatments were tested. Our results for both visual and automated scoring show that the tyramide signal amplification (TSA) system gives the best signal amplification if pepsin treatment is applied prior to FISH. The accuracy of automated scoring, however, remained low (58% of nuclei containing two spots) compared with visual scoring because of the high intranuclear variation between FISH spots.
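
    As a generic illustration of automated spot scoring (not the DISCOVERY analyzer's algorithm), the sketch below thresholds a FISH channel inside a nucleus mask and counts connected components; the threshold rule is an assumption.

    # Minimal sketch of FISH spot counting (assumes numpy and scipy).
    import numpy as np
    from scipy import ndimage

    def count_spots(fish_channel, nucleus_mask, threshold=None):
        """Count bright FISH spots inside a boolean nucleus mask."""
        if threshold is None:
            vals = fish_channel[nucleus_mask]
            threshold = vals.mean() + 3 * vals.std()   # simple adaptive cut-off (assumed)
        spots = (fish_channel > threshold) & nucleus_mask
        labeled, n_spots = ndimage.label(spots)        # connected components = candidate spots
        return n_spots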

  7. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  8. Teaching Ideas Notebook

    ERIC Educational Resources Information Center

    Journal of Aerospace Education, 1975

    1975-01-01

    Presents directions for constructing a hair hygrometer for detecting moisture in the air, outlines factors related to human space flight and reaction time, and explains the construction and use of a visual aid for map work. (GS)

  9. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model

    PubMed Central

    Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation. PMID:28248996

  10. Evaluation of stiffness feedback for hard nodule identification on a phantom silicone model.

    PubMed

    Li, Min; Konstantinova, Jelizaveta; Xu, Guanghua; He, Bo; Aminzadeh, Vahid; Xie, Jun; Wurdemann, Helge; Althoefer, Kaspar

    2017-01-01

    Haptic information in robotic surgery can significantly improve clinical outcomes and help detect hard soft-tissue inclusions that indicate potential abnormalities. Visual representation of tissue stiffness information is a cost-effective technique. Meanwhile, direct force feedback, although considerably more expensive than visual representation, is an intuitive method of conveying information regarding tissue stiffness to surgeons. In this study, real-time visual stiffness feedback by sliding indentation palpation is proposed, validated, and compared with force feedback involving human subjects. In an experimental tele-manipulation environment, a dynamically updated color map depicting the stiffness of probed soft tissue is presented via a graphical interface. The force feedback is provided, aided by a master haptic device. The haptic device uses data acquired from an F/T sensor attached to the end-effector of a tele-manipulated robot. Hard nodule detection performance is evaluated for 2 modes (force feedback and visual stiffness feedback) of stiffness feedback on an artificial organ containing buried stiff nodules. From this artificial organ, a virtual-environment tissue model is generated based on sliding indentation measurements. Employing this virtual-environment tissue model, we compare the performance of human participants in distinguishing differently sized hard nodules by force feedback and visual stiffness feedback. Results indicate that the proposed distributed visual representation of tissue stiffness can be used effectively for hard nodule identification. The representation can also be used as a sufficient substitute for force feedback in tissue palpation.
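
    The colour-map representation evaluated in the two records above can be pictured with the short sketch below, which normalises a grid of stiffness readings and renders it as a colour image; the grid size, palette, and random placeholder data are assumptions, not the authors' implementation.

    # Illustrative stiffness colour map (assumes numpy and matplotlib).
    import numpy as np
    import matplotlib.pyplot as plt

    stiffness = np.random.rand(20, 20)                       # placeholder for indentation readings
    norm = (stiffness - stiffness.min()) / (np.ptp(stiffness) + 1e-12)

    plt.imshow(norm, cmap="jet", interpolation="bilinear")   # colour encodes relative stiffness
    plt.colorbar(label="relative stiffness")
    plt.title("Sliding-indentation stiffness map (illustrative)")
    plt.show()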

  11. Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.

    PubMed

    Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno

    2015-05-01

    The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Dynamics of cortico-subcortical cross-modal operations involved in audio-visual object detection in humans.

    PubMed

    Fort, Alexandra; Delpuech, Claude; Pernier, Jacques; Giard, Marie-Hélène

    2002-10-01

    Very recently, a number of neuroimaging studies in humans have begun to investigate the question of how the brain integrates information from different sensory modalities to form unified percepts. Already, intermodal neural processing appears to depend on the modalities of the inputs or on the nature (speech/non-speech) of the information to be combined. Yet, the variety of paradigms, stimuli and techniques used makes it difficult to understand the relationships between the factors operating at the perceptual level and the underlying physiological processes. In a previous experiment, we used event-related potentials to describe the spatio-temporal organization of audio-visual interactions during a bimodal object recognition task. Here we examined the network of cross-modal interactions involved in simple detection of the same objects. The objects were defined either by unimodal auditory or visual features alone, or by the combination of the two features. As expected, subjects detected bimodal stimuli more rapidly than either type of unimodal stimulus. Combined analysis of potentials, scalp current densities and dipole modeling revealed several interaction patterns within the first 200 ms post-stimulus: in occipito-parietal visual areas (45-85 ms), in deep brain structures, possibly the superior colliculus (105-140 ms), and in right temporo-frontal regions (170-185 ms). These interactions differed from those found during object identification in sensory-specific areas and possibly in the superior colliculus, indicating that the neural operations governing multisensory integration depend crucially on the nature of the perceptual processes involved.

  13. Toward unsupervised outbreak detection through visual perception of new patterns

    PubMed Central

    Lévy, Pierre P; Valleron, Alain-Jacques

    2009-01-01

    Background Statistical algorithms are routinely used to detect outbreaks of well-defined syndromes, such as influenza-like illness. These methods cannot be applied to the detection of emerging diseases for which no preexisting information is available. This paper presents a method aimed at facilitating the detection of outbreaks, when there is no a priori knowledge of the clinical presentation of cases. Methods The method uses a visual representation of the symptoms and diseases coded during a patient consultation according to the International Classification of Primary Care 2nd version (ICPC-2). The surveillance data are transformed into color-coded cells, ranging from white to red, reflecting the increasing frequency of observed signs. They are placed in a graphic reference frame mimicking body anatomy. Simple visual observation of color-change patterns over time, concerning a single code or a combination of codes, enables detection in the setting of interest. Results The method is demonstrated through retrospective analyses of two data sets: description of the patients referred to the hospital by their general practitioners (GPs) participating in the French Sentinel Network and description of patients directly consulting at a hospital emergency department (HED). Informative image color-change alert patterns emerged in both cases: the health consequences of the August 2003 heat wave were visualized with GPs' data (but passed unnoticed with conventional surveillance systems), and the flu epidemics, which are routinely detected by standard statistical techniques, were recognized visually with HED data. Conclusion Using human visual pattern-recognition capacities to detect the onset of unexpected health events implies a convenient image representation of epidemiological surveillance and well-trained "epidemiology watchers". Once these two conditions are met, one could imagine that the epidemiology watchers could signal epidemiological alerts, based on "image walls" presenting the local, regional and/or national surveillance patterns, with specialized field epidemiologists assigned to validate the signals detected. PMID:19515246
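
    The white-to-red colour coding described above can be expressed in a few lines; the sketch below is one plausible mapping from an observed count to an RGB cell colour, with the linear scaling chosen for illustration only.

    # Illustrative white-to-red colour coding of symptom frequencies.
    def frequency_to_rgb(count, max_count):
        """Return an (R, G, B) tuple from white (count = 0) to red (count = max_count)."""
        if max_count == 0:
            return (255, 255, 255)
        level = min(count / max_count, 1.0)
        fade = int(round(255 * (1.0 - level)))   # green and blue fade out as counts rise
        return (255, fade, fade)

    # Example: 12 observed cases against a historical maximum of 40 -> a pale red cell
    print(frequency_to_rgb(12, 40))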

  14. Food Catches the Eye but Not for Everyone: A BMI–Contingent Attentional Bias in Rapid Detection of Nutriments

    PubMed Central

    Nummenmaa, Lauri; Hietanen, Jari K.; Calvo, Manuel G.; Hyönä, Jukka

    2011-01-01

    An organism's survival depends crucially on its ability to detect and acquire nutriment. Attention circuits interact with cognitive and motivational systems to facilitate detection of salient sensory events in the environment. Here we show that the human attentional system is tuned to detect food targets among nonfood items. In two visual search experiments participants searched for discrepant food targets embedded in an array of nonfood distracters or vice versa. Detection times were faster when targets were food rather than nonfood items, and the detection advantage for food items showed a significant negative correlation with Body Mass Index (BMI). Also, eye tracking during searching within arrays of visually homogenous food and nonfood targets demonstrated that the BMI-contingent attentional bias was due to rapid capturing of the eyes by food items in individuals with low BMI. However, BMI was not associated with decision times after the discrepant food item was fixated. The results suggest that visual attention is biased towards foods, and that individual differences in energy consumption - as indexed by BMI - are associated with differential attentional effects related to foods. We speculate that such differences may constitute an important risk factor for gaining weight. PMID:21603657

  15. Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model

    NASA Astrophysics Data System (ADS)

    Arnow, Thomas L.; Geisler, Wilson S.

    1996-04-01

    A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion-cell sampling distributions, macaque monkey ganglion-cell receptive field properties, macaque cortical-cell contrast nonlinearities, and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds: one was decision efficiency, the other the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sine-wave gratings as a function of spatial frequency, target size and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for both grating and airplane targets.

  16. Learning-dependent plasticity with and without training in the human brain.

    PubMed

    Zhang, Jiaxiang; Kourtzi, Zoe

    2010-07-27

    Long-term experience through development and evolution and shorter-term training in adulthood have both been suggested to contribute to the optimization of visual functions that mediate our ability to interpret complex scenes. However, the brain plasticity mechanisms that mediate the detection of objects in cluttered scenes remain largely unknown. Here, we combine behavioral and functional MRI (fMRI) measurements to investigate the human-brain mechanisms that mediate our ability to learn statistical regularities and detect targets in clutter. We show two different routes to visual learning in clutter with discrete brain plasticity signatures. Specifically, opportunistic learning of regularities typical in natural contours (i.e., collinearity) can occur simply through frequent exposure, generalize across untrained stimulus features, and shape processing in occipitotemporal regions implicated in the representation of global forms. In contrast, learning to integrate discontinuities (i.e., elements orthogonal to contour paths) requires task-specific training (bootstrap-based learning), is stimulus-dependent, and enhances processing in intraparietal regions implicated in attention-gated learning. We propose that long-term experience with statistical regularities may facilitate opportunistic learning of collinear contours, whereas learning to integrate discontinuities entails bootstrap-based training for the detection of contours in clutter. These findings provide insights in understanding how long-term experience and short-term training interact to shape the optimization of visual recognition processes.

  17. TargetVue: Visual Analysis of Anomalous User Behaviors in Online Communication Systems.

    PubMed

    Cao, Nan; Shi, Conglei; Lin, Sabrina; Lu, Jie; Lin, Yu-Ru; Lin, Ching-Yung

    2016-01-01

    Users with anomalous behaviors in online communication systems (e.g. email and social media platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.

  18. Visual Contrast Sensitivity Improvement by Right Frontal High-Beta Activity Is Mediated by Contrast Gain Mechanisms and Influenced by Fronto-Parietal White Matter Microstructure

    PubMed Central

    Quentin, Romain; Elkin Frankston, Seth; Vernet, Marine; Toba, Monica N.; Bartolomeo, Paolo; Chanes, Lorena; Valero-Cabré, Antoni

    2016-01-01

    Behavioral and electrophysiological studies in humans and non-human primates have correlated frontal high-beta activity with the orienting of endogenous attention and shown the ability of the latter function to modulate visual performance. We here combined rhythmic transcranial magnetic stimulation (TMS) and diffusion imaging to study the relation between frontal oscillatory activity and visual performance, and we associated these phenomena to a specific set of white matter pathways that in humans subtend attentional processes. High-beta rhythmic activity on the right frontal eye field (FEF) was induced with TMS and its causal effects on a contrast sensitivity function were recorded to explore its ability to improve visual detection performance across different stimulus contrast levels. Our results show that frequency-specific activity patterns engaged in the right FEF have the ability to induce a leftward shift of the psychometric function. This increase in visual performance across different levels of stimulus contrast is likely mediated by a contrast gain mechanism. Interestingly, microstructural measures of white matter connectivity suggest a strong implication of right fronto-parietal connectivity linking the FEF and the intraparietal sulcus in propagating high-beta rhythmic signals across brain networks and subtending top-down frontal influences on visual performance. PMID:25899709
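
    The contrast-gain interpretation invoked above is often phrased with the Naka-Rushton contrast-response function, in which a contrast-gain change lowers the semi-saturation contrast c50 and shifts the curve leftward, whereas a response-gain change scales the ceiling. The sketch below illustrates that distinction with arbitrary parameter values; it is not the paper's model fit.

    # Illustrative Naka-Rushton contrast-response function (arbitrary parameters).
    import numpy as np

    def naka_rushton(c, r_max=1.0, c50=0.2, n=2.0):
        """Response to stimulus contrast c in [0, 1]."""
        return r_max * c ** n / (c ** n + c50 ** n)

    contrast = np.linspace(0.01, 1.0, 5)
    baseline = naka_rushton(contrast, c50=0.20)       # without frontal stimulation
    contrast_gain = naka_rushton(contrast, c50=0.12)  # lower c50 -> leftward shift of the curve
    print(np.round(baseline, 3))
    print(np.round(contrast_gain, 3))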

  19. Two-out-of-two color matching based visual cryptography schemes.

    PubMed

    Machizaud, Jacques; Fournel, Thierry

    2012-09-24

    Visual cryptography, which consists of sharing a secret message between transparencies, has been extended to color prints. In this paper, we propose a new visual cryptography scheme based on color matching. The stacked printed media reveal a uniformly colored message decoded by the human visual system. In contrast with previous color visual cryptography schemes, the proposed one makes it possible to share images without pixel expansion and to detect a forgery, as the color of the message is kept secret. In order to correctly print the colors on the media and to increase the security of the scheme, we use spectral models developed for color reproduction that describe printed colors from an optical point of view.
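
    For readers unfamiliar with the share-and-stack idea underlying such schemes, the sketch below implements the classic (2,2) construction with 2x1 pixel expansion for a binary secret. It is only background: the scheme in this record is different (colour matching without pixel expansion), but the security intuition is the same, since either share alone is random while stacking reveals the message.

    # Classic (2,2) visual cryptography with pixel expansion (background illustration).
    import random

    def make_shares(secret_bits):
        share1, share2 = [], []
        for bit in secret_bits:
            pattern = random.choice([(0, 1), (1, 0)])   # random subpixel pair
            share1.append(pattern)
            # identical pairs encode white (0); complementary pairs encode black (1)
            share2.append(pattern if bit == 0 else (1 - pattern[0], 1 - pattern[1]))
        return share1, share2

    def stack(s1, s2):
        # stacking transparencies acts like a pixelwise OR of dark subpixels
        return [(a[0] | b[0], a[1] | b[1]) for a, b in zip(s1, s2)]

    s1, s2 = make_shares([0, 1, 1, 0])
    print(stack(s1, s2))   # white pixels keep one dark subpixel, black pixels get two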

  20. Fluorescence growth of self-polymerized fluorescence polydopamine for ratiometric visual detection of DA.

    PubMed

    Yu, Miao; Lu, Yang; Tan, Zhenjiang

    2017-06-01

    In this work, a novel and facile ratiometric fluorescence probe was prepared for the visual detection of dopamine (DA). In this detection system, red-emitting CdTe@SiO2 (r-QDs@SiO2) was used as the stable core of the probe, and an inverse microemulsion method was applied to synthesize uniform r-QDs@SiO2; this step protects the CdTe from direct contact with human skin. Polydopamine (PDA) acted as the response signal for detecting DA; PDA was synthesized simply by combining polyethyleneimine (PEI) with DA, an approach that is more time-saving and less toxic than other methods. Unlike traditional analytical procedures, the products of this experiment also served as the final analytical species. Under optimal measurement conditions, the dual-emission ratiometric fluorescence probe was used for the detection of DA over a concentration range from 10 μM to 80 μM with a detection limit of 0.12 μM; upon addition of DA, the color of the probe changed from red to green, as observed by the naked eye. In addition, the developed probe was also successfully applied to the detection of DA in human serum samples. This study provides a simple, time-saving and non-toxic approach for the detection of DA without the requirement of complex equipment. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Monkey Visual Short-Term Memory Directly Compared to Humans

    PubMed Central

    Elmore, L. Caitlin; Wright, Anthony A.

    2015-01-01

    Two adult rhesus monkeys were trained to detect which item in an array of memory items had changed, using the same stimuli, viewing times, and delays as used with humans. Although the monkeys were extensively trained, they were less accurate than humans with the same array sizes (2, 4, & 6 items) and with both stimulus types (colored squares, clip art), and showed calculated memory capacities of about one item (or less). Nevertheless, the memory results from both monkeys and humans for both stimulus types were well characterized by the inverse power law of display size. This characterization provides a simple and straightforward summary of a fundamental process of visual short-term memory (how VSTM declines with memory load) that emphasizes species similarities based upon similar functional relationships. By more closely matching monkey testing parameters to those used with humans, the similar functional relationships strengthen the evidence for similar processes underlying monkey and human VSTM. PMID:25706544

  2. Monkeys and Humans Share a Common Computation for Face/Voice Integration

    PubMed Central

    Chandrasekaran, Chandramouli; Lemus, Luis; Trubanova, Andrea; Gondan, Matthias; Ghazanfar, Asif A.

    2011-01-01

    Speech production involves the movement of the mouth and other regions of the face resulting in visual motion cues. These visual cues enhance intelligibility and detection of auditory speech. As such, face-to-face speech is fundamentally a multisensory phenomenon. If speech is fundamentally multisensory, it should be reflected in the evolution of vocal communication: similar behavioral effects should be observed in other primates. Old World monkeys share with humans vocal production biomechanics and communicate face-to-face with vocalizations. It is unknown, however, if they, too, combine faces and voices to enhance their perception of vocalizations. We show that they do: monkeys combine faces and voices in noisy environments to enhance their detection of vocalizations. Their behavior parallels that of humans performing an identical task. We explored what common computational mechanism(s) could explain the pattern of results we observed across species. Standard explanations or models such as the principle of inverse effectiveness and a “race” model failed to account for their behavior patterns. Conversely, a “superposition model”, positing the linear summation of activity patterns in response to visual and auditory components of vocalizations, served as a straightforward but powerful explanatory mechanism for the observed behaviors in both species. As such, it represents a putative homologous mechanism for integrating faces and voices across primates. PMID:21998576
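
    One standard way to rule out a "race" account, of the kind this record says failed here, is Miller's race-model inequality, which bounds the audiovisual reaction-time distribution by the sum of the unimodal ones. The sketch below checks that bound on hypothetical reaction times; the data and distributions are assumptions for illustration only.

    # Illustrative race-model inequality check on hypothetical reaction times.
    import numpy as np

    def ecdf(rts, t_grid):
        rts = np.sort(rts)
        return np.searchsorted(rts, t_grid, side="right") / rts.size

    rt_a = np.random.normal(420, 60, 200)    # auditory-only RTs in ms (hypothetical)
    rt_v = np.random.normal(450, 70, 200)    # visual-only RTs (hypothetical)
    rt_av = np.random.normal(360, 50, 200)   # audiovisual RTs (hypothetical)

    t = np.linspace(200, 700, 101)
    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    violation = ecdf(rt_av, t) - bound       # positive values violate the race bound
    print("max violation of the race bound:", float(violation.max()))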

  3. Biological Motion Preference in Humans at Birth: Role of Dynamic and Configural Properties

    ERIC Educational Resources Information Center

    Bardi, Lara; Regolin, Lucia; Simion, Francesca

    2011-01-01

    The present study addresses the hypothesis that detection of biological motion is an intrinsic capacity of the visual system guided by a non-species-specific predisposition for the pattern of vertebrate movement and investigates the role of global vs. local information in biological motion detection. Two-day-old babies exposed to a biological…

  4. Person detection, tracking and following using stereo camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng; Zhang, Lilian; Wang, Duo; Hu, Xiaoping

    2018-04-01

    Person detection, tracking and following is a key enabling technology for mobile robots in many human-robot interaction applications. In this article, we present a system composed of visual human detection, video tracking and following. The detection is based on YOLO (You Only Look Once), which applies a single convolutional neural network (CNN) to the full image and thus predicts bounding boxes and class probabilities directly in one evaluation. The bounding box then provides the initial person position in the image to initialize and train the KCF (Kernelized Correlation Filter), a video tracker based on a discriminative classifier. Finally, a stereo 3D sparse reconstruction algorithm not only determines the position of the person in the scene but also elegantly resolves the scale ambiguity of the video tracker. Extensive experiments are conducted to demonstrate the effectiveness and robustness of our human detection and tracking system.
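
    The hand-off from detector to tracker described above can be pictured with the short OpenCV sketch below. It assumes opencv-contrib-python; detect_person() stands in for the YOLO stage and is not implemented here, and the stereo reconstruction step is omitted.

    # Minimal sketch: initialize a KCF tracker from a detector's bounding box (OpenCV).
    import cv2

    def track_person(video_path, detect_person):
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        if not ok:
            return
        bbox = detect_person(frame)              # (x, y, w, h) from the detection stage
        tracker = cv2.TrackerKCF_create()        # cv2.legacy.TrackerKCF_create() on newer builds
        tracker.init(frame, bbox)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            found, bbox = tracker.update(frame)  # KCF follows the person frame to frame
            if found:
                x, y, w, h = map(int, bbox)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cap.release()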

  5. Dual Low-Rank Pursuit: Learning Salient Features for Saliency Detection.

    PubMed

    Lang, Congyan; Feng, Jiashi; Feng, Songhe; Wang, Jingdong; Yan, Shuicheng

    2016-06-01

    Saliency detection is an important procedure for machines to understand visual world as humans do. In this paper, we consider a specific saliency detection problem of predicting human eye fixations when they freely view natural images, and propose a novel dual low-rank pursuit (DLRP) method. DLRP learns saliency-aware feature transformations by utilizing available supervision information and constructs discriminative bases for effectively detecting human fixation points under the popular low-rank and sparsity-pursuit framework. Benefiting from the embedded high-level information in the supervised learning process, DLRP is able to predict fixations accurately without performing the expensive object segmentation as in the previous works. Comprehensive experiments clearly show the superiority of the proposed DLRP method over the established state-of-the-art methods. We also empirically demonstrate that DLRP provides stronger generalization performance across different data sets and inherits the advantages of both the bottom-up- and top-down-based saliency detection methods.

  6. Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition

    PubMed Central

    Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua

    2015-01-01

    Humans can easily understand other people's actions through their visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming at automatic action recognition. The model focuses on the dynamic properties of neurons and neural networks in the primary visual cortex (V1) and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cells tuned to different speeds and orientations in time, for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field, caused by lateral connections of spiking neural networks in V1, we propose a surround-suppression operator to further process spatiotemporal information. A visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider a characteristic of the neural code: a mean motion map based on analysis of the spike trains generated by spiking neurons. Experimental evaluation on publicly available action datasets and comparison with state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
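
    To make the filter bank concrete, the sketch below builds one spatio-temporal Gabor kernel whose carrier drifts over time, the kind of filter used to model V1 simple cells tuned to orientation and speed. The parameter values and the drifting-carrier construction are generic assumptions, not the paper's exact filter family.

    # Minimal spatio-temporal Gabor kernel (assumes numpy).
    import numpy as np

    def spatiotemporal_gabor(size=15, frames=7, theta=0.0, speed=1.0,
                             wavelength=6.0, sigma=4.0, sigma_t=2.0):
        """Return a (frames, size, size) kernel drifting at `speed` pixels per frame."""
        half, half_t = size // 2, frames // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        kernel = np.zeros((frames, size, size))
        for ti, t in enumerate(range(-half_t, half_t + 1)):
            # rotate coordinates and shift the carrier over time to encode motion
            x_rot = x * np.cos(theta) + y * np.sin(theta) - speed * t
            envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2) - t ** 2 / (2 * sigma_t ** 2))
            kernel[ti] = envelope * np.cos(2 * np.pi * x_rot / wavelength)
        return kernel

    print(spatiotemporal_gabor().shape)   # (7, 15, 15)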

  7. A qualitative review for wireless health monitoring system

    NASA Astrophysics Data System (ADS)

    Arshad, Atika; Fadzil Ismail, Ahmad; Khan, Sheroz; Zahirul Alam, A. H. M.; Tasnim, Rumana; Samnan Haider, Syed; Shobaki, Mohammed M.; Shahid, Zeeshan

    2013-12-01

    A proliferating interest has been observed over the past years in developing accurate wireless systems to continuously monitor human activity in health care centres. Furthermore, because of the growing elderly population and the inadequate number of qualified staff for nursing homes, there is strong market demand for health care monitoring systems. To detect humans, researchers have developed different methods, including Field Identification techniques, Visual Sensor Networks, radar detection, and e-mobile techniques. This paper presents a broad overview of advances in wireless human detection applications. Inductive links are used for human detection applications where wiring an electronic system is impractical. Keeping these shortcomings in mind, an Inductive Intelligent Sensor (IIS) is proposed as a novel human monitoring system for future implementation. The proposed sensor explores the signature signals of human body movement and size; it is fundamentally based on an inductive loop that senses the presence of a passing human through the resulting change in inductance.

  8. Unethical randomised controlled trial of cervical screening in India: US Freedom of Information Act disclosures.

    PubMed

    Suba, Eric J; Ortega, Robert E; Mutch, David G

    2017-01-01

    A randomised controlled trial conducted in Mumbai, India, compared invasive cervical cancer rates among women offered cervical screening with invasive cervical cancer rates among women offered no-screening. The US Office for Human Research Protections determined the Mumbai trial was unethical because informed consent was not obtained from trial participants. Reportedly, cervical screening in the Mumbai trial reduced invasive cervical cancer mortality rates, but not invasive cervical cancer incidence rates. Documents obtained through the US Freedom of Information Act disclose that the US National Cancer Institute funded the Mumbai trial from 1997 to 2015 to study 'visual inspection/downstaging' tests. However, 'visual inspection/downstaging' tests had been judged unsatisfactory for cancer control before the Mumbai trial began. 'Visual inspection/downstaging' tests failed to reduce invasive cervical cancer incidence rates in Mumbai because 'visual inspection/downstaging' tests, by design, failed to detect preinvasive cervical lesions. None of the 151 538 Mumbai trial participants, in either the intervention or control arms, received cervical screening tests that detected preinvasive cervical lesions. Because of missing/discrepant clinical staging data, it is uncertain whether 'visual inspection/downstaging' tests actually reduced invasive cervical cancer mortality rates in Mumbai. Documents obtained through the US Freedom of Information Act disclose that US National Cancer Institute leaders avoided accountability by making false and misleading statements to Congressional oversight staff. Our findings contradict assurances given to President Barack Obama that regulations pertaining to global health research supported by the US government adequately protect human participants from unethical treatment. US National Cancer Institute leaders should develop policies to compensate victims of unethical global health research. All surviving Mumbai trial participants should finally receive cervical screening tests that detect preinvasive cervical lesions.

  9. Nanometer-Sized Diamond Particle as a Probe for Biolabeling

    PubMed Central

    Chao, Jui-I.; Perevedentseva, Elena; Chung, Pei-Hua; Liu, Kuang-Kai; Cheng, Chih-Yuan; Chang, Chia-Ching; Cheng, Chia-Liang

    2007-01-01

    A novel method is proposed using nanometer-sized diamond particles as detection probes for biolabeling. The advantages of nanodiamond's unique properties were demonstrated in its biocompatibility, nontoxicity, easily detected Raman signal, and intrinsic fluorescence from its natural defects without complicated pretreatments. Carboxylated nanodiamond's (cND's) penetration ability, noncytotoxicity, and visualization of cND-cell interactions are demonstrated on A549 human lung epithelial cells. Protein-targeted cell interaction visualization was demonstrated with cND-lysozyme complex interaction with bacteria Escherichia coli. It is shown that the developed biomolecule-cND complex preserves the original functions of the test protein. The easily detected natural fluorescent and Raman intrinsic signals, penetration ability, and low cytotoxicity of cNDs render them promising agents in multiple medical applications. PMID:17513352

  10. When the Wheels Touch Earth and the Flight is Through, Pilots Find One Eye is Better Than Two?

    NASA Technical Reports Server (NTRS)

    Valimont, Brian; Wise, John A.; Nichols, Troy; Best, Carl; Suddreth, John; Cupero, Frank

    2009-01-01

    This study investigated the impact of near-to-eye displays on both operational and visual performance by employing a human-in-the-loop simulation of straight-in ILS approaches while using a near-to-eye (NTE) display. The approaches were flown in simulated visual and instrument conditions while using either a biocular NTE or a monocular NTE display on either the dominant or non-dominant eye. The pilots' flight performance, visual acuity, and ability to detect unsafe conditions on the runway were tested.

  11. Good vibrations: tactile feedback in support of attention allocation and human-automation coordination in event-driven domains.

    PubMed

    Sklar, A E; Sarter, N B

    1999-12-01

    Observed breakdowns in human-machine communication can be explained, in part, by the nature of current automation feedback, which relies heavily on focal visual attention. Such feedback is not well suited for capturing attention in case of unexpected changes and events or for supporting the parallel processing of large amounts of data in complex domains. As suggested by multiple-resource theory, one possible solution to this problem is to distribute information across various sensory modalities. A simulator study was conducted to compare the effectiveness of visual, tactile, and redundant visual and tactile cues for indicating unexpected changes in the status of an automated cockpit system. Both tactile conditions resulted in higher detection rates for, and faster response times to, uncommanded mode transitions. Tactile feedback did not interfere with, nor was its effectiveness affected by, the performance of concurrent visual tasks. The observed improvement in task-sharing performance indicates that the introduction of tactile feedback is a promising avenue toward better supporting human-machine communication in event-driven, information-rich domains.

  12. Early detection and visualization of human adenovirus serotype 5-viral vectors carrying foot-and-mouth disease virus or luciferase transgenes in cell lines and bovine tissues

    USDA-ARS?s Scientific Manuscript database

    Recombinant replication-defective human adenovirus type 5 (Ad5) vaccines containing capsid-coding regions from foot-and-mouth disease virus (FMDV) have been demonstrated to induce effective immune responses and provide homologous protective immunity against FMDV in cattle. However, basic mechanisms ...

  13. Bio-inspired display of polarization information using selected visual cues

    NASA Astrophysics Data System (ADS)

    Yemelyanov, Konstantin M.; Lin, Shih-Schon; Luis, William Q.; Pugh, Edward N., Jr.; Engheta, Nader

    2003-12-01

    For imaging systems, the polarization of electromagnetic waves carries much potentially useful information about features of the world such as surface shape, material content, the local curvature of objects, and the relative locations of the source, object and imaging system. The imaging system of the human eye, however, is "polarization-blind" and cannot utilize the polarization of light without the aid of an artificial, polarization-sensitive instrument. Therefore, polarization information captured by a man-made polarimetric imaging system must be displayed to a human observer in the form of visual cues that are naturally processed by the human visual system, while essentially preserving the other important non-polarization information (such as spectral and intensity information) in an image. In other words, some form of sensory substitution is needed to represent polarization "signals" without affecting other visual information such as color and brightness. We are investigating several bio-inspired representational methodologies for mapping polarization information into visual cues readily perceived by the human visual system, and determining which mappings are most suitable for specific applications such as object detection, navigation, sensing, scene classification, and surface deformation. The visual cues and strategies we are exploring include coherently moving dots superimposed on the image to represent various ranges of polarization signals, overlaying textures with spatial and/or temporal signatures to segregate regions of the image with differing polarization, modulating the luminance and/or color contrast of scenes according to certain aspects of the polarization values, and fusing polarization images into intensity-only images. In this talk, we present samples of our findings in this area.
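
    One widely used mapping of the kind surveyed above (offered here only as an illustration, not as the authors' final choice) sends the angle of polarization to hue, the degree of linear polarization to saturation, and ordinary intensity to value, so polarization is shown without discarding the intensity image.

    # Illustrative Stokes-to-HSV mapping for displaying polarization (numpy + matplotlib).
    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def polarization_to_rgb(s0, s1, s2):
        """s0, s1, s2: Stokes-parameter images of equal shape."""
        dolp = np.clip(np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + 1e-12), 0, 1)  # degree of linear polarization
        aop = 0.5 * np.arctan2(s2, s1)                                   # angle of polarization
        hue = (aop + np.pi / 2) / np.pi                                  # map [-pi/2, pi/2] to [0, 1]
        value = s0 / (s0.max() + 1e-12)
        return hsv_to_rgb(np.dstack([hue, dolp, value]))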

  14. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    NASA Astrophysics Data System (ADS)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and fixation duration) in an active search task. In addition, our method to improve single-trial detection performance in this adverse scenario is an important step in making brain-computer interfacing technology available for human-computer interaction applications.

  15. Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.

    PubMed

    Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu

    2017-01-01

    Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision that starts by extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model successfully provides a unified account of a vast set of perception experiments, but it fails to account for a set of experiments showing the human visual system's superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target in a black background that differed from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., a white ring with a hole as the target), or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered with by the local features of the hole (e.g., a white ring with a squared hole). These results suggest that the monkey ON visual system has a subsystem that is more sensitive to distinctions in the global feature than in local features. They also provide behavioral constraints for identifying the underlying neural substrates.

  16. Detection and identification of human targets in radar data

    NASA Astrophysics Data System (ADS)

    Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.

    2007-04-01

    Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implanting of seismic sensors in the area being observed because of security or other threats. Radar, however, can operate far away from potential targets and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique and can be used not just to identify targets as human, but also to determine features of the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment is developed, including ground clutter and human and non-human targets, for the testing of spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high-clutter environments are discussed, and the SNR loss inherent to spectrogram-based methods is quantified. An alternative detection and identification method that will be used as a basis for future work is proposed.
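
    The spectrogram step at the heart of this approach is easy to sketch: a short-time Fourier transform of the slow-time radar return exposes the periodic micro-Doppler contributions of the limbs around the steady torso line. The example below, written in Python for illustration even though the record describes a MATLAB environment, uses a crude two-component stand-in for the 12-point human model and assumed radar parameters.

    # Illustrative micro-Doppler spectrogram of a simulated human return (numpy + scipy).
    import numpy as np
    from scipy.signal import spectrogram

    prf = 1000.0                                  # pulse repetition frequency in Hz (assumed)
    t = np.arange(0, 4.0, 1.0 / prf)
    torso = np.exp(1j * 2 * np.pi * 60 * t)                              # steady torso Doppler line
    limbs = 0.4 * np.exp(1j * (2 * np.pi * 60 * t + 8 * np.sin(2 * np.pi * 1.8 * t)))  # swinging limbs
    echo = torso + limbs + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

    f, tau, sxx = spectrogram(echo, fs=prf, nperseg=128, noverlap=96,
                              return_onesided=False)
    print(sxx.shape)   # frequency bins x time frames, inspected for periodic limb returns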

  17. High sensitivity, loop-mediated isothermal amplification combined with colorimetric gold-nanoparticle probes for visual detection of high risk human papillomavirus genotypes 16 and 18.

    PubMed

    Kumvongpin, Ratchanida; Jearanaikool, Patcharee; Wilailuckana, Chotechana; Sae-Ung, Nattaya; Prasongdee, Prinya; Daduang, Sakda; Wongsena, Metee; Boonsiri, Patcharee; Kiatpathomchai, Wansika; Swangvaree, Sukumarn Sanersak; Sandee, Alisa; Daduang, Jureerut

    2016-08-01

    High-risk human papillomavirus (HR-HPV) causes cervical cancer. HPV16 and HPV18 are the most prevalent strains of the virus reported in women worldwide. Loop-mediated isothermal amplification (LAMP) is an alternative method for DNA detection under isothermal conditions; however, it results in a turbid amplified product that is not easily detected by the naked eye. This study aimed to develop an improved technique using gold nanoparticles (AuNPs) attached to a single-stranded DNA probe for the detection of HPV16 and HPV18. Detection of the LAMP product by AuNP color change was compared with detection by visual turbidity. The optimal conditions for this new LAMP-AuNP assay were an incubation time of 20 min and a temperature of 65°C. After LAMP amplification was complete, the products were hybridized with the AuNP probe for 5 min and then detected by the addition of magnesium salt. The color changed from red to blue as a result of aggregation of the AuNP probe under the high ionic strength produced by the added salt. The sensitivity of the LAMP-AuNP assay was up to 10-fold greater than that of the LAMP turbidity assay for both HPV genotypes. The LAMP-AuNP assay showed higher sensitivity and easier visualization than the LAMP turbidity assay for the detection of HPV16 and HPV18. Additionally, the AuNP-HPV16 and AuNP-HPV18 probes were stable for over 1 year. The combination of LAMP and the AuNP-probe colorimetric assay offers a simple, rapid and highly sensitive alternative diagnostic tool for the detection of HPV16 and HPV18 in district hospitals or field studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Peptide-activated gold nanoparticles for selective visual sensing of virus

    NASA Astrophysics Data System (ADS)

    Sajjanar, Basavaraj; Kakodia, Bhuvna; Bisht, Deepika; Saxena, Shikha; Singh, Arvind Kumar; Joshi, Vinay; Tiwari, Ashok Kumar; Kumar, Satish

    2015-05-01

    In this study, we report a peptide-gold nanoparticle (AuNP)-based visual sensor for viruses. Citrate-stabilized AuNPs (20 ± 1.9 nm) were functionalized through a strong sulfur-gold interface using a cysteinylated virus-specific peptide. The peptide-Cys-AuNPs formed complexes with the viruses, which caused them to aggregate. The aggregation can be observed with the naked eye and also with a UV-Vis spectrophotometer as a color change from bright red to purple. The test allows fast and selective detection of specific viruses. Spectroscopic measurements showed a high linear correlation (R2 = 0.995) between the change in the optical density ratio (OD610/OD520) and the virus concentration. The new method was compared with the hemagglutination (HA) test for Newcastle disease virus (NDV). The results indicated that peptide-Cys-AuNP was more sensitive and can visually detect the minimum number of virus particles present in biological samples. The limit of detection for NDV was 0.125 HA units of the virus. The method allows selective detection and quantification of NDV and requires no isolation of viral RNA or PCR experiments. This strategy may be utilized for detection of other important human and animal viral pathogens.

  19. Real-time detection and discrimination of visual perception using electrocorticographic signals

    NASA Astrophysics Data System (ADS)

    Kapeller, C.; Ogawa, H.; Schalk, G.; Kunii, N.; Coon, W. G.; Scharinger, J.; Guger, C.; Kamada, K.

    2018-06-01

    Objective. Several neuroimaging studies have demonstrated that the ventral temporal cortex contains specialized regions that process visual stimuli. This study investigated the spatial and temporal dynamics of electrocorticographic (ECoG) responses to different types and colors of visual stimulation that were presented to four human participants, and demonstrated a real-time decoder that detects and discriminates responses to untrained natural images. Approach. ECoG signals from the participants were recorded while they were shown colored and greyscale versions of seven types of visual stimuli (images of faces, objects, bodies, line drawings, digits, and kanji and hiragana characters), resulting in 14 classes for discrimination (experiment I). Additionally, a real-time system asynchronously classified ECoG responses to faces, kanji and black screens presented via a monitor (experiment II), or to natural scenes (i.e. the face of an experimenter, natural images of faces and kanji, and a mirror) (experiment III). Outcome measures in all experiments included the discrimination performance across types based on broadband γ activity. Main results. Experiment I demonstrated an offline classification accuracy of 72.9% when discriminating among the seven types (without color separation). Further discrimination of grey versus colored images reached an accuracy of 67.1%. Discriminating all colors and types (14 classes) yielded an accuracy of 52.1%. In experiments II and III, the real-time decoder correctly detected 73.7% of responses to face, kanji and black computer stimuli and 74.8% of responses to presented natural scenes. Significance. Seven different types and their color information (either grey or color) could be detected and discriminated using broadband γ activity. Discrimination performance was maximized for combined spatial-temporal information. The discrimination of stimulus color information provided the first ECoG-based evidence for color-related population-level cortical broadband γ responses in humans. Stimulus categories can be detected by their ECoG responses in real time within 500 ms with respect to stimulus onset.

  20. Quality metrics for sensor images

    NASA Technical Reports Server (NTRS)

    Ahumada, AL

    1993-01-01

    Methods are needed for evaluating the quality of augmented visual displays (AVID). Computational quality metrics will help summarize, interpolate, and extrapolate the results of human performance tests with displays. The FLM Vision group at NASA Ames has been developing computational models of visual processing and using them to develop computational metrics for similar problems. For example, display modeling systems use metrics for comparing proposed displays, halftone optimization methods use metrics to evaluate the difference between the halftone and the original, and image compression methods minimize the predicted visibility of compression artifacts. The visual discrimination models take as input two arbitrary images A and B and compute an estimate of the probability that a human observer will report that A is different from B. If A is an image that one desires to display and B is the actual displayed image, such an estimate can be regarded as an image quality metric reflecting how well B approximates A. There are additional complexities associated with the problem of evaluating the quality of radar and IR enhanced displays for AVID tasks. One important problem is the question of whether intruding obstacles are detectable in such displays. Although the discrimination model can handle detection situations by making B the original image A plus the intrusion, this detection model makes the inappropriate assumption that the observer knows where the intrusion will be. Effects of signal uncertainty need to be added to our models. A pilot needs to make decisions rapidly. The models need to predict not just the probability of a correct decision, but the probability of a correct decision by the time the decision needs to be made. That is, the models need to predict latency as well as accuracy. Luce and Green have generated models for auditory detection latencies. Similar models are needed for visual detection. Most image quality models are designed for static imagery. Watson has been developing a general spatial-temporal vision model to optimize video compression techniques. These models need to be adapted and calibrated for AVID applications.
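
    The discrimination models described above take two images A and B and return the probability that an observer reports them as different. The sketch below is a toy stand-in under strong assumptions (a single RMS-difference channel mapped through a cumulative Gaussian with an arbitrary sensitivity constant); the actual NASA models are channelized and psychophysically calibrated.

    ```python
    import numpy as np
    from math import erf, sqrt

    def discrimination_probability(img_a, img_b, sensitivity=0.05):
        """Toy stand-in for a visual discrimination metric.

        Computes the RMS difference between two images, treats it as a
        detectability index d', and maps d' to the probability that an
        unbiased observer reports 'different' via a cumulative Gaussian.
        The 'sensitivity' scale factor is an arbitrary placeholder; real
        models are channelized and calibrated against psychophysics.
        """
        a = np.asarray(img_a, dtype=float)
        b = np.asarray(img_b, dtype=float)
        d_prime = sensitivity * np.sqrt(np.mean((a - b) ** 2))
        # P(correct) for an unbiased yes/no observer: Phi(d'/2).
        return 0.5 * (1.0 + erf(d_prime / (2.0 * sqrt(2.0))))

    # Example: desired image A vs. displayed image B with additive noise.
    rng = np.random.default_rng(0)
    A = rng.uniform(0, 255, size=(64, 64))
    B = A + rng.normal(0, 5, size=A.shape)
    print(f"P(report 'different') = {discrimination_probability(A, B):.3f}")
    ```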

  1. Fast hierarchical knowledge-based approach for human face detection in color images

    NASA Astrophysics Data System (ADS)

    Jiang, Jun; Gong, Jie; Zhang, Guilin; Hu, Ruolan

    2001-09-01

    This paper presents a fast hierarchical knowledge-based approach for automatically detecting multi-scale upright faces in still color images. The approach consists of three levels. At the highest level, skin-like regions are determined by a skin model based on the hue and saturation attributes in HSV color space, as well as the red and green attributes in normalized color space. In level 2, a new eye model is devised to select human face candidates within the segmented skin-like regions. An important feature of the eye model is that it is independent of face scale, so faces of different scales can be found by scanning the image only once, which greatly reduces the computation time of face detection. In level 3, a human face mosaic image model, which closely matches the physical structure of the human face, is applied to judge whether faces are present in the candidate regions. This model includes edge and gray rules. Experimental results show that the approach is robust and fast, with broad application prospects in human-computer interaction, visual telephony, etc.
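
    The first level of such a pipeline, skin-region segmentation from hue and saturation, can be sketched as follows with OpenCV; the threshold values are common illustrative choices, not those of the cited paper.

    ```python
    import cv2
    import numpy as np

    def skin_like_mask(bgr_image):
        """Rough skin-region segmentation in HSV space.

        The hue/saturation/value bounds below are illustrative values,
        not the thresholds used in the cited paper.
        """
        hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
        # OpenCV hue range is 0-179; skin tones cluster at low hues.
        lower = np.array([0, 40, 60], dtype=np.uint8)
        upper = np.array([25, 180, 255], dtype=np.uint8)
        mask = cv2.inRange(hsv, lower, upper)
        # Remove small speckles before passing candidates to later levels.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Usage: mask = skin_like_mask(cv2.imread("photo.jpg"))
    ```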

  2. Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury

    NASA Astrophysics Data System (ADS)

    Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.

    2008-02-01

    Macular scotomas, affecting visual functioning, characterize many eye and neurological diseases like AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled, and their effects were evaluated on spatial contrast sensitivity and a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display, and were stabilized on the retina using a mono Purkinje Eye-Tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple conjunctive visual search display of size (in visual angle), contrast and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (R.T.) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect was dependent on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection time was slowed more in the complex-background search situation than in the simple-background situation. Detection speed was dependent on scotoma size and size of stimulus. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background search situation than in the complex-background situation. Both stimulus aiming R.T. and accuracy (precision targeting) were impaired as a function of scotoma size and size of stimulus. The data can be explained by models distinguishing between saliency-based, parallel and serial search processes, guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.

  3. Early Detection of Clinically Significant Prostate Cancer Using Ultrasonic Acoustic Radiation Force Impulse (ARFI) Imaging

    DTIC Science & Technology

    2017-10-01

    Excerpt: the project uses the Image-Guided Surgery Toolkit (IGSTK) to enable rapid 3D visualization and image volume interpretation, followed by automated transducer positioning in a user-selected image plane.

  4. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    PubMed

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
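
    Noise-based classification images are conventionally estimated as the difference between the average noise field on 'face' trials and on 'non-face' trials; applying the idea to BOLD amplitudes amounts to replacing the behavioral response with a split of the neural response. A minimal sketch under that standard formulation is shown below (the synthetic demo data are illustrative only).

    ```python
    import numpy as np

    def classification_image(noise_fields, responses):
        """Classical noise-based classification image.

        noise_fields : array of shape (n_trials, H, W), the white-noise
                       stimuli (or noise components) shown on each trial.
        responses    : array of shape (n_trials,), True when the observer
                       (or a median-split neural response) signalled 'face'.
        Returns the difference of mean noise fields, the standard estimate
        of the internal template.
        """
        noise_fields = np.asarray(noise_fields, dtype=float)
        responses = np.asarray(responses).astype(bool)
        return (noise_fields[responses].mean(axis=0)
                - noise_fields[~responses].mean(axis=0))

    # Synthetic demo: trials whose noise correlates with a hidden template
    # are more likely to be called 'face'; the estimate recovers the template.
    rng = np.random.default_rng(1)
    template = np.zeros((16, 16)); template[5:11, 5:11] = 1.0
    noise = rng.normal(size=(2000, 16, 16))
    evidence = (noise * template).sum(axis=(1, 2))
    responses = (evidence + rng.normal(scale=5.0, size=2000)) > 0
    ci = classification_image(noise, responses)
    print("correlation with template:",
          np.corrcoef(ci.ravel(), template.ravel())[0, 1])
    ```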

  5. A Rapid and Specific Assay for the Detection of MERS-CoV

    PubMed Central

    Huang, Pei; Wang, Hualei; Cao, Zengguo; Jin, Hongli; Chi, Hang; Zhao, Jincun; Yu, Beibei; Yan, Feihu; Hu, Xingxing; Wu, Fangfang; Jiao, Cuicui; Hou, Pengfei; Xu, Shengnan; Zhao, Yongkun; Feng, Na; Wang, Jianzhong; Sun, Weiyang; Wang, Tiecheng; Gao, Yuwei; Yang, Songtao; Xia, Xianzhu

    2018-01-01

    Middle East respiratory syndrome coronavirus (MERS-CoV) is a novel human coronavirus that can cause human respiratory disease. The development of a detection method for this virus that can lead to rapid and accurate diagnosis would be significant. In this study, we established a nucleic acid visualization technique that combines the reverse transcription loop-mediated isothermal amplification technique and a vertical flow visualization strip (RT-LAMP-VF) to detect the N gene of MERS-CoV. The RT-LAMP-VF assay was performed in a constant temperature water bath for 30 min, and the result was visible to the naked eye within 5 min. The RT-LAMP-VF assay was capable of detecting 2 × 10¹ copies/μl of synthesized RNA transcript and 1 × 10¹ copies/μl of MERS-CoV RNA. The method exhibits no cross-reactivity with multiple CoVs, including SARS-related CoV (SARSr-CoV), HKU4, HKU1, OC43 and 229E, and thus has high specificity. Compared to the real-time RT-PCR (rRT-PCR) method recommended by the World Health Organization (WHO), the RT-LAMP-VF assay is easy to handle, does not require expensive equipment and can rapidly complete detection within 35 min. PMID:29896174

  6. Visualizing histopathologic deep learning classification and anomaly detection using nonlinear feature space dimensionality reduction.

    PubMed

    Faust, Kevin; Xie, Quin; Han, Dominick; Goyle, Kartikay; Volynskaya, Zoya; Djuric, Ugljesa; Diamandis, Phedias

    2018-05-16

    There is growing interest in utilizing artificial intelligence, and particularly deep learning, for computer vision in histopathology. While accumulating studies highlight expert-level performance of convolutional neural networks (CNNs) on focused classification tasks, most studies rely on probability distribution scores with empirically defined cutoff values based on post-hoc analysis. More generalizable tools that allow humans to visualize histology-based deep learning inferences and decision making are scarce. Here, we leverage t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce dimensionality and depict how CNNs organize histomorphologic information. Unique to our workflow, we develop a quantitative and transparent approach to visualizing classification decisions prior to softmax compression. By discretizing the relationships between classes on the t-SNE plot, we show we can super-impose randomly sampled regions of test images and use their distribution to render statistically-driven classifications. Therefore, in addition to providing intuitive outputs for human review, this visual approach can carry out automated and objective multi-class classifications similar to more traditional and less-transparent categorical probability distribution scores. Importantly, this novel classification approach is driven by a priori statistically defined cutoffs. It therefore serves as a generalizable classification and anomaly detection tool less reliant on post-hoc tuning. Routine incorporation of this convenient approach for quantitative visualization and error reduction in histopathology aims to accelerate early adoption of CNNs into generalized real-world applications where unanticipated and previously untrained classes are often encountered.
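
    A reduced version of this workflow, projecting pre-softmax feature vectors with t-SNE and classifying sampled test patches by where they land among the training classes, might look as follows; the feature arrays are placeholders and the nearest-neighbour vote stands in for the paper's statistically defined cutoffs.

    ```python
    import numpy as np
    from sklearn.manifold import TSNE
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder data standing in for pre-softmax CNN feature vectors of
    # labelled training patches and of patches sampled from a test slide.
    rng = np.random.default_rng(0)
    train_features = rng.normal(size=(300, 512))
    train_labels = rng.integers(0, 3, size=300)
    test_patch_features = rng.normal(size=(40, 512))

    # Embed training and test patches jointly so they share one 2D map.
    embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(
        np.vstack([train_features, test_patch_features]))
    train_xy, test_xy = embedding[:300], embedding[300:]

    # Discretize the map by assigning each sampled test patch to its nearest
    # training neighbourhood, then aggregate the patch-level votes.
    votes = KNeighborsClassifier(n_neighbors=15).fit(train_xy, train_labels).predict(test_xy)
    counts = np.bincount(votes, minlength=3)
    print("per-class patch counts:", counts, "-> slide call:", counts.argmax())
    ```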

  7. Interoceptive signals impact visual processing: Cardiac modulation of visual body perception.

    PubMed

    Ronchi, Roberta; Bernasconi, Fosco; Pfeiffer, Christian; Bello-Ruiz, Javier; Kaliuzhna, Mariia; Blanke, Olaf

    2017-09-01

    Multisensory perception research has largely focused on exteroceptive signals, but recent evidence has revealed the integration of interoceptive signals with exteroceptive information. Such research revealed that heartbeat signals affect sensory (e.g., visual) processing: however, it is unknown how they impact the perception of body images. Here we linked our participants' heartbeat to visual stimuli and investigated the spatio-temporal brain dynamics of cardio-visual stimulation on the processing of human body images. We recorded visual evoked potentials with 64-channel electroencephalography while showing a body or a scrambled-body (control) that appeared at the frequency of the on-line recorded participants' heartbeat or not (not-synchronous, control). Extending earlier studies, we found a body-independent effect, with cardiac signals enhancing visual processing during two time periods (77-130 ms and 145-246 ms). Within the second (later) time-window we detected a second effect characterised by enhanced activity in parietal, temporo-occipital, inferior frontal, and right basal ganglia-insula regions, but only when non-scrambled body images were flashed synchronously with the heartbeat (208-224 ms). In conclusion, our results highlight the role of interoceptive information for the visual processing of human body pictures within a network integrating cardio-visual signals of relevance for perceptual and cognitive aspects of visual body processing. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    PubMed

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.

  9. Fluorescent probes sensitive to changes in the cholesterol-to-phospholipids molar ratio in human platelet membranes during atherosclerosis

    NASA Astrophysics Data System (ADS)

    Posokhov, Yevgen

    2016-09-01

    Environment-sensitive fluorescent probes were used for the spectroscopic visualization of pathological changes in human platelet membranes during cerebral atherosclerosis. It has been estimated that the ratiometric probes 2-(2‧-hydroxyphenyl)-5-phenyl-1,3,4-oxadiazole and 2-phenyl-phenanthr[9,10]oxazole can detect changes in the cholesterol-to-phospholipids molar ratio in human platelet membranes during the disease.

  10. Developing and Evaluating a Target-Background Similarity Metric for Camouflage Detection

    PubMed Central

    Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong

    2014-01-01

    Background Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with the psychophysical measures, and it could potentially serve as a camouflage assessment tool. Methodology In this study, we quantify the relationship between camouflage similarity indexes and psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, and present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms and analyze the strengths and weaknesses of these algorithms. Significance The experimental data demonstrate the effectiveness of the approach, and the correlation coefficient of the UIQI was higher than those of the other methods. This approach was highly correlated with the human target-searching results. It also showed that this method is an objective and effective camouflage performance evaluation method because it considers the human visual system and image structure, which makes it consistent with the subjective evaluation results. PMID:24498310
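
    The Universal Image Quality Index referenced above combines correlation, luminance similarity and contrast similarity as Q = 4·σxy·x̄·ȳ / ((σx² + σy²)(x̄² + ȳ²)). A single-window sketch is shown below; the published index averages this quantity over local sliding windows, so this is a simplification of the metric used in the study.

    ```python
    import numpy as np

    def uiqi(x, y):
        """Universal Image Quality Index (Wang & Bovik), computed globally.

        The published index averages this quantity over local sliding
        windows; a single global window is used here for brevity.
        """
        x = np.asarray(x, dtype=float).ravel()
        y = np.asarray(y, dtype=float).ravel()
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))

    # Example: compare a target patch against the background it must blend into.
    rng = np.random.default_rng(0)
    background = rng.uniform(50, 200, size=(64, 64))
    target = background + rng.normal(0, 10, size=background.shape)  # well camouflaged
    print(f"UIQI (target vs. background): {uiqi(target, background):.3f}")
    ```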

  11. Performance characteristics of a visual-search human-model observer with sparse PET image data

    NASA Astrophysics Data System (ADS)

    Gifford, Howard C.

    2012-02-01

    As predictors of human performance in detection-localization tasks, statistical model observers can have problems with tasks that are primarily limited by target contrast or structural noise. Model observers with a visual-search (VS) framework may provide a more reliable alternative. This framework provides for an initial holistic search that identifies suspicious locations for analysis by a statistical observer. A basic VS observer for emission tomography focuses on hot "blobs" in an image and uses a channelized nonprewhitening (CNPW) observer for analysis. In [1], we investigated this model for a contrast-limited task with SPECT images; herein, a statistical-noise-limited task involving PET images is considered. An LROC study used 2D image slices with liver, lung and soft-tissue tumors. Human and model observers read the images in coronal, sagittal and transverse display formats. The study thus measured the detectability of tumors in a given organ as a function of display format. The model observers were applied under several task variants that tested their response to structural noise both at the organ boundaries alone and over the organs as a whole. As measured by correlation with the human data, the VS observer outperformed the CNPW scanning observer.

  12. Intact mass detection, interpretation, and visualization to automate Top-Down proteomics on a large scale

    PubMed Central

    Durbin, Kenneth R.; Tran, John C.; Zamdborg, Leonid; Sweet, Steve M. M.; Catherman, Adam D.; Lee, Ji Eun; Li, Mingxi; Kellie, John F.; Kelleher, Neil L.

    2011-01-01

    Applying high-throughput Top-Down MS to an entire proteome requires a yet-to-be-established model for data processing. Since Top-Down is becoming possible on a large scale, we report our latest software pipeline dedicated to capturing the full value of intact protein data in automated fashion. For intact mass detection, we combine algorithms for processing MS1 data from both isotopically resolved (FT) and charge-state resolved (ion trap) LC-MS data, which are then linked to their fragment ions for database searching using ProSight. Automated determination of human keratin and tubulin isoforms is one result. Optimized for the intricacies of whole proteins, new software modules visualize proteome-scale data based on the LC retention time and intensity of intact masses and enable selective detection of PTMs to automatically screen for acetylation, phosphorylation, and methylation. Software functionality was demonstrated using comparative LC-MS data from yeast strains in addition to human cells undergoing chemical stress. We further these advances as a key aspect of realizing Top-Down MS on a proteomic scale. PMID:20848673

  13. Stochastic resonance in attention control

    NASA Astrophysics Data System (ADS)

    Kitajo, K.; Yamanaka, K.; Ward, L. M.; Yamamoto, Y.

    2006-12-01

    We investigated the beneficial role of noise in a human higher brain function, namely visual attention control. We asked subjects to detect a weak gray-level target inside a marker box either in the left or the right visual field. Signal detection performance was optimized by presenting a low level of randomly flickering gray-level noise between and outside the two possible target locations. Further, we found that an increase in eye movement (saccade) rate helped to compensate for the usual deterioration in detection performance at higher noise levels. To our knowledge, this is the first experimental evidence that noise can optimize a higher brain function which involves distinct brain regions above the level of primary sensory systems -- switching behavior between multi-stable attention states -- via the mechanism of stochastic resonance.
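
    The stochastic-resonance effect itself (a subthreshold signal becoming detectable at an intermediate noise level) can be illustrated with a toy threshold detector; the simulation below is purely illustrative and does not reproduce the authors' attention paradigm.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    threshold = 1.0          # the detector only "fires" above this level
    signal = 0.8             # weak, subthreshold target intensity
    n_trials = 20000

    print("noise_sd  hits  false_alarms  hits - FA")
    for noise_sd in [0.0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2]:
        noise = rng.normal(0.0, noise_sd, size=n_trials)
        hits = np.mean(signal + noise > threshold)   # target-present trials
        false_alarms = np.mean(noise > threshold)    # target-absent trials
        print(f"{noise_sd:7.2f}  {hits:5.3f}  {false_alarms:12.3f}  {hits - false_alarms:9.3f}")
    # The detection advantage (hits minus false alarms) is zero without noise,
    # peaks at an intermediate noise level, and collapses as noise grows large:
    # the signature inverted-U of stochastic resonance.
    ```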

  14. Social Class and the Motivational Relevance of Other Human Beings: Evidence From Visual Attention.

    PubMed

    Dietze, Pia; Knowles, Eric D

    2016-11-01

    We theorize that people's social class affects their appraisals of others' motivational relevance-the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In Study 1, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. In Studies 2a and 2b, participants' eye movements were tracked while they viewed street scenes; higher class was associated with reduced attention to people in the images. In Study 3, a change-detection procedure assessed the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed. © The Author(s) 2016.

  15. Neocortical Rebound Depolarization Enhances Visual Perception

    PubMed Central

    Funayama, Kenta; Ban, Hiroshi; Chan, Allen W.; Matsuki, Norio; Murphy, Timothy H.; Ikegaya, Yuji

    2015-01-01

    Animals are constantly exposed to the time-varying visual world. Because visual perception is modulated by immediately prior visual experience, visual cortical neurons may register recent visual history into a specific form of offline activity and link it to later visual input. To examine how preceding visual inputs interact with upcoming information at the single neuron level, we designed a simple stimulation protocol in which a brief, orientated flashing stimulus was subsequently coupled to visual stimuli with identical or different features. Using in vivo whole-cell patch-clamp recording and functional two-photon calcium imaging from the primary visual cortex (V1) of awake mice, we discovered that a flash of sinusoidal grating per se induces an early, transient activation as well as a long-delayed reactivation in V1 neurons. This late response, which started hundreds of milliseconds after the flash and persisted for approximately 2 s, was also observed in human V1 electroencephalogram. When another drifting grating stimulus arrived during the late response, the V1 neurons exhibited a sublinear, but apparently increased response, especially to the same grating orientation. In behavioral tests of mice and humans, the flashing stimulation enhanced the detection power of the identically orientated visual stimulation only when the second stimulation was presented during the time window of the late response. Therefore, V1 late responses likely provide a neural basis for admixing temporally separated stimuli and extracting identical features in time-varying visual environments. PMID:26274866

  16. Visual Short-Term Memory Compared in Rhesus Monkeys and Humans

    PubMed Central

    Elmore, L. Caitlin; Ma, Wei Ji; Magnotti, John F.; Leising, Kenneth J.; Passaro, Antony D.; Katz, Jeffrey S.; Wright, Anthony A.

    2011-01-01

    Summary Change detection is a popular task to study visual short-term memory (STM) in humans [1–4]. Much of this work suggests that STM has a fixed capacity of 4 ± 1 items [1–6]. Here we report the first comparison of change detection memory between humans and a species closely related to humans, the rhesus monkey. Monkeys and humans were tested in nearly identical procedures with overlapping display sizes. Although the monkeys’ STM was well fit by a 1-item fixed-capacity memory model, other monkey memory tests with 4-item lists have shown performance impossible to obtain with a 1-item capacity [7]. We suggest that this contradiction can be resolved using a continuous-resource approach more closely tied to the neural basis of memory [8,9]. In this view, items have a noisy memory representation whose noise level depends on display size due to distributed allocation of a continuous resource. In accord with this theory, we show that performance depends on the perceptual distance between items before and after the change, and d′ depends on display size in an approximately power law fashion. Our results open the door to combining the power of psychophysics, computation, and physiology to better understand the neural basis of STM. PMID:21596568
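
    The continuous-resource account implies that d′ declines with display size roughly as a power law, d′ ≈ a·N^(−b). A small sketch fitting that form in log-log coordinates is given below; the d′ values are made-up placeholders, not data from the study.

    ```python
    import numpy as np

    # Hypothetical d' values for change detection at several display sizes.
    # These numbers are illustrative, not data from the monkey/human study.
    display_size = np.array([2, 4, 6, 8])
    d_prime = np.array([2.6, 1.5, 1.1, 0.9])

    # Fit d' = a * N**(-b) by linear regression in log-log coordinates.
    slope, intercept = np.polyfit(np.log(display_size), np.log(d_prime), 1)
    a, b = np.exp(intercept), -slope
    print(f"d' ~= {a:.2f} * N^(-{b:.2f})")

    # A fixed-capacity (K-item) account instead predicts accuracy limited by
    # min(K, N)/N; comparing fits of the two forms is the key contrast here.
    ```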

  17. Calcification detection of abdominal aorta in CT images and 3D visualization in VR devices.

    PubMed

    Garcia-Berna, Jose A; Sanchez-Gomez, Juan M; Hermanns, Judith; Garcia-Mateos, Gines; Fernandez-Aleman, Jose L

    2016-08-01

    Automatic calcification detection in the abdominal aorta consists of a set of computer vision techniques to quantify the amount of calcium found around this artery. With that information, it is possible to perform statistical studies that relate vascular diseases to the presence of calcium in these structures. To facilitate detection in CT images, a contrast agent is usually injected into the circulatory system of the patients to distinguish the aorta from other body tissues and organs. This contrast increases the absorption of X-rays by human blood, making the measurement of calcifications easier. Based on this idea, a new system capable of detecting and tracking the aorta has been developed, together with an estimation of the calcium found surrounding it. In addition, the system is complemented with a 3D visualization mode for the image set, designed for the new generation of immersive VR devices.
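
    Once the aorta has been tracked, peri-aortic calcium is commonly quantified by thresholding attenuation values within a region of interest. The sketch below uses the conventional 130 HU calcium cutoff as an assumed parameter; the cited system's actual threshold and scoring scheme are not specified here.

    ```python
    import numpy as np

    def calcification_volume(ct_hu, aorta_mask, spacing_mm, hu_threshold=130.0):
        """Estimate calcified volume around the aorta from a CT volume.

        ct_hu      : 3D array of attenuation values in Hounsfield units.
        aorta_mask : boolean 3D array marking the peri-aortic region of
                     interest produced by the tracking stage.
        spacing_mm : (dz, dy, dx) voxel spacing in millimetres.
        The 130 HU threshold is the conventional calcium cutoff and is an
        assumption here, not a value taken from the cited paper.
        """
        calcified = (ct_hu >= hu_threshold) & aorta_mask
        voxel_volume_mm3 = float(np.prod(spacing_mm))
        return calcified.sum() * voxel_volume_mm3

    # Usage sketch with synthetic data:
    rng = np.random.default_rng(0)
    volume = rng.normal(40, 20, size=(20, 64, 64))   # soft-tissue background
    volume[10, 30:33, 30:33] = 400                   # a small calcified plaque
    mask = np.zeros_like(volume, dtype=bool); mask[:, 20:44, 20:44] = True
    print(f"calcified volume: {calcification_volume(volume, mask, (2.0, 0.7, 0.7)):.1f} mm^3")
    ```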

  18. Integrating conflict detection and attentional control mechanisms.

    PubMed

    Walsh, Bong J; Buonocore, Michael H; Carter, Cameron S; Mangun, George R

    2011-09-01

    Human behavior involves monitoring and adjusting performance to meet established goals. Performance-monitoring systems that act by detecting conflict in stimulus and response processing have been hypothesized to influence cortical control systems to adjust and improve performance. Here we used fMRI to investigate the neural mechanisms of conflict monitoring and resolution during voluntary spatial attention. We tested the hypothesis that the ACC would be sensitive to conflict during attentional orienting and influence activity in the frontoparietal attentional control network that selectively modulates visual information processing. We found that activity in ACC increased monotonically with increasing attentional conflict. This increased conflict detection activity was correlated with both increased activity in the attentional control network and improved speed and accuracy from one trial to the next. These results establish a long hypothesized interaction between conflict detection systems and neural systems supporting voluntary control of visual attention.

  19. Basic multisensory functions can be acquired after congenital visual pattern deprivation in humans.

    PubMed

    Putzar, Lisa; Gondan, Matthias; Röder, Brigitte

    2012-01-01

    People treated for bilateral congenital cataracts offer a model to study the influence of visual deprivation in early infancy on visual and multisensory development. We investigated cross-modal integration capabilities in cataract patients using a simple detection task that provided redundant information to two different senses. In both patients and controls, redundancy gains were consistent with coactivation models, indicating an integrated processing of modality-specific information. This finding is in contrast with recent studies showing impaired higher-level multisensory interactions in cataract patients. The present results suggest that basic cross-modal integrative processes for simple short stimuli do not depend on visual and/or crossmodal input since birth.

  20. Eccentricity mapping of the human visual cortex to evaluate temporal dynamics of functional T1ρ mapping.

    PubMed

    Heo, Hye-Young; Wemmie, John A; Johnson, Casey P; Thedens, Daniel R; Magnotta, Vincent A

    2015-07-01

    Recent experiments suggest that T1 relaxation in the rotating frame (T1ρ) is sensitive to metabolism and can detect localized activity-dependent changes in the human visual cortex. Current functional magnetic resonance imaging (fMRI) methods have poor temporal resolution due to delays in the hemodynamic response resulting from neurovascular coupling. Because T1ρ is sensitive to factors that can be derived from tissue metabolism, such as pH and glucose concentration via proton exchange, we hypothesized that activity-evoked T1ρ changes in visual cortex may occur before the hemodynamic response measured by blood oxygenation level-dependent (BOLD) and arterial spin labeling (ASL) contrast. To test this hypothesis, functional imaging was performed using T1ρ, BOLD, and ASL in human participants viewing an expanding ring stimulus. We calculated eccentricity phase maps across the occipital cortex for each functional signal and compared the temporal dynamics of T1ρ versus BOLD and ASL. The results suggest that T1ρ changes precede changes in the two blood flow-dependent measures. These observations indicate that T1ρ detects a signal distinct from traditional fMRI contrast methods. In addition, these findings support previous evidence that T1ρ is sensitive to factors other than blood flow, volume, or oxygenation. Furthermore, they suggest that tissue metabolism may be driving activity-evoked T1ρ changes.

  1. Brain processing of visual information during fast eye movements maintains motor performance.

    PubMed

    Panouillères, Muriel; Gaveau, Valérie; Socasau, Camille; Urquizar, Christian; Pélisson, Denis

    2013-01-01

    Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.

  2. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  3. Improving visual perception through neurofeedback

    PubMed Central

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  4. Diversification of visual media retrieval results using saliency detection

    NASA Astrophysics Data System (ADS)

    Muratov, Oleg; Boato, Giulia; De Natale, Franesco G. B.

    2013-03-01

    Diversification of retrieval results allows for better and faster search. Recently, different methods have been proposed for diversifying image retrieval results, mainly utilizing text information and techniques imported from the natural language processing domain. However, images contain visual information that is impossible to describe in text, so the use of visual features is inevitable. Visual saliency is information about the main object of an image that is implicitly included by humans when creating visual content. For this reason, it is natural to exploit this information for the task of diversification of the content. In this work we study whether visual saliency can be used for the task of diversification and propose a method for re-ranking image retrieval results using saliency. The evaluation has shown that the use of saliency information results in higher diversity of retrieval results.

  5. Spatial frequency characteristics at image decision-point locations for observers with different radiological backgrounds in lung nodule detection

    NASA Astrophysics Data System (ADS)

    Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim

    2009-02-01

    Aim: The goal of the study is to determine the spatial frequency characteristics at image locations of observers' overt and covert decisions, and to find out whether there are any similarities within observer groups sharing the same level of radiological experience or the same accuracy score. Background: The radiological task is described as a visual search and decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by different spatial frequency ranges and orientations. Recent studies have shown that particular physical properties of local and global image-based elements are correlated with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception inspired wavelet applications in vision research because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet approach to the analysis of a set of postero-anterior chest radiographs has been used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology has been applied to track eye movements over the image, and particular ROIs related to the observers' fixation clusters have been analysed in the wavelet spaces defined by Daubechies functions. Results: Significant differences were found between the spatial frequency characteristics at the locations of different decisions.
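
    The ROI analysis in a Daubechies basis can be sketched with PyWavelets by decomposing a fixation-centred patch and summarising the energy per subband; the patch size, wavelet order and decomposition depth below are assumptions, not parameters from the cited study.

    ```python
    import numpy as np
    import pywt

    def subband_energies(roi, wavelet="db4", levels=3):
        """Energy per spatial-frequency subband of a fixation-centred ROI.

        Uses a 2D discrete wavelet transform with a Daubechies basis; the
        specific wavelet order and decomposition depth are illustrative.
        """
        coeffs = pywt.wavedec2(roi, wavelet=wavelet, level=levels)
        energies = {"approximation": float(np.sum(coeffs[0] ** 2))}
        for level, (cH, cV, cD) in enumerate(coeffs[1:], start=1):
            energies[f"detail_level_{level}"] = float(
                np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2))
        return energies

    # Example: a 64x64 patch centred on a reported fixation location.
    rng = np.random.default_rng(0)
    roi = rng.normal(size=(64, 64))
    for band, energy in subband_energies(roi).items():
        print(band, round(energy, 1))
    ```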

  6. Peripheral detection and resolution with mid-/long-wavelength and short-wavelength sensitive cone systems.

    PubMed

    Zhu, Hai-Feng; Zele, Andrew J; Suheimat, Marwan; Lambert, Andrew J; Atchison, David A

    2016-08-01

    This study compared neural resolution and detection limits of the human mid-/long-wavelength and short-wavelength cone systems with anatomical estimates of photoreceptor and retinal ganglion cell spacings and sizes. Detection and resolution limits were measured from central fixation out to 35° eccentricity across the horizontal visual field using a modified Lotmar interferometer. The mid-/long-wavelength cone system was studied using a green (550 nm) test stimulus to which S-cones have low sensitivity. To bias resolution and detection to the short-wavelength cone system, a blue (450 nm) test stimulus was presented against a bright yellow background that desensitized the M- and L-cones. Participants were three trichromatic males with normal visual functions. With green stimuli, resolution showed a steep central-peripheral gradient that was similar between participants, whereas the detection gradient was shallower and patterns were different between participants. Detection and resolution with blue stimuli were poorer than for green stimuli. The detection of blue stimuli was superior to resolution across the horizontal visual field and the patterns were different between participants. The mid-/long-wavelength cone system's resolution is limited by midget ganglion cell spacing and its detection is limited by the size of the M- and L-cone photoreceptors, consistent with previous observations. We found that no such simple relationships occur for the short-wavelength cone system between resolution and the bistratified ganglion cell spacing, nor between detection and the S-cone photoreceptor sizes.

  7. Why humans do not make good vampires. Testing the ability of humans to detect true blood.

    PubMed

    De Smet, Delphine; Van Speybroeck, Linda; Verplaetse, Jan

    2012-01-01

    Research indicating the effects of real blood or of its iconic representation on human behaviour has thus far concentrated on phobia and aggressiveness. Little is known about other responses or, more fundamentally, about the biological basis of all such responses. In this study, it is examined whether or not humans are able to detect real blood. Human subjects (n = 89) were asked to distinguish different kinds of blood from red control fluids under varying visual and choice conditions. Relevant differences between subjects were tested for through written questionnaires, including standardized scales for disgust sensitivity (DS-R) and blood phobia (MBPI) and performance on two clinical olfactory tests. Analysis of variance shows that humans are excellent detectors of animal blood (in casu pig blood), whereas the ability to detect human blood is much less developed. Surprisingly, differences in olfactory capacities and personal experience with blood have no effect on blood detection, while blood fear lowers and disgust sensitivity ameliorates this performance. This study allows further mapping of the exact role of disgust sensitivity in human behaviour, as well as a deliberate choice of materials in blood-related experiments. It is also important for further research on the behavioural and psychological impact that 'blood' exerts on humans.

  8. Evidence for Feature and Location Learning in Human Visual Perceptual Learning

    ERIC Educational Resources Information Center

    Moreno-Fernández, María Manuela; Salleh, Nurizzati Mohd; Prados, Jose

    2015-01-01

    In Experiment 1, human participants were pre-exposed to two similar checkerboard grids (AX and X) in alternation, and to a third grid (BX) in a separate block of trials. In a subsequent test, the unique feature A was better detected than the feature B when they were presented in the same location during the pre-exposure and test phases. However,…

  9. Hydride Generation for Headspace Solid-Phase Extraction with CdTe Quantum Dots Immobilized on Paper for Sensitive Visual Detection of Selenium.

    PubMed

    Huang, Ke; Xu, Kailai; Zhu, Wei; Yang, Lu; Hou, Xiandeng; Zheng, Chengbin

    2016-01-05

    A low-cost, simple, and highly selective analytical method was developed for sensitive visual detection of selenium in human urine both outdoors and at home, by coupling hydride generation with headspace solid-phase extraction using quantum dots (QDs) immobilized on paper. The visible fluorescence from the CdTe QDs immobilized on paper was quenched by H2Se from the hydride generation reaction and headspace solid-phase extraction. The potential mechanism was investigated by using X-ray diffraction (XRD) and X-ray photoelectron spectroscopy (XPS) as well as density functional theory (DFT). Potential interferences from coexisting ions, particularly Ag⁺, Cu²⁺, and Zn²⁺, were eliminated. The selectivity was significantly increased because the selenium hydride was effectively separated from sample matrices by hydride generation. Moreover, due to the high sampling efficiency of hydride generation and headspace solid-phase extraction, the sensitivity and the limit of detection (LOD) were significantly improved compared to conventional methods. A LOD of 0.1 μg L⁻¹ and a relative standard deviation (RSD, n = 7) of 2.4% at a concentration of 20 μg L⁻¹ were obtained when using a commercial spectrofluorometer as the detector. Furthermore, a visual assay based on the proposed method was developed for the detection of Se; 5 μg L⁻¹ of selenium in urine can be discriminated from the blank solution with the naked eye. The proposed method was validated by analysis of certified reference materials and human urine samples with satisfactory results.

  10. GAFFE: a gaze-attentive fixation finding engine.

    PubMed

    Rajashekar, U; van der Linde, I; Bovik, A C; Cormack, L K

    2008-04-01

    The ability to automatically detect visually interesting regions in images has many practical applications, especially in the design of active machine vision and automatic visual surveillance systems. Analysis of the statistics of image features at observers' gaze can provide insights into the mechanisms of fixation selection in humans. Using a foveated analysis framework, we studied the statistics of four low-level local image features: luminance, contrast, and bandpass outputs of both luminance and contrast, and discovered that image patches around human fixations had, on average, higher values of each of these features than image patches selected at random. Contrast-bandpass showed the greatest difference between human and random fixations, followed by luminance-bandpass, RMS contrast, and luminance. Using these measurements, we present a new algorithm that selects image regions as likely candidates for fixation. These regions are shown to correlate well with fixations recorded from human observers.
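
    The core comparison, low-level statistics of patches at fixated versus randomly selected locations, can be sketched as follows for the two simplest features (mean luminance and RMS contrast); the patch size is an assumption and the bandpass features are omitted.

    ```python
    import numpy as np

    def patch_stats(image, points, half=16):
        """Mean luminance and mean RMS contrast of patches centred on 'points'."""
        lum, rms = [], []
        h, w = image.shape
        for y, x in points:
            if half <= y < h - half and half <= x < w - half:
                p = image[y - half:y + half, x - half:x + half].astype(float)
                lum.append(p.mean())
                rms.append(p.std() / (p.mean() + 1e-9))   # RMS contrast
        return np.mean(lum), np.mean(rms)

    # Compare fixated vs. randomly chosen locations on one image.
    rng = np.random.default_rng(0)
    image = rng.uniform(0, 255, size=(256, 256))
    fixations = [(128, 128), (100, 90), (140, 160)]        # e.g. from an eye tracker
    random_points = [tuple(p) for p in rng.integers(16, 240, size=(50, 2))]
    print("fixated (luminance, contrast):", patch_stats(image, fixations))
    print("random  (luminance, contrast):", patch_stats(image, random_points))
    ```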

  11. Determining the orientation of depth-rotated familiar objects.

    PubMed

    Niimi, Ryosuke; Yokosawa, Kazuhiko

    2008-02-01

    How does the human visual system determine the depth-orientation of familiar objects? We examined reaction times and errors in the detection of 15 degrees differences in the depth orientations of two simultaneously presented familiar objects, which were the same objects (Experiment 1) or different objects (Experiment 2). Detection of orientation differences was best for 0 degrees (front) and 180 degrees (back), while 45 degrees and 135 degrees yielded poorer results, and 90 degrees (side) showed intermediate results, suggesting that the visual system is tuned for front, side and back orientations. We further found that those advantages are due to orientation-specific features such as horizontal linear contours and symmetry, since the 90 degrees advantage was absent for objects with curvilinear contours, and asymmetric objects diminished the 0 degrees and 180 degrees advantages. We conclude that the efficiency of visually determining object orientation is highly orientation-dependent, and object orientation may be perceived in favor of front-back axes.

  12. Semi supervised Learning of Feature Hierarchies for Object Detection in a Video (Open Access)

    DTIC Science & Technology

    2013-10-03

    Excerpts: the method is evaluated for human detection on the PETS2009 dataset, the Oxford Town Center dataset [3], the PNNL Parking Lot datasets [25] and the CAVIAR cols1 dataset [1] ... the learned features from TownCenter, ParkingLot, PETS09 and CAVIAR are visually very different from each other ... color information is more distinctive for detecting a person in TownCenter than in CAVIAR (comparing figure 5(a) with 6(a)).

  13. Automated reference-free detection of motion artifacts in magnetic resonance images.

    PubMed

    Küstner, Thomas; Liebgott, Annika; Mauch, Lukas; Martirosian, Petros; Bamberg, Fabian; Nikolaou, Konstantin; Yang, Bin; Schick, Fritz; Gatidis, Sergios

    2018-04-01

    Our objectives were to provide an automated method for spatially resolved detection and quantification of motion artifacts in MR images of the head and abdomen as well as a quality control of the trained architecture. T1-weighted MR images of the head and the upper abdomen were acquired in 16 healthy volunteers under rest and under motion. Images were divided into overlapping patches of different sizes achieving spatial separation. Using these patches as input data, a convolutional neural network (CNN) was trained to derive probability maps for the presence of motion artifacts. A deep visualization offers a human-interpretable quality control of the trained CNN. Results were visually assessed on probability maps and as classification accuracy on a per-patch, per-slice and per-volunteer basis. On visual assessment, a clear difference of probability maps was observed between data sets with and without motion. The overall accuracy of motion detection on a per-patch/per-volunteer basis reached 97%/100% in the head and 75%/100% in the abdomen, respectively. Automated detection of motion artifacts in MRI is feasible with good accuracy in the head and abdomen. The proposed method provides quantification and localization of artifacts as well as a visualization of the learned content. It may be extended to other anatomic areas and used for quality assurance of MR images.
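
    The spatially resolved output comes from scoring overlapping patches and accumulating the per-patch probabilities back onto the image grid. A minimal aggregation sketch is shown below, with a variance-based placeholder standing in for the trained CNN classifier.

    ```python
    import numpy as np

    def artifact_probability_map(image, classify_patch, patch=32, stride=16):
        """Accumulate per-patch artifact probabilities into a spatial map.

        'classify_patch' stands in for the trained CNN: it takes a 2D patch
        and returns the probability that the patch contains motion artifact.
        Overlapping patch scores are averaged to form the probability map.
        """
        h, w = image.shape
        prob_sum = np.zeros((h, w))
        counts = np.zeros((h, w))
        for y in range(0, h - patch + 1, stride):
            for x in range(0, w - patch + 1, stride):
                p = classify_patch(image[y:y + patch, x:x + patch])
                prob_sum[y:y + patch, x:x + patch] += p
                counts[y:y + patch, x:x + patch] += 1
        return prob_sum / np.maximum(counts, 1)

    # Placeholder classifier: flags patches with high local variance.
    demo = np.random.default_rng(0).normal(size=(128, 128))
    demo[40:80, 40:80] += np.random.default_rng(1).normal(scale=3.0, size=(40, 40))
    pmap = artifact_probability_map(demo, lambda p: float(p.std() > 2.0))
    print("probability map range:", pmap.min(), pmap.max())
    ```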

  14. Robotic Attention Processing And Its Application To Visual Guidance

    NASA Astrophysics Data System (ADS)

    Barth, Matthew; Inoue, Hirochika

    1988-03-01

    This paper describes a method of real-time visual attention processing for robots performing visual guidance. This robot attention processing is based on a novel vision processor, the multi-window vision system that was developed at the University of Tokyo. The multi-window vision system is unique in that it only processes visual information inside local area windows. These local area windows are quite flexible in their ability to move anywhere on the visual screen, change their size and shape, and alter their pixel sampling rate. By using these windows for specific attention tasks, it is possible to perform high speed attention processing. The primary attention skills of detecting motion, tracking an object, and interpreting an image are all performed at high speed on the multi-window vision system. A basic robotic attention scheme using the attention skills was developed. The attention skills involved detection and tracking of salient visual features. The tracking and motion information thus obtained was utilized in producing the response to the visual stimulus. The response of the attention scheme was quick enough to be applicable to the real-time vision processing tasks of playing a video 'pong' game, and later using an automobile driving simulator. By detecting the motion of a 'ball' on a video screen and then tracking the movement, the attention scheme was able to control a 'paddle' in order to keep the ball in play. The response was faster than that of a human, allowing the attention scheme to play the video game at higher speeds. Further, in the application to the driving simulator, the attention scheme was able to control both direction and velocity of a simulated vehicle following a lead car. These two applications show the potential of local visual processing in its use for robotic attention processing.

  15. Optical Coherence Tomography and Autofluorescence Imaging of Human Tonsil

    PubMed Central

    Pahlevaninezhad, Hamid; Lee, Anthony M. D.; Rosin, Miriam; Sun, Ivan; Zhang, Lewei; Hakimi, Mehrnoush; MacAulay, Calum; Lane, Pierre M.

    2014-01-01

    For the first time, we present co-registered autofluorescence imaging and optical coherence tomography (AF/OCT) of excised human palatine tonsils to evaluate the capabilities of OCT to visualize tonsil tissue components. Despite limited penetration depth, OCT can provide detailed structural information about tonsil tissue with much higher resolution than that of computed tomography, magnetic resonance imaging, and ultrasound. Different tonsil tissue components such as epithelium, dense connective tissue, lymphoid nodules, and crypts can be visualized by OCT. The co-registered AF imaging can provide matching biochemical information. AF/OCT scans may provide a non-invasive tool for detecting tonsillar cancers and for studying the natural history of their development. PMID:25542010

  16. Visualization of hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander

    2007-04-01

    We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods respectively (1) display the subsequent bands as a movie ("movie"), (2) map the data onto three channels and display these as a colour image ("colour"), (3) display the correlation between the pixel signatures and a known target signature ("match") and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real-time. A disadvantage of this technique is loss of information. A display of the match between a target signature and the pixel signatures can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector flags pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited for real-time and post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the target and background signatures, target shape, etc.). When more knowledge about target and background is available, this may be used to help the observer interpret the data (aided target detection).
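
    The "match" display assigns each pixel the similarity between its spectrum and a known target signature; the paper's exact similarity measure is not restated here, so the sketch below uses plain Pearson correlation over a cube stored as (rows, cols, bands).

    ```python
    import numpy as np

    def match_image(cube, target_signature):
        """Per-pixel correlation between spectra and a target signature.

        cube             : array of shape (rows, cols, bands).
        target_signature : array of shape (bands,).
        Returns a (rows, cols) map in [-1, 1] suitable for display.
        """
        rows, cols, bands = cube.shape
        spectra = cube.reshape(-1, bands).astype(float)
        s = spectra - spectra.mean(axis=1, keepdims=True)
        t = target_signature - target_signature.mean()
        denom = np.linalg.norm(s, axis=1) * np.linalg.norm(t) + 1e-12
        corr = (s @ t) / denom
        return corr.reshape(rows, cols)

    # Synthetic demo: one region of the scene carries the target spectrum.
    rng = np.random.default_rng(0)
    target = np.sin(np.linspace(0, 3, 50))
    scene = rng.normal(size=(32, 32, 50))
    scene[10:15, 10:15, :] += target
    print("peak match value:", match_image(scene, target).max().round(2))
    ```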

  17. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  18. On detection and visualization techniques for cyber security situation awareness

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Wei, Shixiao; Shen, Dan; Blowers, Misty; Blasch, Erik P.; Pham, Khanh D.; Chen, Genshe; Zhang, Hanlin; Lu, Chao

    2013-05-01

    Networking technologies are growing exponentially to meet worldwide communication requirements. The rapid growth of network technologies and the pervasiveness of communications pose serious security issues. In this paper, we aim to develop an integrated network defense system with situation awareness capabilities that presents useful information to human analysts. In particular, we implement a prototypical system that includes both distributed passive and active network sensors and traffic visualization features, such as 1D, 2D and 3D network traffic displays. To effectively detect attacks, we also implement algorithms that transform real-world IP address data into images and study the patterns of attacks, using both a discrete wavelet transform (DWT) based scheme and a statistics-based scheme to detect attacks. Through an extensive simulation study, our data validate the effectiveness of the implemented defense system.
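
    A minimal sketch of a wavelet-based traffic check in the spirit of the DWT scheme mentioned above, assuming a per-interval packet-count series and a one-level Haar transform (the paper's actual wavelet, decomposition levels, and thresholds are not specified here): intervals whose detail coefficients deviate strongly from a robust baseline are flagged.

    ```python
    import numpy as np

    def haar_detail(x):
        """One-level Haar wavelet detail coefficients of a 1-D series (even length)."""
        x = np.asarray(x, dtype=float)
        return (x[0::2] - x[1::2]) / np.sqrt(2.0)

    def flag_anomalies(counts, k=3.0):
        """Flag coefficient positions whose magnitude exceeds k robust std devs."""
        d = haar_detail(counts)
        mad = np.median(np.abs(d - np.median(d))) + 1e-12
        robust_std = 1.4826 * mad
        return np.nonzero(np.abs(d) > k * robust_std)[0]

    # toy usage: steady traffic with one burst
    traffic = np.full(64, 100.0)
    traffic[40] += 500.0           # sudden spike, e.g. a scanning burst
    print(flag_anomalies(traffic)) # -> index of the affected coefficient pair
    ```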

  19. DNA Data Visualization (DDV): Software for Generating Web-Based Interfaces Supporting Navigation and Analysis of DNA Sequence Data of Entire Genomes.

    PubMed

    Neugebauer, Tomasz; Bordeleau, Eric; Burrus, Vincent; Brzezinski, Ryszard

    2015-01-01

    Data visualization methods are necessary during the exploration and analysis activities of an increasingly data-intensive scientific process. There are few existing visualization methods for raw nucleotide sequences of a whole genome or chromosome. Software for data visualization should allow the researchers to create accessible data visualization interfaces that can be exported and shared with others on the web. Herein, novel software developed for generating DNA data visualization interfaces is described. The software converts DNA data sets into images that are further processed as multi-scale images to be accessed through a web-based interface that supports zooming, panning and sequence fragment selection. Nucleotide composition frequencies and GC skew of a selected sequence segment can be obtained through the interface. The software was used to generate DNA data visualization of human and bacterial chromosomes. Examples of visually detectable features such as short and long direct repeats, long terminal repeats, mobile genetic elements, heterochromatic segments in microbial and human chromosomes, are presented. The software and its source code are available for download and further development. The visualization interfaces generated with the software allow for the immediate identification and observation of several types of sequence patterns in genomes of various sizes and origins. The visualization interfaces generated with the software are readily accessible through a web browser. This software is a useful research and teaching tool for genetics and structural genomics.
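
    The per-fragment statistics reported by the interface can be sketched as follows (a minimal illustration of the standard definitions, not the authors' code): nucleotide composition is a simple frequency count, and GC skew is (G - C)/(G + C) over the selected segment.

    ```python
    from collections import Counter

    def composition(seq):
        """Nucleotide frequencies of a DNA fragment."""
        seq = seq.upper()
        counts = Counter(seq)
        total = sum(counts[b] for b in "ACGT")
        return {b: counts[b] / total for b in "ACGT"} if total else {}

    def gc_skew(seq):
        """GC skew = (G - C) / (G + C) for a DNA fragment."""
        seq = seq.upper()
        g, c = seq.count("G"), seq.count("C")
        return (g - c) / (g + c) if (g + c) else 0.0

    fragment = "ATGGCGTACGTTAGGGC"
    print(composition(fragment))
    print(gc_skew(fragment))
    ```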

  20. A new fluorescence/PET probe for targeting intracellular human telomerase reverse transcriptase (hTERT) using Tat peptide-conjugated IgM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jung, Kyung oh; Biomedical Sciences, Seoul National University College of Medicine; Cancer Research Institute, Seoul National University College of Medicine

    Despite an increasing need for methods to visualize intracellular proteins in vivo, the majority of antibody-based imaging methods available can only detect membrane proteins. The human telomerase reverse transcriptase (hTERT) is an intracellular target of great interest because of its high expression in several types of cancer. In this study, we developed a new probe for hTERT using the Tat peptide. An hTERT antibody (IgG or IgM) was conjugated with the Tat peptide, a fluorescent dye and ⁶⁴Cu. HT29 (hTERT+) and U2OS (hTERT−) cells were used to visualize intracellular hTERT. The hTERT was detected by RT-PCR and western blot. Fluorescence signals for hTERT were obtained by confocal microscopy and live cell imaging, and analyzed by Tissue-FAXS. In nude mice, tumors were visualized using the fluorescence imaging devices Maestro™ and PETBOX. In RT-PCR and western blot, the expression of hTERT was detected in HT29 cells, but not in U2OS cells. Fluorescence signals were clearly observed in HT29 cells and in U2OS cells after 1 h of treatment, but signals were only detected in HT29 cells after 24 h. Confocal microscopy showed that 9.65% of U2OS and 78.54% of HT29 cells had positive hTERT signals. 3D animation images showed that the probe could target hTERT in the nucleus. In mouse models, fluorescence and PET imaging showed that hTERT in HT29 tumors could be efficiently visualized. In summary, we developed a new method to visualize intracellular and intranuclear proteins both in vitro and in vivo. - Highlights: • We developed new probes for imaging hTERT using Tat-conjugated IgM antibodies labeled with a fluorescent dye and a radioisotope. • These probes could be used to overcome limitations of conventional antibody-based imaging in live cell imaging. • This system could be applicable to monitoring intracellular and intranuclear proteins in vitro and in vivo.

  1. Behavior and neural basis of near-optimal visual search

    PubMed Central

    Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre

    2013-01-01

    The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
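
    A minimal sketch of the kind of reliability-weighted, nonlinear integration rule described above, for the simple case of a known target value among homogeneous distractors with Gaussian measurement noise of item-specific variance (all symbols and values are illustrative assumptions): each item contributes a local likelihood ratio, and the ideal observer averages these ratios rather than the raw measurements.

    ```python
    import numpy as np

    def target_present_likelihood_ratio(x, sigma, s_target, s_distractor):
        """Ideal-observer decision variable for 'one target among N items, or none'.

        x: measurements for the N items
        sigma: per-item noise standard deviations (trial-to-trial reliability)
        Each item's local likelihood ratio is computed, then averaged across items.
        """
        x = np.asarray(x, float)
        sigma = np.asarray(sigma, float)
        local_llr = ((x - s_distractor) ** 2 - (x - s_target) ** 2) / (2.0 * sigma ** 2)
        return np.mean(np.exp(local_llr))

    # toy usage: item 2 has a target-like measurement and high reliability (small sigma)
    x = [0.1, -0.2, 0.9, 0.05]
    sigma = [0.5, 0.5, 0.2, 0.5]
    print(target_present_likelihood_ratio(x, sigma, s_target=1.0, s_distractor=0.0))
    # respond "target present" when this exceeds a criterion (1.0 for equal priors)
    ```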

  2. Early suppression effect in human primary visual cortex during Kanizsa illusion processing: A magnetoencephalographic evidence.

    PubMed

    Chernyshev, Boris V; Pronko, Platon K; Stroganova, Tatiana A

    2016-01-01

    Detection of illusory contours (ICs) such as Kanizsa figures is known to depend primarily upon the lateral occipital complex. Yet there is no universal agreement on the role of the primary visual cortex in this process; some existing evidence hints that an early stage of the visual response in V1 may involve relative suppression to Kanizsa figures compared with controls. Iso-oriented luminance borders, which are responsible for Kanizsa illusion, may evoke surround suppression in V1 and adjacent areas leading to the reduction in the initial response to Kanizsa figures. We attempted to test the existence, as well as to find localization and timing of the early suppression effect produced by Kanizsa figures in adult nonclinical human participants. We used two sizes of visual stimuli (4.5 and 9.0°) in order to probe the effect at two different levels of eccentricity; the stimuli were presented centrally in passive viewing conditions. We recorded magnetoencephalogram, which is more sensitive than electroencephalogram to activity originating from V1 and V2 areas. We restricted our analysis to the medial occipital area and the occipital pole, and to a 40-120 ms time window after the stimulus onset. By applying threshold-free cluster enhancement technique in combination with permutation statistics, we were able to detect the inverted IC effect-a relative suppression of the response to the Kanizsa figures compared with the control stimuli. The current finding is highly compatible with the explanation involving surround suppression evoked by iso-oriented collinear borders. The effect may be related to the principle of sparse coding, according to which V1 suppresses representations of inner parts of collinear assemblies as being informationally redundant. Such a mechanism is likely to be an important preliminary step preceding object contour detection.

  3. Close similarity between spatiotemporal frequency tunings of human cortical responses and involuntary manual following responses to visual motion.

    PubMed

    Amano, Kaoru; Kimura, Toshitaka; Nishida, Shin'ya; Takeda, Tsunehiro; Gomi, Hiroaki

    2009-02-01

    Human brain uses visual motion inputs not only for generating subjective sensation of motion but also for directly guiding involuntary actions. For instance, during arm reaching, a large-field visual motion is quickly and involuntarily transformed into a manual response in the direction of visual motion (manual following response, MFR). Previous attempts to correlate motion-evoked cortical activities, revealed by brain imaging techniques, with conscious motion perception have resulted only in partial success. In contrast, here we show a surprising degree of similarity between the MFR and the population neural activity measured by magnetoencephalography (MEG). We measured the MFR and MEG induced by the same motion onset of a large-field sinusoidal drifting grating with changing the spatiotemporal frequency of the grating. The initial transient phase of these two responses had very similar spatiotemporal tunings. Specifically, both the MEG and MFR amplitudes increased as the spatial frequency was decreased to, at most, 0.05 c/deg, or as the temporal frequency was increased to, at least, 10 Hz. We also found in peak latency a quantitative agreement (approximately 100-150 ms) and correlated changes against spatiotemporal frequency changes between MEG and MFR. In comparison with these two responses, conscious visual motion detection is known to be most sensitive (i.e., have the lowest detection threshold) at higher spatial frequencies and have longer and more variable response latencies. Our results suggest a close relationship between the properties of involuntary motor responses and motion-evoked cortical activity as reflected by the MEG.

  4. Direct detection of a single photon by humans

    PubMed Central

    Tinsley, Jonathan N.; Molodtsov, Maxim I.; Prevedel, Robert; Wartmann, David; Espigulé-Pons, Jofre; Lauwers, Mattias; Vaziri, Alipasha

    2016-01-01

    Despite investigations for over 70 years, the absolute limits of human vision have remained unclear. Rod cells respond to individual photons, yet whether a single-photon incident on the eye can be perceived by a human subject has remained a fundamental open question. Here we report that humans can detect a single-photon incident on the cornea with a probability significantly above chance. This was achieved by implementing a combination of a psychophysics procedure with a quantum light source that can generate single-photon states of light. We further discover that the probability of reporting a single photon is modulated by the presence of an earlier photon, suggesting a priming process that temporarily enhances the effective gain of the visual system on the timescale of seconds. PMID:27434854

  5. Visual and Auditory Sensitivities and Discriminations

    DTIC Science & Technology

    2003-03-03

    Experimental Psychology: Human Perception and Performance, 26, 1721-1723. The data have also been reported to ECVP at the Trieste meeting, and to the Edinburgh...design to measure the disparity required to just detect the cyclopean test bars (Macmillan & Creelman, 1991). Each trial consisted of a single...conventionally (Macmillan & Creelman, 1991). Results Grating detection threshold (d′ = 1.0) for observer 1 was estimated as 0.18 arc min peak-to-trough

  6. PFIESTERIA PISCICIDA-INDUCED COGNITIVE EFFECTS: VISUAL SIGNAL DETECTION PERFORMANCE AND REVERSAL.

    EPA Science Inventory

    Humans exposed to Pfiesteria piscicida report cognitive impairment. In a rat model, we showed that exposure to Pfiesteria impaired learning a new task, but not performance of previously-learned behavior. In this study, we characterized the behavioral effects of Pfiesteria in rats...

  7. Can invertebrates see the e-vector of polarization as a separate modality of light?

    PubMed

    Labhart, Thomas

    2016-12-15

    The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, 'color-blind' humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert 'water below!' to water-seeking bugs. © 2016. Published by The Company of Biologists Ltd.

  8. Can invertebrates see the e-vector of polarization as a separate modality of light?

    PubMed Central

    2016-01-01

    ABSTRACT The visual world is rich in linearly polarized light stimuli, which are hidden from the human eye. But many invertebrate species make use of polarized light as a source of valuable visual information. However, exploiting light polarization does not necessarily imply that the electric (e)-vector orientation of polarized light can be perceived as a separate modality of light. In this Review, I address the question of whether invertebrates can detect specific e-vector orientations in a manner similar to that of humans perceiving spectral stimuli as specific hues. To analyze e-vector orientation, the signals of at least three polarization-sensitive sensors (analyzer channels) with different e-vector tuning axes must be compared. The object-based, imaging polarization vision systems of cephalopods and crustaceans, as well as the water-surface detectors of flying backswimmers, use just two analyzer channels. Although this excludes the perception of specific e-vector orientations, a two-channel system does provide a coarse, categoric analysis of polarized light stimuli, comparable to the limited color sense of dichromatic, ‘color-blind’ humans. The celestial compass of insects employs three or more analyzer channels. However, that compass is multimodal, i.e. e-vector information merges with directional information from other celestial cues, such as the solar azimuth and the spectral gradient in the sky, masking e-vector information. It seems that invertebrate organisms take no interest in the polarization details of visual stimuli, but polarization vision grants more practical benefits, such as improved object detection and visual communication for cephalopods and crustaceans, compass readings to traveling insects, or the alert ‘water below!’ to water-seeking bugs. PMID:27974532

  9. [Application of analytical transmission electron microscopy techniques for the detection, identification and visualization of the localization of titanium and cerium oxide nanoparticles in mammalian cells].

    PubMed

    Shebanova, A S; Bogdanov, A G; Ismagulova, T T; Feofanov, A V; Semenyuk, P I; Muronets, V I; Erokhina, M V; Onishchenko, G E; Kirpichnikov, M P; Shaitan, K V

    2014-01-01

    This work presents the results of a study on the applicability of modern analytical transmission electron microscopy methods for the detection, identification and visualization of the localization of titanium and cerium oxide nanoparticles in A549 cells, a human lung adenocarcinoma cell line. A comparative analysis was performed of images of the nanoparticles in the cells obtained in the bright-field mode of transmission electron microscopy, by dark-field scanning transmission electron microscopy, and by high-angle annular dark-field scanning transmission electron microscopy. For identification of nanoparticles in the cells, the analytical techniques of energy-dispersive X-ray spectroscopy and electron energy loss spectroscopy were compared, both for obtaining energy spectra from individual particles and for element mapping. It was shown that electron tomography can be used to confirm that nanoparticles are localized within the sample rather than coated by contamination. The possibilities and fields of application of the different analytical transmission electron microscopy techniques for detection, visualization and identification of nanoparticles in biological samples are discussed.

  10. Object form discontinuity facilitates displacement discrimination across saccades.

    PubMed

    Demeyer, Maarten; De Graef, Peter; Wagemans, Johan; Verfaillie, Karl

    2010-06-01

    Stimulus displacements coinciding with a saccadic eye movement are poorly detected by human observers. In recent years, converging evidence has shown that this phenomenon does not result from poor transsaccadic retention of presaccadic stimulus position information, but from the visual system's efforts to spatially align presaccadic and postsaccadic perception on the basis of visual landmarks. It is known that this process can be disrupted, and transsaccadic displacement detection performance can be improved, by briefly blanking the stimulus display during and immediately after the saccade. In the present study, we investigated whether this improvement could also follow from a discontinuity in the task-irrelevant form of the displaced stimulus. We observed this to be the case: Subjects more accurately identified the direction of intrasaccadic displacements when the displaced stimulus simultaneously changed form, compared to conditions without a form change. However, larger improvements were still observed under blanking conditions. In a second experiment, we show that facilitation induced by form changes and blanks can combine. We conclude that a strong assumption of visual stability underlies the suppression of transsaccadic change detection performance, the rejection of which generalizes from stimulus form to stimulus position.

  11. Evidence for unlimited capacity processing of simple features in visual cortex

    PubMed Central

    White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.

    2017-01-01

    Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964

  12. The wisdom of crowds for visual search

    PubMed Central

    Juni, Mordechai Z.; Eckstein, Miguel P.

    2017-01-01

    Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
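
    A minimal sketch contrasting the two pooling rules compared above, simple majority voting over binary decisions versus (optionally weighted) averaging of signed confidence ratings; this illustrates only the pooling step, not the paper's SDT-MIX model.

    ```python
    import numpy as np

    def majority_vote(decisions):
        """Binary group decision: 1 if more than half of observers said 'target present'."""
        decisions = np.asarray(decisions)
        return int(decisions.sum() > decisions.size / 2)

    def confidence_average(confidences, weights=None):
        """Group decision from averaged (optionally weighted) confidence ratings.

        confidences: ratings scaled so that 0 is the 'present/absent' criterion.
        """
        confidences = np.asarray(confidences, float)
        avg = np.average(confidences, weights=weights)
        return int(avg > 0), avg

    # toy usage: two confident 'present' observers outweigh three weak 'absent' ones
    conf = [2.5, 1.8, -0.3, -0.2, -0.4]        # signed confidence ratings
    votes = [int(c > 0) for c in conf]         # the same observers' binary decisions
    print(majority_vote(votes))                # -> 0 (majority says absent)
    print(confidence_average(conf))            # -> (1, ...) averaging says present
    ```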

  13. Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance

    PubMed Central

    Veniero, Domenica

    2017-01-01

    Abstract Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794

  14. Visual and semi-automatic non-invasive detection of interictal fast ripples: A potential biomarker of epilepsy in children with tuberous sclerosis complex.

    PubMed

    Bernardo, Danilo; Nariai, Hiroki; Hussain, Shaun A; Sankar, Raman; Salamon, Noriko; Krueger, Darcy A; Sahin, Mustafa; Northrup, Hope; Bebin, E Martina; Wu, Joyce Y

    2018-04-03

    We aim to establish that interictal fast ripples (FR; 250-500 Hz) are detectable on scalp EEG, and to investigate their association with epilepsy. Scalp EEG recordings of a subset of children with tuberous sclerosis complex (TSC)-associated epilepsy from two large multicenter observational TSC studies were analyzed and compared to control children without epilepsy or any other brain-based diagnoses. FR were identified by human visual review and by semi-automated review utilizing a deep learning-based FR detector, and the two approaches were compared. Seven out of 7 children with TSC-associated epilepsy had scalp FR compared to 0 out of 4 children in the control group (p = 0.003). The automatic detector has a sensitivity of 98% and a false positive rate averaging 11.2 false positives per minute. Non-invasive detection of interictal scalp FR was feasible by both visual and semi-automated review. Interictal scalp FR occurred exclusively in children with TSC-associated epilepsy and were absent in controls without epilepsy. The proposed detector achieves high sensitivity of FR detection; however, expert review of the results to reduce false positives is advised. Interictal FR are detectable on scalp EEG and may potentially serve as a biomarker of epilepsy in children with TSC. Copyright © 2018 International Federation of Clinical Neurophysiology. All rights reserved.

  15. Review of fluorescence guided surgery visualization and overlay techniques

    PubMed Central

    Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2015-01-01

    In fluorescence guided surgery, data visualization represents a critical step between signal capture and the display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, with a particular focus on electronically detected and visualized signals, as required for near-infrared or low-concentration tracers. Factors driving the choices, such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices, are outlined. Five practical suggestions are outlined for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception check. PMID:26504628
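
    A minimal sketch of the display chain discussed in the review (dynamic-range compression, color mapping, and transparency/alpha blending onto the surgical view); the specific log compression, green color map, and alpha gain below are illustrative choices, not the review's recommendations.

    ```python
    import numpy as np

    def log_compress(fluor, eps=1e-6):
        """Compress fluorescence dynamic range to [0, 1] with a log transform."""
        f = np.log1p(fluor / (eps + fluor.max()) * 100.0)
        return f / f.max()

    def overlay(rgb_background, fluor, alpha_gain=0.8):
        """Alpha-blend a pseudo-colored (green) fluorescence map onto an RGB image.

        Transparency scales with signal strength so that weak signal leaves the
        underlying anatomy visible.
        """
        f = log_compress(fluor)
        alpha = np.clip(alpha_gain * f, 0.0, 1.0)[..., None]
        color = np.zeros(rgb_background.shape)
        color[..., 1] = f  # map compressed signal to the green channel
        return (1.0 - alpha) * rgb_background + alpha * color

    # toy usage
    bg = np.random.rand(8, 8, 3)
    fl = np.random.rand(8, 8) * 1000.0
    print(overlay(bg, fl).shape)  # (8, 8, 3)
    ```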

  16. Evidence for auditory-visual processing specific to biological motion.

    PubMed

    Wuerger, Sophie M; Crocker-Buque, Alexander; Meyer, Georg F

    2012-01-01

    Biological motion is usually associated with highly correlated sensory signals from more than one modality: an approaching human walker will not only have a visual representation, namely an increase in the retinal size of the walker's image, but also a synchronous auditory signal since the walker's footsteps will grow louder. We investigated whether the multisensorial processing of biological motion is subject to different constraints than ecologically invalid motion. Observers were presented with a visual point-light walker and/or synchronised auditory footsteps; the walker was either approaching the observer (looming motion) or walking away (receding motion). A scrambled point-light walker served as a control. Observers were asked to detect the walker's motion as quickly and as accurately as possible. In Experiment 1 we tested whether the reaction time advantage due to redundant information in the auditory and visual modality is specific for biological motion. We found no evidence for such an effect: the reaction time reduction was accounted for by statistical facilitation for both biological and scrambled motion. In Experiment 2, we dissociated the auditory and visual information and tested whether inconsistent motion directions across the auditory and visual modality yield longer reaction times in comparison to consistent motion directions. Here we find an effect specific to biological motion: motion incongruency leads to longer reaction times only when the visual walker is intact and recognisable as a human figure. If the figure of the walker is abolished by scrambling, motion incongruency has no effect on the speed of the observers' judgments. In conjunction with Experiment 1 this suggests that conflicting auditory-visual motion information of an intact human walker leads to interference and thereby delaying the response.

  17. Toward interactive search in remote sensing imagery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, Reid B; Hush, Do; Harvey, Neal

    2010-01-01

    To move from data to information in almost all science and defense applications requires a human-in-the-loop to validate information products, resolve inconsistencies, and account for incomplete and potentially deceptive sources of information. This is a key motivation for visual analytics, which aims to develop techniques that complement and empower human users. By contrast, the vast majority of algorithms developed in machine learning aim to replace human users in data exploitation. In this paper we describe a recently introduced machine learning problem, called rare category detection, which may be a better match to visual analytic environments. We describe a new design criterion for this problem, and present comparisons to existing techniques with both synthetic and real-world datasets. We conclude by describing an application in broad-area search of remote sensing imagery.

  18. Mapping multisensory parietal face and body areas in humans.

    PubMed

    Huang, Ruey-Song; Chen, Ching-fu; Tran, Alyssa T; Holstein, Katie L; Sereno, Martin I

    2012-10-30

    Detection and avoidance of impending obstacles is crucial to preventing head and body injuries in daily life. To safely avoid obstacles, locations of objects approaching the body surface are usually detected via the visual system and then used by the motor system to guide defensive movements. Mediating between visual input and motor output, the posterior parietal cortex plays an important role in integrating multisensory information in peripersonal space. We used functional MRI to map parietal areas that see and feel multisensory stimuli near or on the face and body. Tactile experiments using full-body air-puff stimulation suits revealed somatotopic areas of the face and multiple body parts forming a higher-level homunculus in the superior posterior parietal cortex. Visual experiments using wide-field looming stimuli revealed retinotopic maps that overlap with the parietal face and body areas in the postcentral sulcus at the most anterior border of the dorsal visual pathway. Starting at the parietal face area and moving medially and posteriorly into the lower-body areas, the median of visual polar-angle representations in these somatotopic areas gradually shifts from near the horizontal meridian into the lower visual field. These results suggest the parietal face and body areas fuse multisensory information in peripersonal space to guard an individual from head to toe.

  19. Application of soil in forensic science: residual odor and HRD dogs.

    PubMed

    Alexander, Michael B; Hodges, Theresa K; Bytheway, Joan; Aitkenhead-Peterson, Jacqueline A

    2015-04-01

    Decomposing human remains alter the environment through the deposition of various compounds composed of a variety of chemical constituents. Human remains detection (HRD) dogs are trained to indicate the odor of human remains. Residual odor from previously decomposing human remains may remain in the soil and on surfaces long after the remains are gone. This study examined the ability of eight nationally certified HRD dogs (four dual purpose and four single purpose) to detect human remains odor in soil from under decomposing human remains as well as soils which no longer contained human remains, soils which had been cold water extracted, and even the extraction fluid itself. The HRD dogs were able to detect the odor of human remains above the level of chance for each soil, with accuracy ranging between 75% and 100%, up to 667 days after body removal from the soil surface. No significant difference in accuracy was found between the dual- and single-purpose dogs. This finding indicates that even though there may not be anything visually observable to the human eye, residual odor of human remains in soil can be very recalcitrant and therefore detectable by properly trained and credentialed HRD dogs. Further research is warranted to determine the parameters of the HRD dogs' capabilities and to determine exactly what they are smelling. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  20. Autofluorescence multiphoton microscopy for visualization of tissue morphology and cellular dynamics in murine and human airways.

    PubMed

    Kretschmer, Sarah; Pieper, Mario; Hüttmann, Gereon; Bölke, Torsten; Wollenberg, Barbara; Marsh, Leigh M; Garn, Holger; König, Peter

    2016-08-01

    The basic understanding of inflammatory airway diseases greatly benefits from imaging the cellular dynamics of immune cells. Current imaging approaches focus on labeling specific cells to follow their dynamics but fail to visualize the surrounding tissue. To overcome this problem, we evaluated autofluorescence multiphoton microscopy for following the motion and interaction of cells in the airways in the context of tissue morphology. Freshly isolated murine tracheae from healthy mice and mice with experimental allergic airway inflammation were examined by autofluorescence multiphoton microscopy. In addition, fluorescently labeled ovalbumin and fluorophore-labeled antibodies were applied to visualize antigen uptake and to identify specific cell populations, respectively. The trachea in living mice was imaged to verify that the ex vivo preparation reflects the in vivo situation. Autofluorescence multiphoton microscopy was also tested to examine human tissue from patients in short-term tissue culture. Using autofluorescence, the epithelium, underlying cells, and fibers of the connective tissue, as well as blood vessels, were identified in isolated tracheae. Similar structures were visualized in living mice and in the human airway tissue. In explanted murine airways, mobile cells were localized within the tissue and we could follow their migration, interactions between individual cells, and their phagocytic activity. During allergic airway inflammation, increased numbers of eosinophil and neutrophil granulocytes were detected that moved within the connective tissue and immediately below the epithelium without damaging the epithelial cells or connective tissues. Contacts between granulocytes were transient, lasting 3 min on average. Unexpectedly, prolonged interactions between granulocytes and antigen-uptaking cells were observed, lasting for an average of 13 min. Our results indicate that autofluorescence-based imaging can detect previously unknown immune cell interactions in the airways. The method also holds the potential to be used during diagnostic procedures in humans if integrated into a bronchoscope.

  1. Highly sensitive on-site detection of glucose in human urine with naked eye based on enzymatic-like reaction mediated etching of gold nanorods.

    PubMed

    Zhang, Zhiyang; Chen, Zhaopeng; Cheng, Fangbin; Zhang, Yaowen; Chen, Lingxin

    2017-03-15

    Based on enzymatic-like reaction mediated etching of gold nanorods (GNRs), an ultrasensitive visual method was developed for on-site detection of urine glucose. With the catalysis of MoO₄²⁻, GNRs were efficiently etched by H₂O₂, which was generated by the glucose-glucose oxidase enzymatic reaction. The etching of GNRs led to a blue-shift of the longitudinal localized surface plasmon resonance of the GNRs, accompanied by an obvious color change from blue to red. The peak shift and the color change can be used for detection of glucose by spectrophotometer and by the naked eye. Under optimal conditions, excellent sensitivity toward glucose is obtained, with a detection limit of 0.1 μM and a visual detection limit of 3 μM in buffer solution. Benefiting from the high sensitivity, successful colorimetric detection of glucose in original urine samples was achieved, which indicates the practical applicability to the on-site determination of urine glucose. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Perceptual evaluation of visual alerts in surveillance videos

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Topkara, Mercan; Pfeiffer, William; Hampapur, Arun

    2015-03-01

    Visual alerts are commonly used in video monitoring and surveillance systems to mark events, presumably making them more salient to human observers. Surprisingly, the effectiveness of computer-generated alerts in improving human performance has not been widely studied. To address this gap, we have developed a tool for simulating different alert parameters in a realistic visual monitoring situation, and have measured human detection performance under conditions that emulated different set-points in a surveillance algorithm. In the High-Sensitivity condition, the simulated alerts identified 100% of the events with many false alarms. In the Lower-Sensitivity condition, the simulated alerts correctly identified 70% of the targets, with fewer false alarms. In the control condition, no simulated alerts were provided. To explore the effects of learning, subjects performed these tasks in three sessions, on separate days, in a counterbalanced, within subject design. We explore these results within the context of cognitive models of human attention and learning. We found that human observers were more likely to respond to events when marked by a visual alert. Learning played a major role in the two alert conditions. In the first session, observers generated almost twice as many False Alarms as in the No-Alert condition, as the observers responded pre-attentively to the computer-generated false alarms. However, this rate dropped equally dramatically in later sessions, as observers learned to discount the false cues. Highest observer Precision, Hits/(Hits + False Alarms), was achieved in the High Sensitivity condition, but only after training. The successful evaluation of surveillance systems depends on understanding human attention and performance.

  3. Rapid detection of fumonisin B1 using a colloidal gold immunoassay strip test in corn samples.

    PubMed

    Ling, Sumei; Wang, Rongzhi; Gu, Xiaosong; Wen, Can; Chen, Lingling; Chen, Zhibin; Chen, Qing-Ai; Xiao, Shiwei; Yang, Yanling; Zhuang, Zhenhong; Wang, Shihua

    2015-12-15

    Fumonisin B1 (FB1) is the most common and most toxic of the fumonisin species and occurs frequently in corn and corn-based foods, leading to several animal and human diseases. Furthermore, FB1 has been reported to be associated with human esophageal cancer. In view of the harmfulness of FB1, it is urgent to develop a feasible and accurate method for its rapid detection. In this study, a competitive immunoassay for FB1 detection was developed based on a colloidal gold-antibody conjugate. The FB1-keyhole limpet hemocyanin (FB1-KLH) conjugate was embedded in the test line, and goat anti-mouse IgG antibody was embedded in the control line. The color density of the test line correlated with the concentration of FB1 in the range from 2.5 to 10 ng/mL, and the visual detection limit of the test for FB1 was 2.5 ng/mL. The results indicated that the test strip is specific for FB1, with no cross-reactivity to other toxins. The quantitative detection of FB1 was simple, requiring only one step without complicated assay procedures or expensive equipment, and the total time for visual evaluation was less than 5 min. Hence, the developed colloidal gold-antibody assay can be used as a feasible method for rapid and quantitative detection of FB1 in corn samples. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Developmental plasticity in vision and behavior may help guppies overcome increased turbidity.

    PubMed

    Ehlman, Sean M; Sandkam, Benjamin A; Breden, Felix; Sih, Andrew

    2015-12-01

    Increasing turbidity in streams and rivers near human activity is cause for environmental concern, as the ability of aquatic organisms to use visual information declines. To investigate how some organisms might be able to developmentally compensate for increasing turbidity, we reared guppies (Poecilia reticulata) in either clear or turbid water. We assessed the effects of developmental treatments on adult behavior and aspects of the visual system by testing fish from both developmental treatments in turbid and clear water. We found a strong interactive effect of rearing and assay conditions: fish reared in clear water tended to decrease activity in turbid water, whereas fish reared in turbid water tended to increase activity in turbid water. Guppies from all treatments decreased activity when exposed to a predator. To measure plasticity in the visual system, we quantified treatment differences in opsin gene expression of individuals. We detected a shift from mid-wave-sensitive opsins to long wave-sensitive opsins for guppies reared in turbid water. Since long-wavelength sensitivity is important in motion detection, this shift likely allows guppies to salvage motion-detecting abilities when visual information is obscured in turbid water. Our results demonstrate the importance of developmental plasticity in responses of organisms to rapidly changing environments.

  5. Detection of fecal contamination on beef meat surfaces using handheld fluorescence imaging device (HFID)

    NASA Astrophysics Data System (ADS)

    Oh, Mirae; Lee, Hoonsoo; Cho, Hyunjeong; Moon, Sang-Ho; Kim, Eun-Kyung; Kim, Moon S.

    2016-05-01

    Current meat inspection in slaughter plants, for food safety and quality attributes including potential fecal contamination, is conducted through visual examination by human inspectors. A handheld fluorescence-based imaging device (HFID) was developed to be an assistive tool for human inspectors by highlighting contaminated food and food contact surfaces on a display monitor. It can be used under ambient lighting conditions in food processing plants. Critical components of the imaging device include four 405-nm 10-W LEDs for fluorescence excitation, a charge-coupled device (CCD) camera, an optical filter (670 nm was used for this study), and a Wi-Fi transmitter for broadcasting real-time video/images to monitoring devices such as smartphones and tablets. This study aimed to investigate the effectiveness of the HFID in enhancing visual detection of fecal contamination on red meat, fat, and bone surfaces of beef under varying ambient luminous intensities (0, 10, 30, 50 and 70 foot-candles). Overall, diluted feces on fat, red meat and bone areas of beef surfaces were detectable in the 670-nm single-band fluorescence images when using the HFID under 0 to 50 foot-candle ambient lighting.

  6. Visual Literacy in Preservice Teachers: a Case Study in Biology

    NASA Astrophysics Data System (ADS)

    Ruiz-Gallardo, José Reyes; García Fernández, Beatriz; Mateos Jiménez, Antonio

    2017-07-01

    In this study, we explore the competence of preservice teachers (n = 161) in labelling and creating new cross-sectional human diagrams, based on anatomy knowledge depicted in longitudinal sections. Using educational standards to assess visual literacy and ad hoc open questions, results indicate limited skills for both tasks. However, their competence is particularly poor creating diagrams, where shortcomings were observed not only in visual literacy but in content knowledge. We discuss the misconceptions detected during these assessments. Visual literacy training should be strengthened for these students, as it is a skill that is especially important for future teachers to use in learning, assessing, and reflecting on content in science education. This is particularly important in preservice teachers since they should be fluent in the use of visual teaching tools in teaching anatomy and other content in the biology curriculum.

  7. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes).

    PubMed

    Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-08-30

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. © 2017 The Authors.

  8. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes)

    PubMed Central

    Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-01-01

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutional origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger, when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than they were in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. PMID:28835550

  9. Audiovisual Asynchrony Detection in Human Speech

    ERIC Educational Resources Information Center

    Maier, Joost X.; Di Luca, Massimiliano; Noppeney, Uta

    2011-01-01

    Combining information from the visual and auditory senses can greatly enhance intelligibility of natural speech. Integration of audiovisual speech signals is robust even when temporal offsets are present between the component signals. In the present study, we characterized the temporal integration window for speech and nonspeech stimuli with…

  10. Probe Scanning Support System by a Parallel Mechanism for Robotic Echography

    NASA Astrophysics Data System (ADS)

    Aoki, Yusuke; Kaneko, Kenta; Oyamada, Masami; Takachi, Yuuki; Masuda, Kohji

    We propose a probe scanning support system based on force/visual servoing control for robotic echography. First, we designed the parallel mechanism and formulated its inverse kinematics. Next, we developed a scanning method for the ultrasound probe on the body surface, constructing a visual servo system based on the acquired echogram so that the standalone medical robot can move the ultrasound probe over the patient's abdomen in three dimensions. The visual servo system detects local changes of brightness in the time-series echogram, while the position of the probe is stabilized by the conventional force servo system in the robot, in order to compensate not only for periodic respiration motion but also for body motion. We then integrated the visual servo control method with the force servo as a hybrid control of both position and force. To confirm the applicability to an actual abdomen, we tested the total system by following the gallbladder as a moving target, keeping its position in the echogram while minimizing the variation of the reaction force on the abdomen. The results show that the system has the potential to be applied to automatic detection of human internal organs.
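
    A minimal sketch of the hybrid position/force idea described above, assuming a discrete proportional law that combines an in-plane image-feature error from the echogram with an axial contact-force error (gains, axes, and function names are hypothetical; the parallel mechanism and the echogram processing are not modeled).

    ```python
    import numpy as np

    def hybrid_servo_step(probe_pos, feature_err_px, force_meas, force_ref=2.0,
                          k_visual=0.002, k_force=0.001, dt=0.01):
        """One control step combining visual and force servoing.

        probe_pos: current probe position [x, y, z] in meters
        feature_err_px: in-plane image-feature error from the echogram (pixels)
        force_meas: measured contact force along the probe axis (N)
        Lateral motion tracks the image feature; axial motion regulates force.
        """
        v_lateral = k_visual * np.asarray(feature_err_px, float)   # (vx, vy)
        v_axial = k_force * (force_ref - force_meas)               # vz
        velocity = np.array([v_lateral[0], v_lateral[1], v_axial])
        return np.asarray(probe_pos, float) + velocity * dt

    # toy usage: the gallbladder drifted 15 px to the right, contact force slightly low
    print(hybrid_servo_step([0.10, 0.05, 0.20], feature_err_px=[15.0, 0.0],
                            force_meas=1.5))
    ```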

  11. Visual-search models for location-known detection tasks

    NASA Astrophysics Data System (ADS)

    Gifford, H. C.; Karbaschi, Z.; Banerjee, K.; Das, M.

    2017-03-01

    Lesion-detection studies that analyze a fixed target position are generally considered predictive of studies involving lesion search, but the extent of the correlation often goes untested. The purpose of this work was to develop a visual-search (VS) model observer for location-known tasks that, coupled with previous work on localization tasks, would allow efficient same-observer assessments of how search and other task variations can alter study outcomes. The model observer featured adjustable parameters to control the search radius around the fixed lesion location and the minimum separation between suspicious locations. Comparisons were made against human observers, a channelized Hotelling observer and a nonprewhitening observer with eye filter in a two-alternative forced-choice study with simulated lumpy background images containing stationary anatomical and quantum noise. These images modeled single-pinhole nuclear medicine scans with different pinhole sizes. When the VS observer's search radius was optimized with training images, close agreement was obtained with human-observer results. Some performance differences between the humans could be explained by varying the model observer's separation parameter. The range of optimal pinhole sizes identified by the VS observer was in agreement with the range determined with the channelized Hotelling observer.

  12. Assessment of prostate cancer detection with a visual-search human model observer

    NASA Astrophysics Data System (ADS)

    Sen, Anando; Kalantari, Faraz; Gifford, Howard C.

    2014-03-01

    Early staging of prostate cancer (PC) is a significant challenge, in part because of the small tumor sizes involved. Our long-term goal is to determine realistic diagnostic task performance benchmarks for standard PC imaging with single photon emission computed tomography (SPECT). This paper reports on a localization receiver operator characteristic (LROC) validation study comparing human and model observers. The study made use of a digital anthropomorphic phantom and one-cm tumors within the prostate and pelvic lymph nodes. Uptake values were consistent with data obtained from clinical In-111 ProstaScint scans. The SPECT simulation modeled a parallel-hole imaging geometry with medium-energy collimators. Nonuniform attenuation and distance-dependent detector response were accounted for both in the imaging and the ordered-subset expectation-maximization (OSEM) iterative reconstruction. The observer study made use of 2D slices extracted from reconstructed volumes. All observers were informed about the prostate and nodal locations in an image. Iteration number and the level of postreconstruction smoothing were study parameters. The results show that a visual-search (VS) model observer correlates better with the average detection performance of human observers than does a scanning channelized nonprewhitening (CNPW) model observer.

  13. Ultrafast scene detection and recognition with limited visual information

    PubMed Central

    Hagmann, Carl Erick; Potter, Mary C.

    2016-01-01

    Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
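
    The LSF stimulus condition can be sketched with a standard FFT low-pass filter that keeps only coarse spatial frequencies (the cutoff in cycles per image below is an illustrative assumption, not the study's exact value).

    ```python
    import numpy as np

    def low_pass_lsf(image, cutoff_cycles=8):
        """Keep only spatial frequencies below `cutoff_cycles` (cycles per image)."""
        img = np.asarray(image, float)
        F = np.fft.fftshift(np.fft.fft2(img))
        rows, cols = img.shape
        y, x = np.ogrid[:rows, :cols]
        radius = np.hypot(y - rows / 2.0, x - cols / 2.0)
        F[radius > cutoff_cycles] = 0.0
        return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

    # toy usage
    img = np.random.rand(128, 128)
    print(low_pass_lsf(img).shape)  # (128, 128)
    ```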

  14. Posterior parietal cortex mediates encoding and maintenance processes in change blindness.

    PubMed

    Tseng, Philip; Hsu, Tzu-Yu; Muggleton, Neil G; Tzeng, Ovid J L; Hung, Daisy L; Juan, Chi-Hung

    2010-03-01

    It is commonly accepted that right posterior parietal cortex (PPC) plays an important role in updating spatial representations, directing visuospatial attention, and planning actions. However, recent studies suggest that right PPC may also be involved in processes that are more closely associated with our visual awareness as its activation level positively correlates with successful conscious change detection (Beck, D.M., Rees, G., Frith, C.D., & Lavie, N. (2001). Neural correlates of change detection and change blindness. Nature Neuroscience, 4, 645-650.). Furthermore, disruption of its activity increases the occurrences of change blindness, thus suggesting a causal role for right PPC in change detection (Beck, D.M., Muggleton, N., Walsh, V., & Lavie, N. (2006). Right parietal cortex plays a critical role in change blindness. Cerebral Cortex, 16, 712-717.). In the context of a 1-shot change detection paradigm, we applied transcranial magnetic stimulation (TMS) during different time intervals to elucidate the temporally precise involvement of PPC in change detection. While subjects attempted to detect changes between two image sets separated by a brief time interval, TMS was applied either during the presentation of picture 1 when subjects were encoding and maintaining information into visual short-term memory, or picture 2 when subjects were retrieving information relating to picture 1 and comparing it to picture 2. Our results show that change blindness occurred more often when TMS was applied during the viewing of picture 1, which implies that right PPC plays a crucial role in the processes of encoding and maintaining information in visual short-term memory. In addition, since our stimuli did not involve changes in spatial locations, our findings also support previous studies suggesting that PPC may be involved in the processes of encoding non-spatial visual information (Todd, J.J. & Marois, R. (2004). Capacity limit of visual short-term memory in human posterior parietal cortex. Nature, 428, 751-754.). Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  15. A contrast-sensitive channelized-Hotelling observer to predict human performance in a detection task using lumpy backgrounds and Gaussian signals

    NASA Astrophysics Data System (ADS)

    Park, Subok; Badano, Aldo; Gallas, Brandon D.; Myers, Kyle J.

    2007-03-01

    Previously, Badano et al. introduced a non-prewhitening matched filter (NPWMF) incorporating a model of the contrast sensitivity of the human visual system to model human performance in detection tasks with different viewing angles and white-noise backgrounds. NPWMF observers, however, do not perform well in detection tasks involving complex backgrounds, since they do not account for background randomness. A channelized-Hotelling observer (CHO) using difference-of-Gaussians (DOG) channels has been shown to track human performance well in detection tasks using lumpy backgrounds. In this work, a CHO with DOG channels incorporating the same human contrast-sensitivity model was developed in a similar fashion. We call this new observer a contrast-sensitive CHO (CS-CHO). The Barten model was the basis of our human contrast-sensitivity model; it was multiplied by a scalar that was varied to control the thresholding effect of the contrast sensitivity on luminance-valued images and hence the performance-prediction ability of the CS-CHO. The performance of the CS-CHO was compared to the average human performance from the psychophysical study by Park et al., in which the task was to detect a known Gaussian signal in non-Gaussian distributed lumpy backgrounds. Six different signal-intensity values were used in this study. We chose the free parameter of our model to match the mean human performance in the detection experiment at the strongest signal intensity and then compared the model to the humans at the five remaining signal-intensity values to see whether the performance of the CS-CHO matched human performance. Our results indicate that the CS-CHO with the chosen contrast-sensitivity scalar closely predicts human performance as a function of signal intensity.
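
    Setting aside the contrast-sensitivity front end, the channelized-Hotelling machinery amounts to projecting each image onto a few DOG frequency channels and forming a Hotelling template in the low-dimensional channel space. A minimal sketch, with assumed (not the paper's) channel parameters:

        import numpy as np

        def dog_channel_outputs(images, n_channels=3, sigma0=0.015, alpha=2.0, q=1.67):
            """Project images onto radially symmetric difference-of-Gaussians frequency channels.

            images: (n, H, W) array; returns an (n, n_channels) array of channel outputs.
            The channel parameters here are illustrative defaults, not the paper's values.
            """
            n, h, w = images.shape
            f = np.hypot(np.fft.fftfreq(h)[:, None], np.fft.fftfreq(w)[None, :])
            spectra = np.fft.fft2(images)
            out = np.empty((n, n_channels))
            for j in range(n_channels):
                sj = sigma0 * alpha ** j
                chan = np.exp(-0.5 * (f / (q * sj)) ** 2) - np.exp(-0.5 * (f / sj) ** 2)
                out[:, j] = np.real((spectra * chan).sum(axis=(1, 2))) / (h * w)   # Parseval inner product
            return out

        def hotelling_template(v_present, v_absent):
            """Hotelling template in channel space from training channel outputs."""
            delta = v_present.mean(axis=0) - v_absent.mean(axis=0)
            k = 0.5 * (np.cov(v_present, rowvar=False) + np.cov(v_absent, rowvar=False))
            return np.linalg.solve(k, delta)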

  16. Fluctuation scaling in the visual cortex at threshold

    NASA Astrophysics Data System (ADS)

    Medina, José M.; Díaz, José A.

    2016-05-01

    Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and what its functional role in visual processing is. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Detection thresholds were measured in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of the threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of the threshold distributions. We find that K and S are related and follow a unique quadratic form, K = (1.19 ± 0.04)S² + (2.68 ± 0.06), which departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
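
    The scaling exponent reported here is the slope of a straight-line fit between log variance and log mean of repeated threshold measurements. A minimal sketch of that fit, using synthetic data in place of the masking-task thresholds:

        import numpy as np

        def fluctuation_scaling_exponent(threshold_groups):
            """Fit variance = a * mean**beta across groups of repeated threshold measurements."""
            means = np.array([np.mean(g) for g in threshold_groups])
            variances = np.array([np.var(g, ddof=1) for g in threshold_groups])
            beta, log_a = np.polyfit(np.log(means), np.log(variances), 1)
            return beta, np.exp(log_a)

        # Synthetic check: the standard deviation grows as mean**1.25, so beta should be near 2.5.
        rng = np.random.default_rng(1)
        groups = [rng.normal(m, 0.1 * m ** 1.25, size=500) for m in (1.0, 2.0, 4.0, 8.0)]
        beta, a = fluctuation_scaling_exponent(groups)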

  17. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

    PubMed Central

    Wang, Xin; Deng, Zhongliang

    2017-01-01

    In order to recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing the histogram of oriented gradient (HOG) features, we find that the world through the eyes of a computer is indeed different from that seen by human eyes, which helps researchers see why a computer makes errors. Additionally, the visualization shows that the HOG features capture rich texture information, but a large amount of background interference is also introduced. In order to enhance the robustness of the HOG feature, we propose an improved method for suppressing the background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information. Then, a new hybrid feature descriptor, named HOG-PCA (HOGP), is made by deeply fusing these two features. Finally, the HOGP is compared to the state-of-the-art HOG feature descriptor in four scenes under different illumination. In the simulation and experimental tests, the qualitative and quantitative assessments indicate that the visualized images of the HOGP feature are closer to what human eyes observe, making it better than the original HOG feature for object detection. Furthermore, the runtime of our proposed algorithm is hardly increased in comparison to the classic HOG feature. PMID:28677635
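
    The fusion described above can be approximated loosely with off-the-shelf tools: HOG for gradient texture plus a PCA summary of the colour information, concatenated into one vector. The snippet below is a sketch of that idea using scikit-image and scikit-learn, not the authors' HOGP implementation, and its parameter values are arbitrary.

        import numpy as np
        from skimage.color import rgb2gray
        from skimage.feature import hog
        from sklearn.decomposition import PCA

        def hogp_descriptor(rgb_image):
            """Concatenate HOG texture features with a PCA summary of the colour information."""
            hog_vec = hog(rgb2gray(rgb_image), orientations=9,
                          pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            pixels = rgb_image.reshape(-1, 3).astype(float)        # each pixel's RGB triple
            pca = PCA(n_components=3).fit(pixels)
            scores = pca.transform(pixels)
            colour_vec = np.concatenate([pca.mean_,                # mean colour
                                         scores.std(axis=0),       # spread along principal colour axes
                                         pca.explained_variance_ratio_])
            return np.concatenate([hog_vec, colour_vec])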

  18. Selective attention to task-irrelevant emotional distractors is unaffected by the perceptual load associated with a foreground task.

    PubMed

    Hindi Attar, Catherine; Müller, Matthias M

    2012-01-01

    A number of studies have shown that emotionally arousing stimuli are preferentially processed in the human brain. Whether or not this preference persists under increased perceptual load associated with a task at hand remains an open question. Here we manipulated two possible determinants of the attentional selection process, perceptual load associated with a foreground task and the emotional valence of concurrently presented task-irrelevant distractors. As a direct measure of sustained attentional resource allocation in early visual cortex we used steady-state visual evoked potentials (SSVEPs) elicited by distinct flicker frequencies of task and distractor stimuli. Subjects either performed a detection (low load) or discrimination (high load) task at a centrally presented symbol stream that flickered at 8.6 Hz while task-irrelevant neutral or unpleasant pictures from the International Affective Picture System (IAPS) flickered at a frequency of 12 Hz in the background of the stream. As reflected in target detection rates and SSVEP amplitudes to both task and distractor stimuli, unpleasant relative to neutral background pictures more strongly withdrew processing resources from the foreground task. Importantly, this finding was unaffected by the factor 'load' which turned out to be a weak modulator of attentional processing in human visual cortex.
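
    SSVEP amplitudes at the task (8.6 Hz) and distractor (12 Hz) tagging frequencies are conventionally read off the Fourier spectrum of the averaged EEG epoch. A minimal sketch, with a made-up sampling rate and a hypothetical epoch array:

        import numpy as np

        def ssvep_amplitude(epoch, fs, target_freq):
            """Amplitude of the spectral line closest to target_freq in an averaged epoch."""
            spectrum = 2.0 * np.abs(np.fft.rfft(epoch)) / len(epoch)
            freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
            return spectrum[np.argmin(np.abs(freqs - target_freq))]

        # With a hypothetical averaged epoch sampled at 500 Hz:
        # amp_task = ssvep_amplitude(epoch, 500.0, 8.6)
        # amp_distractor = ssvep_amplitude(epoch, 500.0, 12.0)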

  19. Illusory Motion Reproduced by Deep Neural Networks Trained for Prediction

    PubMed Central

    Watanabe, Eiji; Kitaoka, Akiyoshi; Sakamoto, Kiwako; Yasugi, Masaki; Tanaka, Kenta

    2018-01-01

    The cerebral cortex predicts visual motion to adapt human behavior to surrounding objects moving in real time. Although the underlying mechanisms are still unknown, predictive coding is one of the leading theories. Predictive coding assumes that the brain's internal models (which are acquired through learning) predict the visual world at all times and that errors between the prediction and the actual sensory input further refine the internal models. In the past year, deep neural networks based on predictive coding were reported for a video prediction machine called PredNet. If the theory substantially reproduces the visual information processing of the cerebral cortex, then PredNet can be expected to represent the human visual perception of motion. In this study, PredNet was trained with natural scene videos of the self-motion of the viewer, and the motion prediction ability of the obtained computer model was verified using unlearned videos. We found that the computer model accurately predicted the magnitude and direction of motion of a rotating propeller in unlearned videos. Surprisingly, it also represented the rotational motion for illusion images that were not moving physically, much like human visual perception. While the trained network accurately reproduced the direction of illusory rotation, it did not detect motion components in negative control pictures wherein people do not perceive illusory motion. This research supports the exciting idea that the mechanism assumed by the predictive coding theory is one of the bases of motion illusion generation. Using sensory illusions as indicators of human perception, deep neural networks are expected to contribute significantly to the development of brain research. PMID:29599739

  1. Noninvasive cross-sectional visualization of enamel cracks by optical coherence tomography in vitro.

    PubMed

    Imai, Kanako; Shimada, Yasushi; Sadr, Alireza; Sumi, Yasunori; Tagami, Junji

    2012-09-01

    Current methods for the detection of enamel cracks are not very sensitive. Optical coherence tomography (OCT) is a promising diagnostic method for creating cross-sectional images of internal biological structures by measuring echoes of backscattered light. In this study, swept-source OCT (SS-OCT), a variant of OCT that sweeps the near-infrared wavelength at a rate of 30 kHz over a span of 110 nm centered at 1,330 nm, was examined as a diagnostic tool for enamel cracks. Twenty extracted human teeth were visually evaluated without magnification. SS-OCT was conducted on locations in which the presence of an enamel crack was suspected under visual inspection using a photocuring unit as transillumination. The teeth were then sectioned with a diamond saw and directly viewed under a confocal laser scanning microscope (CLSM). Using SS-OCT, the presence and extent of enamel cracks were clearly visualized on images based on backscattering signals. The extension of enamel cracks beyond the dentinoenamel junction could also be confirmed. The diagnostic accuracy of SS-OCT, measured as the area under the receiver operating characteristic curve, was superior to that of conventional visual inspection for the detection of enamel cracks and whole-thickness enamel cracks (visual inspection: 0.69 and 0.56; SS-OCT: 0.85 and 0.77, respectively). Enamel cracks can be clearly detected because of increased backscattering of light at the location of the crack, and the results correlated well with those from the CLSM. Copyright © 2012 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
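
    The quoted areas under the ROC curve can be reproduced from per-tooth confidence ratings and the CLSM ground truth. A minimal sketch with invented ratings (not the study's data):

        from sklearn.metrics import roc_auc_score

        # Invented 5-point confidence ratings for "crack present" and CLSM ground truth.
        has_crack      = [0, 0, 0, 1, 1, 1, 0, 1]
        visual_ratings = [1, 2, 2, 4, 3, 5, 1, 4]
        oct_ratings    = [1, 1, 2, 5, 4, 5, 2, 5]

        print("visual inspection AUC:", roc_auc_score(has_crack, visual_ratings))
        print("SS-OCT AUC:", roc_auc_score(has_crack, oct_ratings))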

  2. Dual wavelength imaging allows analysis of membrane fusion of influenza virus inside cells.

    PubMed

    Sakai, Tatsuya; Ohuchi, Masanobu; Imai, Masaki; Mizuno, Takafumi; Kawasaki, Kazunori; Kuroda, Kazumichi; Yamashina, Shohei

    2006-02-01

    Influenza virus hemagglutinin (HA) is a determinant of virus infectivity. Therefore, it is important to determine whether the HA of a new influenza virus, which could potentially cause a pandemic, is functional against human cells. The novel imaging technique reported here allows rapid analysis of HA function by visualizing viral fusion inside cells. The imaging was designed to detect fusion through the change it produces in the spectrum of the fluorescence-labeled virus. Using this imaging, we detected fusion between a virus and a very small endosome that could not be detected previously, indicating that the technique allows highly sensitive detection of viral fusion.

  3. An automatic eye detection and tracking technique for stereo video sequences

    NASA Astrophysics Data System (ADS)

    Paduru, Anirudh; Charalampidis, Dimitrios; Fouts, Brandon; Jovanovich, Kim

    2009-05-01

    Human-computer interfacing (HCI) describes a system or process with which two information processors, namely a human and a computer, attempt to exchange information. Computer-to-human (CtH) information transfer has been relatively effective through visual displays and sound devices. On the other hand, the human-to-computer (HtC) interfacing avenue has yet to reach its full potential. For instance, the most common HtC communication means are the keyboard and mouse, which are already becoming a bottleneck in the effective transfer of information. The solution to the problem is the development of algorithms that allow the computer to understand human intentions based on their facial expressions, head motion patterns, and speech. In this work, we are investigating the feasibility of a stereo system to effectively determine the head position, including the head rotation angles, based on the detection of eye pupils.
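
    Eye localization of this kind is often prototyped with a pretrained cascade detector before any stereo geometry is added. The sketch below uses OpenCV's bundled Haar eye cascade as a generic starting point; it is not the authors' pupil-detection algorithm.

        import cv2

        def detect_eyes(frame_bgr):
            """Return bounding boxes (x, y, w, h) of candidate eye regions in one frame."""
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
            cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
            return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        # For a calibrated stereo pair, matching eye centers across the left and right
        # frames and triangulating yields 3D head position and rotation cues.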

  4. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from ceramic tile production line. BTS method is based on signal change detection and contour tracing with a main goal of separating tile pixels from background in images captured on the production line. Usually, human operators are visually inspecting and classifying produced ceramic tiles. Computer vision and image processing techniques can automate visual inspection process if they fulfill real-time requirements. Important step in this process is a real-time tile pixels segmentation. BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of tile production line. BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. Proposed BTS method is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
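
    A rough single-image stand-in for the edge-plus-contour idea (without the signal-change detection or the parallel GPU implementation of BTS) can be written with OpenCV's Canny detector and contour tracing:

        import cv2
        import numpy as np

        def segment_tile(image_bgr, low=50, high=150):
            """Binary mask of the largest closed contour traced from Canny edges."""
            gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), low, high)
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            mask = np.zeros_like(gray)
            if contours:
                tile = max(contours, key=cv2.contourArea)          # assume the tile is the largest blob
                cv2.drawContours(mask, [tile], -1, 255, thickness=cv2.FILLED)
            return mask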

  5. Brain dopamine and serotonin transporter binding are associated with visual attention bias for food in lean men.

    PubMed

    Koopman, K E; Roefs, A; Elbers, D C E; Fliers, E; Booij, J; Serlie, M J; la Fleur, S E

    2016-06-01

    In rodents, the striatal dopamine (DA) system and the (hypo)thalamic serotonin (5-HT) system are involved in the regulation of feeding behavior. In lean humans, little is known about the relationship between these brain neurotransmitter systems and feeding. We studied the relationship between striatal DA transporters (DAT) and diencephalic 5-HT transporters (SERT), behavioral tasks and questionnaires, and food intake. We measured striatal DAT and diencephalic SERT binding with [123I]FP-CIT SPECT in 36 lean male subjects. Visual attention bias for food (detection speed and distraction time) and degree of impulsivity were measured using response-latency-based computer tasks. Craving and emotional eating were assessed with questionnaires and ratings of hunger by means of VAS scores. Food intake was assessed through a self-reported online diet journal. Striatal DAT and diencephalic SERT binding negatively correlated with food detection speed (p = 0.008, r = -0.50 and p = 0.002, r = -0.57, respectively), but not with food distraction time, ratings of hunger, craving or impulsivity. Striatal DAT and diencephalic SERT binding did not correlate with free choice food intake, whereas food detection speed positively correlated with total caloric intake (p = 0.001, r = 0.60), protein intake (p = 0.01, r = 0.44), carbohydrate intake (p = 0.03, r = 0.39) and fat intake (p = 0.06, r = 0.35). These results indicate a role for the central 5-HT and DA system in the regulation of visual attention bias for food, which contributes to the motivation to eat, in non-obese, healthy humans. In addition, this study confirms that food detection speed, measured with the latency-based computer task, positively correlates with total food and macronutrient intake.

  6. Comprehensive visual field test & diagnosis system in support of astronaut health and performance

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; Clark, Jonathan B.; Reisman, Garrett E.; Tarbell, Mark A.

    Long duration spaceflight, permanent human presence on the Moon, and future human missions to Mars will require autonomous medical care to address both expected and unexpected risks. An integrated non-invasive visual field test & diagnosis system is presented for the identification, characterization, and automated classification of visual field defects caused by the spaceflight environment. This system will support the onboard medical provider and astronauts on space missions with an innovative, non-invasive, accurate, sensitive, and fast visual field test. It includes a database for examination data, and a software package for automated visual field analysis and diagnosis. The system will be used to detect and diagnose conditions affecting the visual field, while in space and on Earth, permitting the timely application of therapeutic countermeasures before astronaut health or performance are impaired. State-of-the-art perimetry devices are bulky, thereby precluding application in a spaceflight setting. In contrast, the visual field test & diagnosis system requires only a touchscreen-equipped computer or touchpad device, which may already be in use for other purposes (i.e., no additional payload), and custom software. The system has application in routine astronaut assessment (Clinical Status Exam), pre-, in-, and post-flight monitoring, and astronaut selection. It is deployable in operational space environments, such as aboard the International Space Station or during future missions to or permanent presence on the Moon and Mars.

  7. Masking disrupts reentrant processing in human visual cortex.

    PubMed

    Fahrenfort, J J; Scholte, H S; Lamme, V A F

    2007-09-01

    In masking, a stimulus is rendered invisible through the presentation of a second stimulus shortly after the first. Over the years, authors have typically explained masking by postulating some early disruption process. In these feedforward-type explanations, the mask somehow "catches up" with the target stimulus, disrupting its processing either through lateral or interchannel inhibition. However, studies from recent years indicate that visual perception--and most notably visual awareness itself--may depend strongly on cortico-cortical feedback connections from higher to lower visual areas. This has led some researchers to propose that masking derives its effectiveness from selectively interrupting these reentrant processes. In this experiment, we used electroencephalogram measurements to determine what happens in the human visual cortex during detection of a texture-defined square under nonmasked (seen) and masked (unseen) conditions. Electro-encephalogram derivatives that are typically associated with reentrant processing turn out to be absent in the masked condition. Moreover, extrastriate visual areas are still activated early on by both seen and unseen stimuli, as shown by scalp surface Laplacian current source-density maps. This conclusively shows that feedforward processing is preserved, even when subject performance is at chance as determined by objective measures. From these results, we conclude that masking derives its effectiveness, at least partly, from disrupting reentrant processing, thereby interfering with the neural mechanisms of figure-ground segmentation and visual awareness itself.

  8. Nature as a model for biomimetic sensors

    NASA Astrophysics Data System (ADS)

    Bleckmann, H.

    2012-04-01

    Mammals, like humans, rely mainly on acoustic, visual and olfactory information. In addition, most also use tactile and thermal cues for object identification and spatial orientation. Most non-mammalian animals also possess a visual, acoustic and olfactory system. However, besides these systems they have developed a large variety of highly specialized sensors. For instance, pyrophilous insects use infrared organs for the detection of forest fires while boas, pythons and pit vipers sense the infrared radiation emitted by prey animals. All cartilaginous and bony fishes as well as some amphibians have a mechanosensory lateral line. It is used for the detection of weak water motions and pressure gradients. For object detection and spatial orientation many species of nocturnal fish employ active electrolocation. This review describes certain aspects of the detection and processing of infrared, mechano- and electrosensory information. It will be shown that the study of these seemingly exotic sensory systems can lead to discoveries that are useful for the construction of technical sensors and artificial control systems.

  9. Stimulus information contaminates summation tests of independent neural representations of features

    NASA Technical Reports Server (NTRS)

    Shimozaki, Steven S.; Eckstein, Miguel P.; Abbey, Craig K.

    2002-01-01

    Many models of visual processing assume that visual information is analyzed into separable and independent neural codes, or features. A common psychophysical test of independent features is known as a summation study, which measures performance in a detection, discrimination, or visual search task as the number of proposed features increases. Improvement in human performance with an increasing number of available features is typically attributed to the summation, or combination, of information across independent neural coding of the features. In many instances, however, increasing the number of available features also increases the stimulus information in the task, as assessed by an optimal observer that does not include the independent neural codes. In a visual search task with spatial frequency and orientation as the component features, a particular set of stimuli was chosen so that all searches had equivalent stimulus information, regardless of the number of features. In this case, human performance did not improve with an increasing number of features, implying that the improvement observed with additional features may be due to stimulus information and not the combination across independent features.

  10. 21 CFR 886.1330 - Amsler grid.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Amsler grid. 886.1330 Section 886.1330 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES... the patient and intended to rapidly detect central and paracentral irregularities in the visual field...

  11. Human Subject Research Protocol: Computer-Aided Human Centric Cyber Situation Awareness: Understanding Cognitive Processes of Cyber Analysts

    DTIC Science & Technology

    2013-11-01

    by existing cyber-attack detection tools far exceeds the analysts’ cognitive capabilities. Grounded in perceptual and cognitive theory, many visual... Processes Inspired by the sense-making theory discussed earlier, we model the analytical reasoning process of cyber analysts using three key... analyst are called “working hypotheses”); each hypothesis could trigger further actions to confirm or disconfirm it. New actions will lead to new

  12. Automatic analysis of the micronucleus test in primary human lymphocytes using image analysis.

    PubMed

    Frieauff, W; Martus, H J; Suter, W; Elhajouji, A

    2013-01-01

    The in vitro micronucleus test (MNT) is a well-established test for early screening of new chemical entities in industrial toxicology. For assessing the clastogenic or aneugenic potential of a test compound, micronucleus induction in cells has been shown repeatedly to be a sensitive and specific parameter. Various automated systems to replace the tedious and time-consuming visual slide analysis procedure, as well as flow cytometric approaches, have been discussed. The ROBIAS (Robotic Image Analysis System) for both automatic cytotoxicity assessment and micronucleus detection in human lymphocytes was developed at Novartis, where the assay has been used to validate positive results obtained in the MNT in TK6 cells, which serves as the primary screening system for genotoxicity profiling in early drug development. In addition, the in vitro MNT has become an accepted alternative to support clinical studies and will be used for regulatory purposes as well. The comparison of visual with automatic analysis results showed a high degree of concordance for 25 independent experiments conducted for the profiling of 12 compounds. For concentration series of cyclophosphamide and carbendazim, a very good correlation between automatic and visual analysis by two examiners could be established, both for the relative division index used as the cytotoxicity parameter and for micronuclei scoring in mono- and binucleated cells. Generally, false-positive micronucleus decisions could be controlled by fast and simple relocation of the automatically detected patterns. The possibility of analysing 24 slides within 65 h of automatic analysis over a weekend, together with the high reproducibility of the results, makes automatic image processing a powerful tool for micronucleus analysis in primary human lymphocytes. The automated slide analysis for the MNT in human lymphocytes complements the portfolio of image analysis applications on ROBIAS, which supports various assays at Novartis.

  13. Multifocal visual evoked potentials reveal normal optic nerve projections in human carriers of oculocutaneous albinism type 1a.

    PubMed

    Hoffmann, Michael B; Wolynski, Barbara; Meltendorf, Synke; Behrens-Baumann, Wolfgang; Käsmann-Kellner, Barbara

    2008-06-01

    In albinism, part of the temporal retina projects abnormally to the contralateral hemisphere. A residual misprojection is also evident in feline carriers that are heterozygous for tyrosinase-related albinism. This study was conducted to test whether such residual abnormalities can also be identified in human carriers of oculocutaneous tyrosinase-related albinism (OCA1a). In eight carriers heterozygous for OCA1a and in eight age- and sex-matched control subjects, monocular pattern-reversal and -onset multifocal visual evoked potentials (mfVEPs) were recorded at 60 locations comprising a visual field of 44 degrees diameter (VERIS 5.01; EDI, San Mateo, CA). For each eye and each stimulus location, interhemispheric difference potentials were calculated and correlated with each other, to assess the lateralization of the responses: positive and negative correlations indicate lateralizations on the same or opposite hemispheres, respectively. Misrouted optic nerves are expected to yield negative interocular correlations. The analysis also allowed for the assessment of the sensitivity and specificity of the detection of projection abnormalities. No significant differences were obtained for the distributions of the interocular correlation coefficients of controls and carriers. Consequently, no local representation abnormalities were observed in the group of OCA1a carriers. For pattern-reversal and -onset stimulation, an assessment of the control data yielded similar specificity (97.9% and 94.6%) and sensitivity (74.4% and 74.8%) estimates for the detection of projection abnormalities. The absence of evidence for projection abnormalities in human OCA1a carriers contrasts with the previously reported evidence for abnormalities in cat-carriers of tyrosinase-related albinism. This discrepancy suggests that animal models of albinism may not provide a match to human albinism.

  14. Perceptual learning through optimization of attentional weighting: human versus optimal Bayesian learner

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Abbey, Craig K.; Pham, Binh T.; Shimozaki, Steven S.

    2004-01-01

    Human performance in visual detection, discrimination, identification, and search tasks typically improves with practice. Psychophysical studies suggest that perceptual learning is mediated by an enhancement in the coding of the signal, and physiological studies suggest that it might be related to the plasticity in the weighting or selection of sensory units coding task relevant information (learning through attention optimization). We propose an experimental paradigm (optimal perceptual learning paradigm) to systematically study the dynamics of perceptual learning in humans by allowing comparisons to that of an optimal Bayesian algorithm and a number of suboptimal learning models. We measured improvement in human localization (eight-alternative forced-choice with feedback) performance of a target randomly sampled from four elongated Gaussian targets with different orientations and polarities and kept as a target for a block of four trials. The results suggest that the human perceptual learning can occur within a lapse of four trials (<1 min) but that human learning is slower and incomplete with respect to the optimal algorithm (23.3% reduction in human efficiency from the 1st-to-4th learning trials). The greatest improvement in human performance, occurring from the 1st-to-2nd learning trial, was also present in the optimal observer, and, thus reflects a property inherent to the visual task and not a property particular to the human perceptual learning mechanism. One notable source of human inefficiency is that, unlike the ideal observer, human learning relies more heavily on previous decisions than on the provided feedback, resulting in no human learning on trials following a previous incorrect localization decision. Finally, the proposed theory and paradigm provide a flexible framework for future studies to evaluate the optimality of human learning of other visual cues and/or sensory modalities.
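
    An ideal Bayesian learner for this kind of block design keeps a posterior over the four candidate targets and updates it multiplicatively after each trial. The sketch below shows that update under the simplifying assumptions of white Gaussian noise and equal-energy templates; it is a schematic, not the paper's observer.

        import numpy as np

        def update_posterior(prior, template_responses, noise_var):
            """Bayesian update over candidate targets after one trial.

            template_responses[k] is the inner product of the trial image with
            template k; with white Gaussian noise and equal-energy templates the
            likelihood of target k is proportional to exp(response_k / noise_var).
            """
            log_post = np.log(prior) + np.asarray(template_responses) / noise_var
            log_post -= log_post.max()                     # for numerical stability
            post = np.exp(log_post)
            return post / post.sum()

        # Start each four-trial learning block with a uniform prior over the four targets:
        # prior = np.full(4, 0.25); prior = update_posterior(prior, responses, noise_var)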

  15. Research on measurement of aviation magneto ignition strength and balance

    NASA Astrophysics Data System (ADS)

    Gao, Feng; He, Zhixiang; Zhang, Dingpeng

    2017-12-01

    Aviation magneto ignition system failures account for two-thirds or more of all aviation piston engine faults. At present, diagnosis of these failures in civil aviation maintenance often relies on visual inspection. Because of human factors, visual inspection cannot provide an ignition intensity value or the ignition balance deviation among the spark plugs in the different cylinders of an aviation piston engine, so testing magneto ignition strength and balance has become an unresolved maintenance problem for aviation piston engines. In this paper, an ultraviolet sensor with a detection wavelength of 185-260 nm and a driving voltage of 320 V DC is used as the core detector to measure the ignition intensity of the aviation magneto ignition system and the balance deviation of ignition intensity among the cylinders. The experimental results show that the rotational speed measurement error is less than 0.34% over the range 0 to 3500 RPM, that the ignition strength analysis and calculation error is less than 0.13%, and that the system measured an ignition strength balance deviation of 200 pulses/s for a high-voltage wire leakage failure that is difficult to distinguish by visual inspection. The method has reference value for fault detection in the maintenance of aviation piston engine magneto ignition systems.

  16. Visualization of light propagation in visible Chinese human head for functional near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Ting; Gong, Hui; Luo, Qingming

    2011-04-01

    Using the visible Chinese human data set, which faithfully represents human anatomy, we visualize light propagation in the head in detail based on Monte Carlo simulation. The simulation is verified to agree with published experimental results in terms of the differential path-length factor. The spatial sensitivity profile turns out to resemble a fat tropical fish, with strong distortion along the folded cerebral surface. The sensitive brain region covers the gray matter and extends to the superficial white matter, leading to a large penetration depth (>3 cm). Finally, the optimal source-detector separation is suggested to be narrowed down to 3-3.5 cm, at which the sensitivity of the detected signal to brain activation reaches its peak of 8%. These results indicate that the folding geometry of the cerebral cortex has substantial effects on light propagation and should be considered in applications of functional near-infrared spectroscopy.

  17. Effect of display size on visual attention.

    PubMed

    Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao

    2011-06-01

    Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.

  18. Internal state of monkey primary visual cortex (V1) predicts figure-ground perception.

    PubMed

    Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A F

    2003-04-15

    When stimulus information enters the visual cortex, it is rapidly processed for identification. However, sometimes the processing of the stimulus is inadequate and the subject fails to notice the stimulus. Human psychophysical studies show that this occurs during states of inattention or absent-mindedness. At a neurophysiological level, it remains unclear what these states are. To study the role of cortical state in perception, we analyzed neural activity in the monkey primary visual cortex before the appearance of a stimulus. We show that, before the appearance of a reported stimulus, neural activity was stronger and more correlated than for a not-reported stimulus. This indicates that the strength of neural activity and the functional connectivity between neurons in the primary visual cortex participate in the perceptual processing of stimulus information. Thus, to detect a stimulus, the visual cortex needs to be in an appropriate state.

  19. Attention distributed across sensory modalities enhances perceptual performance

    PubMed Central

    Mishra, Jyoti; Gazzaley, Adam

    2012-01-01

    This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811

  20. Object Segmentation from Motion Discontinuities and Temporal Occlusions–A Biologically Inspired Model

    PubMed Central

    Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko

    2008-01-01

    Background: Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings: From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance: A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613

  1. Complex for monitoring visual acuity and its application for evaluation of human psycho-physiological state

    NASA Astrophysics Data System (ADS)

    Sorokoumov, P. S.; Khabibullin, T. R.; Tolstaya, A. M.

    2017-01-01

    Existing psychological theories associate the movement of the human eye with its reactions to external change: what we see, hear and feel. By analyzing the gaze, we can compare the external human response (which shows the behavior of a person) with the natural reaction (what the person actually feels). This article describes a complex for the detection of visual activity and its application for evaluating the psycho-physiological state of a person. Glasses with a camera capture all the movements of the human eye in real time. The data recorded by the camera are transmitted to a computer for processing with software developed by the authors. The result is given in an informative and understandable report, which can be used for further analysis. The complex shows high efficiency and stable operation and can be used both for pedagogical personnel recruitment and for testing students during the educational process.

  2. An Automated Classification Technique for Detecting Defects in Battery Cells

    NASA Technical Reports Server (NTRS)

    McDowell, Mark; Gray, Elizabeth

    2006-01-01

    Battery cell defect classification is primarily done manually by a human conducting a visual inspection to determine whether the battery cell is acceptable for a particular use or device. Human visual inspection is a time-consuming task compared to an inspection process conducted by a machine vision system, and it is also subject to human error and fatigue over time. We present a machine vision technique that can be used to automatically identify defective sections of battery cells via a morphological feature-based classifier using an adaptive two-dimensional fast Fourier transformation technique. The initial area of interest is automatically classified as either an anode or a cathode cell view, and as either an acceptable or a defective battery cell. Each battery cell is labeled and cataloged for comparison and analysis. The result is the implementation of an automated machine vision technique that provides a highly repeatable and reproducible method of identifying and quantifying defects in battery cells.
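
    In its simplest form, the two-dimensional FFT feature extraction mentioned above reduces to comparing the magnitude spectrum of a region of interest against that of a known-good reference. A minimal sketch (not the NASA classifier):

        import numpy as np

        def fft_defect_score(region, reference_region):
            """Compare log-magnitude 2D FFT spectra of a cell region and a known-good reference."""
            def log_spectrum(img):
                return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))
            residual = log_spectrum(region) - log_spectrum(reference_region)
            return float(np.sum(residual ** 2))            # large residual energy suggests a defect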

  3. Optical filter for highlighting spectral features part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation.

    PubMed

    Nishino, Ken; Nakamura, Mutsuko; Matsumoto, Masayuki; Tanno, Osamu; Nakauchi, Shigeki

    2011-03-28

    Light reflected from an object's surface contains much information about its physical and chemical properties. Changes in the physical properties of an object are barely detectable in spectra. Conventional trichromatic systems, on the other hand, cannot detect most spectral features because spectral information is compressively represented as trichromatic signals forming a three-dimensional subspace. We propose a method for designing a filter that optically modulates a camera's spectral sensitivity to find an alternative subspace that highlights an object's spectral features more effectively than the original trichromatic space. We designed and developed a filter that detects cosmetic foundation on the human face. Results confirmed that the filter can visualize and nondestructively inspect the foundation distribution.

  4. Detection Progress of Selected Drugs in TLC

    PubMed Central

    Pyka, Alina

    2014-01-01

    This entry describes applications of known indicators and dyes as new visualizing reagents and various visualizing systems as well as photocatalytic reactions and bioautography method for the detection of bioactive compounds including drugs and compounds isolated from herbal extracts. Broadening index, detection index, characteristics of densitometric band, modified contrast index, limit of detection, densitometric visualizing index, and linearity range of detected compounds were used for the evaluation of visualizing effects of applied visualizing reagents. It was shown that visualizing effect depends on the chemical structure of the visualizing reagent, the structure of the substance detected, and the chromatographic adsorbent applied. The usefulness of densitometry to direct detection of some drugs was also shown. Quoted papers indicate the detection progress of selected drugs investigated by thin-layer chromatography (TLC). PMID:24551853

  5. Will Brazilian Patented Naturoptic Method for Recovery of Healthy Vision be Helpful Linguistically?

    NASA Astrophysics Data System (ADS)

    de Moraes, Ana Paula; Dos Santos Marques, Rosélia; Mc Leod, Roger David

    2008-10-01

    Naturoptics Inc. extends its patent(s) to further the teaching of vision-restoring process(es), foster cross-linguistic capabilities, and assist in the educational or financial opportunities of individuals and countries. Directors of Naturoptics Inc. hope to achieve this while testing David Matthew Mc Leod's observations that high visual acuity correlates with other mental and sensory processes. He and RDM often noticed that thought concepts (language percepts) are detectable even across species barriers, as when bears, moose, etc. made their intentions known to us in ways we were culturally willing to accept. This addresses aspects of language that seemed related to our understanding of human vision, and how it is encoded cortically by the spatial frequency content of a visual scene. Words representing the same meaning in two different languages will encode at precisely the same site in the visual cortex. Predictions: "our memories" and cross-species detection of certain thoughts, if equivalently "seen" as images (spatial frequency content).

  6. Use of optical coherence tomography to evaluate visual acuity and visual field changes in dengue fever.

    PubMed

    Rhee, Taek Kwan; Han, Jung Il

    2014-02-01

    Dengue fever is a viral disease that is transmitted by mosquitoes and affects humans. In rare cases, dengue fever can cause visual impairment, which usually occurs within 1 month after contracting dengue fever and ranges from mild blurring of vision to severe blindness. Visual impairment due to dengue fever can be detected through angiography, retinography, optical coherence tomography (OCT) imaging, electroretinography, event electroencephalography (visually evoked potentials), and visual field analysis. The purpose of this study is to report changes in the eye captured using fluorescein angiography, indocyanine green, and OCT in 3 cases of dengue fever visual impairment associated with consistent visual symptoms and similar retinochoroidopathic changes. The OCT results of the three patients with dengue fever showed thinning of the outer retinal layer and disruption of the inner segment/outer segment (IS/OS) junction. While thinning of the retina outer layer is an irreversible process, disruption of IS/OS junction is reported to be reversible. Follow-up examination of individuals with dengue fever and associated visual impairment should involve the use of OCT to evaluate visual acuity and visual field changes in patients with acute choroidal ischemia.

  7. On the relationship between human search strategies, conspicuity, and search performance

    NASA Astrophysics Data System (ADS)

    Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander

    2005-05-01

    We determined the relationship between search performance with a limited field of view (FOV) and several scanning- and scene parameters in human observer experiments. The observers (38 trained army scouts) searched through a large search sector for a target (a camouflaged person) on a heath. From trial to trial the target appeared at a different location. With a joystick the observers scanned through a panoramic image (displayed on a PC-monitor) while the scan path was registered. Four conditions were run differing in sensor type (visual or thermal infrared) and window size (large or small). In conditions with a small window size the zoom option could be used. Detection performance was highly dependent on zoom factor and deteriorated when scan speed increased beyond a threshold value. Moreover, the distribution of scan speeds scales with the threshold speed. This indicates that the observers are aware of their limitations and choose a (near) optimal search strategy. We found no correlation between the fraction of detected targets and overall search time for the individual observers, indicating that both are independent measures of individual search performance. Search performance (fraction detected, total search time, time in view for detection) was found to be strongly related to target conspicuity. Moreover, we found the same relationship between search performance and conspicuity for visual and thermal targets. This indicates that search performance can be predicted directly by conspicuity regardless of the sensor type.

  8. Visual saliency detection based on modeling the spatial Gaussianity

    NASA Astrophysics Data System (ADS)

    Ju, Hongbin

    2015-04-01

    In this paper, a novel salient object detection method based on modeling the spatial anomalies is presented. The proposed framework is inspired by the biological mechanism that human eyes are sensitive to the unusual and anomalous objects among complex background. It is supposed that a natural image can be seen as a combination of some similar or dissimilar basic patches, and there is a direct relationship between its saliency and anomaly. Some patches share high degree of similarity and have a vast number of quantity. They usually make up the background of an image. On the other hand, some patches present strong rarity and specificity. We name these patches "anomalies". Generally, anomalous patch is a reflection of the edge or some special colors and textures in an image, and these pattern cannot be well "explained" by their surroundings. Human eyes show great interests in these anomalous patterns, and will automatically pick out the anomalous parts of an image as the salient regions. To better evaluate the anomaly degree of the basic patches and exploit their nonlinear statistical characteristics, a multivariate Gaussian distribution saliency evaluation model is proposed. In this way, objects with anomalous patterns usually appear as the outliers in the Gaussian distribution, and we identify these anomalous objects as salient ones. Experiments are conducted on the well-known MSRA saliency detection dataset. Compared with other recent developed visual saliency detection methods, our method suggests significant advantages.
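
    The anomaly scoring described here amounts to fitting a single multivariate Gaussian to patch features and flagging patches with a large Mahalanobis distance as salient. A compact sketch, with patch extraction left to the caller:

        import numpy as np

        def gaussian_saliency(patch_features):
            """Mahalanobis distance of each patch from the global patch distribution.

            patch_features: (n_patches, n_dims) array, e.g. flattened image patches.
            Patches far from the fitted Gaussian score high, i.e. as salient anomalies.
            """
            mu = patch_features.mean(axis=0)
            cov = np.cov(patch_features, rowvar=False) + 1e-6 * np.eye(patch_features.shape[1])
            inv_cov = np.linalg.inv(cov)
            centered = patch_features - mu
            return np.einsum("ij,jk,ik->i", centered, inv_cov, centered)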

  9. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    PubMed

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

    As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. The chromatic CSF, a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be applied directly to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and to build a model applicable in 3D space, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method for checking the visual characteristics of stereo blindness. In this paper, a CRT screen was rotated clockwise and anti-clockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold on each inclined plane was measured for 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.

  10. An UGS radar with micro-Doppler capabilities for wide area persistent surveillance

    NASA Astrophysics Data System (ADS)

    Tahmoush, Dave; Silvious, Jerry; Clark, John

    2010-04-01

    Detecting humans and distinguishing them from natural fauna is an important issue in security applications to reduce false alarm rates. In particular, it is important to detect and classify people who are walking in remote locations and transmit back detections over extended periods at a low cost and with minimal maintenance. The ability to discriminate men versus animals and vehicles at long range would give a distinct sensor advantage. The reduction in false positive detections due to animals would increase the usefulness of detections, while dismount identification could reduce friendly-fire. We developed and demonstrate a compact radar technology that is scalable to a variety of ultra-lightweight and low-power platforms for wide area persistent surveillance as an unattended, unmanned, and man-portable ground sensor. The radar uses micro-Doppler processing to characterize the tracks of moving targets and to then eliminate unimportant detections due to animals or civilian activity. This paper presents the system and data on humans, vehicles, and animals at multiple angles and directions of motion, demonstrates the signal processing approach that makes the targets visually recognizable, and verifies that the UGS radar has enough micro-Doppler capability to distinguish between humans, vehicles, and animals.
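
    Micro-Doppler signatures are usually inspected as a spectrogram of the slow-time radar return, in which limb motion appears as oscillating sidebands around the bulk Doppler line. A minimal sketch with SciPy, using placeholder parameters rather than the UGS radar's actual processing chain:

        import numpy as np
        from scipy.signal import spectrogram

        def micro_doppler(iq, prf, nperseg=256):
            """Doppler-vs-time map (dB) of a complex slow-time radar return sampled at the PRF."""
            f, t, sxx = spectrogram(iq, fs=prf, nperseg=nperseg,
                                    noverlap=nperseg // 2, return_onesided=False)
            sxx_db = 10.0 * np.log10(np.fft.fftshift(sxx, axes=0) + 1e-12)
            return np.fft.fftshift(f), t, sxx_db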

  11. The development of contour processing: evidence from physiology and psychophysics

    PubMed Central

    Taylor, Gemma; Hipp, Daniel; Moser, Alecia; Dickerson, Kelly; Gerhardstein, Peter

    2014-01-01

    Object perception and pattern vision depend fundamentally upon the extraction of contours from the visual environment. In adulthood, contour or edge-level processing is supported by the Gestalt heuristics of proximity, collinearity, and closure. Less is known, however, about the developmental trajectory of contour detection and contour integration. Within the physiology of the visual system, long-range horizontal connections in V1 and V2 are the likely candidates for implementing these heuristics. While post-mortem anatomical studies of human infants suggest that horizontal interconnections reach maturity by the second year of life, psychophysical research with infants and children suggests a considerably more protracted development. In the present review, data from infancy to adulthood will be discussed in order to track the development of contour detection and integration. The goal of this review is thus to integrate the development of contour detection and integration with research regarding the development of underlying neural circuitry. We conclude that the ontogeny of this system is best characterized as a developmentally extended period of associative acquisition whereby horizontal connectivity becomes functional over longer and longer distances, thus becoming able to effectively integrate over greater spans of visual space. PMID:25071681

  12. Dual-color plasmonic enzyme-linked immunosorbent assay based on enzyme-mediated etching of Au nanoparticles

    NASA Astrophysics Data System (ADS)

    Guo, Longhua; Xu, Shaohua; Ma, Xiaoming; Qiu, Bin; Lin, Zhenyu; Chen, Guonan

    2016-09-01

    Colorimetric enzyme-linked immunosorbent assay (ELISA) utilizing 3,3',5,5'-tetramethylbenzidine (TMB) as the chromogenic substrate is widely used in hospitals for the detection of many kinds of disease biomarkers. Herein, we demonstrate a strategy that converts this single-color display into a dual-color response to improve the accuracy of visual inspection. Our investigation first reveals that the oxidized form of TMB (TMB2+) can quantitatively etch gold nanoparticles. Therefore, incorporating gold nanoparticles into a commercial TMB-based ELISA kit generates a dual-color response: the solution color varies gradually from wine red (absorption peak at ~530 nm) to colorless, and then from colorless to yellow (absorption peak at ~450 nm) as the amount of target increases. This dual-color response effectively improves the sensitivity as well as the accuracy of visual inspection. For example, the proposed dual-color plasmonic ELISA is demonstrated for the detection of prostate-specific antigen (PSA) in human serum with a visual limit of detection (LOD) as low as 0.0093 ng/mL.

  13. Wavefront-Guided Versus Wavefront-Optimized Photorefractive Keratectomy: Visual and Military Task Performance.

    PubMed

    Ryan, Denise S; Sia, Rose K; Stutzman, Richard D; Pasternak, Joseph F; Howard, Robin S; Howell, Christopher L; Maurer, Tana; Torres, Mark F; Bower, Kraig S

    2017-01-01

    To compare visual performance, marksmanship performance, and threshold target identification following wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK). In this prospective, randomized clinical trial, active duty U.S. military Soldiers, age 21 or over, electing to undergo PRK were randomized to undergo WFG (n = 27) or WFO (n = 27) PRK for myopia or myopic astigmatism. Binocular visual performance was assessed preoperatively and 1, 3, and 6 months postoperatively: Super Vision Test high contrast, Super Vision Test contrast sensitivity (CS), and 25% contrast acuity with night vision goggle filter. CS function was generated testing at five spatial frequencies. Marksmanship performance in low light conditions was evaluated in a firing tunnel. Target detection and identification performance was tested for probability of identification of varying target sets and probability of detection of humans in cluttered environments. Visual performance, CS function, marksmanship, and threshold target identification demonstrated no statistically significant differences over time between the two treatments. Exploratory regression analysis of firing range tasks at 6 months showed no significant differences or correlations between procedures. Regression analysis of vehicle and handheld probability of identification showed a significant association with pretreatment performance. Both WFG and WFO PRK results translate to excellent and comparable visual and military performance. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.

  14. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    NASA Astrophysics Data System (ADS)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.

  15. Objective evaluation of the visual acuity in human eyes

    NASA Astrophysics Data System (ADS)

    Rosales, M. A.; López-Olazagasti, E.; Ramírez-Zavaleta, G.; Varillas, G.; Tepichín, E.

    2009-08-01

    Traditionally, the quality of human vision is evaluated by a subjective test in which the examiner asks the patient to read a series of characters of different sizes located at a certain distance from the patient. Typically, we need to ensure a subtended visual angle of 5 minutes of arc, which corresponds to an object 8.8 mm high located at 6 meters (normal, or 20/20, visual acuity). These characters constitute what is known as the Snellen chart, universally used to evaluate the spatial resolution of the human eye. This process of character identification is carried out by the eye-brain system, giving an evaluation of subjective visual performance. In this work we consider the eye as an isolated image-forming system and show that it is possible to separate the function of the eye from that of the brain in this process. By knowing the impulse response of the eye's optical system we can compute, in advance, the image of the whole Snellen chart at once. From this information we obtain the objective performance of the eye as the optical system under test. This type of result might help to detect anomalous conditions of human vision, like the so-called "cerebral myopia".
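
    The idea of predicting the retinal image from the eye's impulse response can be sketched as a convolution of a test chart with a point spread function; treating the eye as a linear, shift-invariant system and using a Gaussian PSF are simplifying assumptions here:

```python
import numpy as np
from scipy.signal import fftconvolve

def simulate_retinal_image(chart, psf):
    """Convolve a test chart (e.g. a rendered Snellen letter) with the eye's
    point spread function to predict the retinal image. Treating the eye as a
    linear, shift-invariant optical system is the simplifying assumption."""
    psf = psf / psf.sum()                       # preserve mean luminance
    return fftconvolve(chart, psf, mode='same')

# toy example: a bright bar "letter stroke" blurred by an assumed Gaussian PSF
chart = np.zeros((64, 64)); chart[:, 30:34] = 1.0
y, x = np.mgrid[-8:9, -8:9]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))     # sigma = 2 px, assumed
blurred = simulate_retinal_image(chart, psf)
print(blurred.max() < chart.max())              # contrast of the stroke is reduced
```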

  16. Fluorescence in-situ hybridization (FISH) as a tool for visualization and enumeration of Campylobacter in broiler ceca

    USDA-ARS?s Scientific Manuscript database

    Food-borne human pathogens are typically detected and enumerated by either cultural methods or PCR-based approaches. Fluorescence in-situ hybridization (FISH) is a standard microscopy tool for microbial ecology but has not been widely used for food safety applications despite important advantages o...

  17. [Comparison of the diagnostic utility from visual inspection with acetic acid and cervical cytology].

    PubMed

    Velázquez-Hernández, Nadia; Sánchez-Anguiano, Luis Francisco; Lares-Bayona, Edgar Felipe; Cisneros-Pérez, Vicente; Milla-Villeda, Reinaldo Humberto; Arreola-Herrera, Francisco de Asís; Navarrete-Flores, José Antonio; Aguilar-Durán, Maricela; Núñez-Márquez, Teresita; Rueda-Cisneros, Dora Alicia

    2010-05-01

    In Mexico, cervical cancer is the second leading cause of death in women after breast cancer. Human papillomavirus is associated with intraepithelial lesions and is detected in up to 99.7% of cervical carcinomas. Despite being easy to detect, it is a condition from which many women suffer. To determine the diagnostic utility of visual inspection of the uterine cervix with acetic acid compared with cervical cytology. Study of diagnostic tests. The study was carried out in the Centro de Atención Materno Infantil y Planificación Familiar of the Instituto de Investigación Científica of the Juárez University of the State of Durango, Durango, Mexico, from August 23, 2005 to November 13, 2006. 1,521 participants who presented consecutively for opportune detection of cervical cancer were examined. One physician performed the acetic acid test and cervical cytology on each participant and took a digital photograph, which was evaluated by three observers under triple-blind conditions. Those who were positive on any of these tests were referred for colposcopy and/or biopsy; the same procedure was also performed on a randomly selected 10% of the negative population. Sensitivity, specificity, positive and negative predictive values and accuracy were determined. The Kappa index was used for inter-observer agreement. Sensitivity, specificity, positive and negative predictive values and accuracy for visual inspection with acetic acid were 20, 97, 5 and 99%, respectively; for cervical cytology they were 80, 99, 57 and 99%, respectively. The strength of inter-observer agreement was poor. In this study, cervical cytology was more useful than visual inspection with acetic acid for the opportune detection of dysplasia or cervical cancer, since it detected all true positive cases confirmed by biopsy.

  18. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques that visualize human internal organs (CT, MRI) or their metabolism (PET). However, evaluation of the acquired images by human experts is usually subjective and only qualitative. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of the partial volume effect in PET images acquired with PET/MR scanners. This article briefly presents the MaZda software package, which supports 2D and 3D medical image analysis aimed at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  19. Bacterial detection: from microscope to smartphone.

    PubMed

    Gopinath, Subash C B; Tang, Thean-Hock; Chen, Yeng; Citartan, Marimuthu; Lakshmipriya, Thangavel

    2014-10-15

    The ubiquitous nature of bacteria enables them to survive in a wide variety of environments. Hence, the rise of various pathogenic species that are harmful to human health raises the need for the development of accurate sensing systems. Sensing systems are necessary for the diagnosis and epidemiological control of pathogenic organisms, especially food-borne pathogens and the bacterial populations of sanitary water treatment facilities. Bacterial sensing for the purpose of diagnosis can function in three ways: visualization of bacterial morphology, specific detection of bacterial components, and whole-cell detection. This paper provides an overview of currently available bacterial detection systems, ranging from microscopic observation to state-of-the-art smartphone-based detection. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Grid-texture mechanisms in human vision: Contrast detection of regular sparse micro-patterns requires specialist templates.

    PubMed

    Baker, Daniel H; Meese, Tim S

    2016-07-27

    Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
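
    A toy version of the MAX-over-noisy-templates decision rule described above, under assumed template shapes and noise level; it is meant only to illustrate why a template matched to a sparse grid can outperform a blind integrator, not to reproduce the fitted model:

```python
import numpy as np

def max_template_response(stimulus, templates, noise_sd=1.0, rng=None):
    """Each template computes a squared-and-summed (energy) match to the
    stimulus, late Gaussian noise is added, and the observer takes the
    maximum response across templates. Template shapes and noise level
    are illustrative assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    responses = []
    for tpl in templates:
        energy = np.sum((tpl * stimulus) ** 2)        # weighted contrast energy
        responses.append(energy + rng.normal(0.0, noise_sd))
    return max(responses)

rng = np.random.default_rng(1)
# sparse texture: target elements on a regular grid covering 25% of locations,
# embedded in pixel noise
stimulus = np.zeros(64)
stimulus[::4] = 0.5
stimulus += rng.normal(0.0, 0.1, size=64)
dense_template = np.ones(64)                          # blind integrator over the full area
sparse_template = (np.arange(64) % 4 == 0).astype(float)   # matched to the sparse grid
print(max_template_response(stimulus, [dense_template, sparse_template], rng=rng))
```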

  1. Grid-texture mechanisms in human vision: Contrast detection of regular sparse micro-patterns requires specialist templates

    PubMed Central

    Baker, Daniel H.; Meese, Tim S.

    2016-01-01

    Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50–100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures. PMID:27460430

  2. Using false colors to protect visual privacy of sensitive content

    NASA Astrophysics Data System (ADS)

    Ćiftçi, Serdar; Korshunov, Pavel; Akyüz, Ahmet O.; Ebrahimi, Touradj

    2015-03-01

    Many tools have been proposed for preserving visual privacy, but those available today lack either all or some of the important properties expected from such tools. Therefore, in this paper, we propose a simple yet effective method for privacy protection based on false color visualization, which maps the color palette of an image onto a different color palette, possibly after a compressive point transformation of the original pixel data, distorting the details of the original image. This method does not require any prior detection of faces or other sensitive regions and, hence, unlike typical privacy protection methods, it is less sensitive to inaccurate computer vision algorithms. It is also secure, as the look-up tables can be encrypted; reversible, as table look-ups can be inverted; flexible, as it is independent of format or encoding; adjustable, as the final result can be computed by interpolating the false color image with the original using different degrees of interpolation; less distracting, as it does not create visually unpleasant artifacts; and selective, as it better preserves the semantic structure of the input. Four different color scales and four different compression functions, on one of which the proposed method relies, are evaluated via objective (three face recognition algorithms) and subjective (50 human subjects in an online study) assessments using faces from the public FERET dataset. The evaluations demonstrate that the DEF and RBS color scales lead to the strongest privacy protection, while the compression functions add little to the strength of privacy protection. Statistical analysis also shows that recognition algorithms and human subjects perceive the proposed protection similarly.
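
    A minimal sketch of false-color mapping through a look-up table with an optional compressive point transform; the particular palette, compression curve and toy image below are assumptions for illustration:

```python
import numpy as np

def false_color(gray_image, lut, compress=None):
    """Map pixel intensities through an optional compressive point transform
    and then through a color look-up table. The specific LUT and compression
    curve are assumptions, not the evaluated DEF/RBS scales."""
    img = gray_image.astype(float) / 255.0
    if compress is not None:
        img = compress(img)                        # e.g. a gamma-like curve
    idx = np.clip((img * 255).astype(int), 0, 255)
    return lut[idx]                                # (H, W, 3) false-color image

# toy LUT: a 256-entry palette running from blue to red (an arbitrary choice)
lut = np.stack([np.linspace(0, 255, 256),
                np.zeros(256),
                np.linspace(255, 0, 256)], axis=1).astype(np.uint8)
gray = (np.random.default_rng(2).random((4, 4)) * 255).astype(np.uint8)
out = false_color(gray, lut, compress=lambda x: x ** 0.5)
print(out.shape)                                    # (4, 4, 3)
```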

  3. Visual defects in a mouse model of fetal alcohol spectrum disorder.

    PubMed

    Lantz, Crystal L; Pulimood, Nisha S; Rodrigues-Junior, Wandilson S; Chen, Ching-Kang; Manhaes, Alex C; Kalatsky, Valery A; Medina, Alexandre Esteves

    2014-01-01

    Alcohol consumption during pregnancy can lead to a multitude of neurological problems in offspring, varying from subtle behavioral changes to severe mental retardation. These alterations are collectively referred to as Fetal Alcohol Spectrum Disorders (FASD). Early alcohol exposure can strongly affect the visual system, and children with FASD can exhibit an amblyopia-like pattern of visual acuity deficits even in the absence of optical and oculomotor disruption. Here, we test whether early alcohol exposure can lead to a disruption in visual acuity, using a model of FASD that mimics alcohol consumption in the last months of human gestation. To accomplish this, mice were exposed to ethanol (5 g/kg i.p.) or saline on postnatal days (P) 5, 7, and 9. Two to three weeks later we recorded visually evoked potentials to assess spatial frequency detection and contrast sensitivity, conducted electroretinography (ERG) to further assess visual function, and imaged retinotopy using optical imaging of intrinsic signals. We observed that animals exposed to ethanol displayed spatial frequency acuity curves similar to controls. However, ethanol-treated animals showed a significant deficit in contrast sensitivity. Moreover, ERGs revealed a marked decrease in both a- and b-wave amplitudes, and optical imaging suggests that both elevation and azimuth maps in ethanol-treated animals have a 10-20° greater map tilt compared to saline-treated controls. Overall, our findings suggest that binge alcohol drinking restricted to the last months of gestation in humans can lead to marked deficits in visual function.

  4. Ratiometric, visual, dual-signal fluorescent sensing and imaging of pH/copper ions in real samples based on carbon dots-fluorescein isothiocyanate composites.

    PubMed

    Zhu, Xinxin; Jin, Hui; Gao, Cuili; Gui, Rijun; Wang, Zonghua

    2017-01-01

    In this article, a facile aqueous synthesis of carbon dots (CDs) was developed using natural kelp as a new carbon source. Through hydrothermal carbonization of kelp juice, fluorescent CDs were prepared and their surface was modified with polyethylenimine (PEI). The PEI-modified CDs were conjugated with fluorescein isothiocyanate (FITC) to fabricate CDs-FITC composites. To exploit broad applications, the CDs-FITC composites were developed as fluorescent sensing and imaging platforms for pH and Cu2+. The analytical performance of the composites-based fluorescence (FL) sensors was evaluated, including visual FL imaging of pH in a glass bottle, ratiometric FL sensing of pH in yogurt samples, visual FL latent fingerprint and leaf imaging detection of [Cu2+], and dual-signal FL sensing of [Cu2+] in yogurt and human serum samples. Experimental results from the ratiometric, visual, dual-signal FL sensing and imaging applications confirmed the high feasibility, accuracy, stability and simplicity of the CDs-FITC composites-based FL sensors for the detection of pH and Cu2+ ions in real samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Application of color mixing for safety and quality inspection of agricultural products

    NASA Astrophysics Data System (ADS)

    Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin

    2005-11-01

    In this paper, color-mixing applications for food safety and quality inspection were studied, including two-color mixing and three-color mixing. It was shown that the chromaticness of the visual signal resulting from two- or three-color mixing is directly related to the band ratio of light intensity at the two or three selected wavebands. An optical visual device using color mixing to implement the band-ratio criterion was presented. Inspection by human vision assisted by an optical device that implements the band-ratio criterion would offer flexibility and significant cost savings compared with inspection by a multispectral machine vision system that implements the same criterion. Example applications of this optical color-mixing technique were given for the inspection of chicken carcasses with various diseases and for the detection of chilling injury in cucumbers. Simulation results showed that discrimination by chromaticness, which is directly related to the band ratio, can work very well with proper selection of the two or three narrow wavebands. This novel color-mixing technique for visual inspection can be implemented on visual devices for a variety of applications, ranging from target detection to food safety inspection.
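
    The band-ratio criterion can be sketched as a simple ratio of intensities at two selected wavebands followed by a threshold; the threshold value below is an illustrative assumption, not a calibrated inspection criterion:

```python
import numpy as np

def band_ratio(intensity_band1, intensity_band2):
    """Ratio of light intensity at two selected wavebands; in the color-mixing
    scheme this ratio maps directly to the chromaticness of the mixed signal."""
    return np.asarray(intensity_band1, dtype=float) / np.asarray(intensity_band2, dtype=float)

def classify(ratio, threshold=1.2):
    """Toy band-ratio decision rule; the threshold is an assumption for
    illustration, not a value from the chicken-carcass or cucumber studies."""
    return np.where(ratio > threshold, "suspect", "normal")

ratios = band_ratio([0.8, 1.5, 1.1], [1.0, 1.0, 1.0])
print(classify(ratios))          # ['normal' 'suspect' 'normal']
```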

  6. Human Blue Cone Opsin Regeneration Involves Secondary Retinal Binding with Analog Specificity.

    PubMed

    Srinivasan, Sundaramoorthy; Fernández-Sampedro, Miguel A; Morillo, Margarita; Ramon, Eva; Jiménez-Rosés, Mireia; Cordomí, Arnau; Garriga, Pere

    2018-03-27

    Human color vision is mediated by the red, green, and blue cone visual pigments. Cone opsins are G-protein-coupled receptors consisting of an opsin apoprotein covalently linked to the 11-cis-retinal chromophore. All visual pigments share a common evolutionary origin, and red and green cone opsins exhibit a higher homology, whereas blue cone opsin shows more resemblance to the dim light receptor rhodopsin. Here we show that chromophore regeneration in photoactivated blue cone opsin exhibits intermediate transient conformations and a secondary retinoid binding event with slower binding kinetics. We also detected a fine-tuning of the conformational change in the photoactivated blue cone opsin binding site that alters the retinal isomer binding specificity. Furthermore, the molecular models of active and inactive blue cone opsins show specific molecular interactions in the retinal binding site that are not present in other opsins. These findings highlight the differential conformational versatility of human cone opsin pigments in the chromophore regeneration process, particularly compared to rhodopsin, and point to relevant functional, unexpected roles other than spectral tuning for the cone visual pigments. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  7. Visualization of human heart conduction system by means of fluorescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Venius, Jonas; Bagdonas, Saulius; Žurauskas, Edvardas; Rotomskis, Ricardas

    2011-10-01

    The conduction system of the heart is a specific muscular tissue in which the heartbeat signal originates and which initiates the depolarization of the ventricles. Its muscular origin makes it difficult to distinguish the conduction system from the surrounding tissues. A surgical intervention can lead to accidental damage to the conduction system, which may eventually result in a dangerous impairment of heart function. Therefore, there is a pressing need for a method to visualize the conduction system intraoperatively. The specimens for the spectroscopic studies were taken from nine different human hearts. The localization of the distinct tissue types was preliminarily marked by a pathologist and confirmed histologically after the spectral measurements. Variations in intensity, as well as in shape, were detected in the autofluorescence spectra of different heart tissues. The most distinct differences were observed between the heart conduction system and the surrounding tissues under 330 and 380 nm excitation. The spectral region around 460 nm appeared to be the most suitable for an unambiguous differentiation of the human conduction system while avoiding the absorption peak of blood. A visualization method based on the intensity ratios calculated for the two excitation wavelengths was also demonstrated.
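
    The two-excitation intensity-ratio visualization can be sketched as a pixel-wise ratio of images acquired under 330 nm and 380 nm excitation; the percentile threshold used to highlight pixels is an assumption for illustration, not the study's calibration:

```python
import numpy as np

def ratio_map(image_exc330, image_exc380):
    """Pixel-wise ratio of autofluorescence intensity recorded under two
    excitation wavelengths (330 nm and 380 nm, as in the study). How the
    ratio maps onto tissue type is left open here."""
    eps = 1e-9
    return image_exc330.astype(float) / (image_exc380.astype(float) + eps)

rng = np.random.default_rng(3)
img330 = rng.uniform(50, 200, size=(32, 32))
img380 = rng.uniform(50, 200, size=(32, 32))
r = ratio_map(img330, img380)
mask = r > np.percentile(r, 90)   # highlight the 10% of pixels with the highest ratio
print(mask.sum())
```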

  8. Psychological diversity in chimpanzees and humans: new longitudinal assessments of chimpanzees' understanding of attention.

    PubMed

    Povinelli, Daniel J; Dunphy-Lelii, Sarah; Reaux, James E; Mazza, Michael P

    2002-01-01

    We present the results of 5 experiments that assessed 7 chimpanzees' understanding of the visual experiences of others. The research was conducted when the animals were adolescents (7-8 years of age) and adults (12 years of age). The experiments examined their ability to recognize the equivalence between visual and tactile modes of gaining the attention of others (Exp. 1), their understanding that the vision of others can be impeded by opaque barriers (Exps. 2 and 5), and their ability to distinguish between postural cues which are and are not specifically relevant to visual attention (Exps. 3 and 4). The results suggest that although chimpanzees are excellent at exploiting the observable contingencies that exist between the facial and bodily postures of other agents on the one hand, and events in the world on the other, these animals may not construe others as possessing psychological states related to 'seeing' or 'attention.' Humans and chimpanzees share homologous suites of psychological systems that detect and process information about both the static and dynamic aspects of social life, but humans alone may possess systems which interpret behavior in terms of abstract, unobservable mental states such as seeing and attention. Copyright 2002 S. Karger AG, Basel

  9. Development of an In Flight Vision Self-Assessment Questionnaire for Long Duration Space Missions

    NASA Technical Reports Server (NTRS)

    Byrne, Vicky E.; Gibson, Charles R.; Pierpoline, Katherine M.

    2010-01-01

    OVERVIEW A NASA Flight Medicine optometrist teamed with a human factors specialist to develop an electronic questionnaire for crewmembers to record their visual acuity test scores and perceived vision assessment. It will be implemented on the International Space Station (ISS) and administered as part of a suite of tools for early detection of potential vision changes. The goal of this effort was to rapidly develop a set of questions to help in the early detection of visual (e.g. blurred vision) and/or non-visual (e.g. headaches) symptoms by prompting ISS crewmembers to reflect on their own current vision during their spaceflight missions. PROCESS An iterative process began with a one-page paper Space Shuttle questionnaire generated by the optometrist, which was updated by applying human factors design principles. It was used as a baseline to establish an electronic questionnaire for ISS missions. Additional questions needed for ISS missions were included, and the information was organized to take advantage of the computer-based file format available. Human factors heuristics were applied to the prototype, which was then reviewed by the optometrist and procedures specialists, with rapid-turnaround updates that led to the final questionnaire. CONCLUSIONS With only about a month of lead time, a usable tool to collect crewmember assessments was developed through this cross-discipline collaboration. For a modest expenditure of effort, the potential payoff is great. ISS crewmembers will complete the questionnaire at 30 days into the mission, 100 days into the mission, and 30 days prior to return to Earth. The systematic layout may also facilitate physicians' later data extraction and quick interpretation of the data. The data collected, along with other measures (e.g. retinal and ultrasound imaging) at regular intervals, could potentially lead to earlier detection and treatment of related vision problems than using the other measures alone.

  10. Process Mining for Individualized Behavior Modeling Using Wireless Tracking in Nursing Homes

    PubMed Central

    Fernández-Llatas, Carlos; Benedi, José-Miguel; García-Gómez, Juan M.; Traver, Vicente

    2013-01-01

    The analysis of human behavior patterns is increasingly used for several research fields. The individualized modeling of behavior using classical techniques requires too much time and resources to be effective. A possible solution would be the use of pattern recognition techniques to automatically infer models to allow experts to understand individual behavior. However, traditional pattern recognition algorithms infer models that are not readily understood by human experts. This limits the capacity to benefit from the inferred models. Process mining technologies can infer models as workflows, specifically designed to be understood by experts, enabling them to detect specific behavior patterns in users. In this paper, the eMotiva process mining algorithms are presented. These algorithms filter, infer and visualize workflows. The workflows are inferred from the samples produced by an indoor location system that stores the location of a resident in a nursing home. The visualization tool is able to compare and highlight behavior patterns in order to facilitate expert understanding of human behavior. This tool was tested with nine real users that were monitored for a 25-week period. The results achieved suggest that the behavior of users is continuously evolving and changing and that this change can be measured, allowing for behavioral change detection. PMID:24225907

  11. Observed touch on a non-human face is not remapped onto the human observer's own face.

    PubMed

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer.

  12. Observed Touch on a Non-Human Face Is Not Remapped onto the Human Observer's Own Face

    PubMed Central

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear. This study tested whether VRT would occur when seeing touch on monkey faces and whether it would be similarly modulated by facial expressions. Human participants detected near-threshold tactile stimulation on their own cheeks while watching fearful, happy, and neutral human or monkey faces being concurrently touched or merely approached by fingers. We predicted minimal VRT for neutral and happy monkey faces but greater VRT for fearful monkey faces. The results with human faces replicated previous findings, demonstrating stronger VRT for fearful expressions than for happy or neutral expressions. However, there was no VRT (i.e. no difference between accuracy in touch and no-touch trials) for any of the monkey faces, regardless of facial expression, suggesting that touch on a non-human face is not remapped onto the somatosensory system of the human observer. PMID:24250781

  13. Change detection and change blindness in pigeons (Columba livia).

    PubMed

    Herbranson, Walter T; Trinh, Yvan T; Xi, Patricia M; Arand, Mark P; Barker, Michael S K; Pratt, Theodore H

    2014-05-01

    Change blindness is a phenomenon in which even obvious details in a visual scene change without being noticed. Although change blindness has been studied extensively in humans, we do not yet know if it is a phenomenon that also occurs in other animals. Thus, investigation of change blindness in a nonhuman species may prove to be valuable by beginning to provide some insight into its ultimate causes. Pigeons learned a change detection task in which pecks to the location of a change in a sequence of stimulus displays were reinforced. They were worse at detecting changes if the stimulus displays were separated by a brief interstimulus interval, during which the display was blank, and this primary result matches the general pattern seen in previous studies of change blindness in humans. A second experiment attempted to identify specific stimulus characteristics that most reliably produced a failure to detect changes. Change detection was more difficult when interstimulus intervals were longer and when the change was iterated fewer times. ©2014 APA, all rights reserved.

  14. Simultaneous Recordings of Human Microsaccades and Drifts with a Contemporary Video Eye Tracker and the Search Coil Technique

    PubMed Central

    McCamy, Michael B.; Otero-Millan, Jorge; Leigh, R. John; King, Susan A.; Schneider, Rosalyn M.; Macknik, Stephen L.; Martinez-Conde, Susana

    2015-01-01

    Human eyes move continuously, even during visual fixation. These “fixational eye movements” (FEMs) include microsaccades, intersaccadic drift and oculomotor tremor. Research in human FEMs has grown considerably in the last decade, facilitated by the manufacture of noninvasive, high-resolution/speed video-oculography eye trackers. Due to the small magnitude of FEMs, obtaining reliable data can be challenging, however, and depends critically on the sensitivity and precision of the eye tracking system. Yet, no study has conducted an in-depth comparison of human FEM recordings obtained with the search coil (considered the gold standard for measuring microsaccades and drift) and with contemporary, state-of-the art video trackers. Here we measured human microsaccades and drift simultaneously with the search coil and a popular state-of-the-art video tracker. We found that 95% of microsaccades detected with the search coil were also detected with the video tracker, and 95% of microsaccades detected with video tracking were also detected with the search coil, indicating substantial agreement between the two systems. Peak/mean velocities and main sequence slopes of microsaccades detected with video tracking were significantly higher than those of the same microsaccades detected with the search coil, however. Ocular drift was significantly correlated between the two systems, but drift speeds were higher with video tracking than with the search coil. Overall, our combined results suggest that contemporary video tracking now approaches the search coil for measuring FEMs. PMID:26035820
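
    For readers who want a concrete starting point, a common velocity-threshold approach to microsaccade detection (in the spirit of Engbert & Kliegl, 2003) is sketched below; it is not necessarily the algorithm used in this study, and the toy trace and parameters are assumptions:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0):
    """Velocity-threshold microsaccade detection.
    x, y: gaze position in degrees; fs: sampling rate in Hz;
    lam: multiplier on a median-based estimate of velocity spread."""
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    # robust (median-based) estimates of velocity spread
    sx = np.sqrt(max(np.median(vx**2) - np.median(vx)**2, 1e-12))
    sy = np.sqrt(max(np.median(vy**2) - np.median(vy)**2, 1e-12))
    crit = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1.0   # elliptic threshold
    return np.flatnonzero(crit)          # indices of samples exceeding the threshold

# toy 1 s trace at 1 kHz: fixational noise plus one small, fast ~20 ms excursion
fs = 1000
t = np.arange(1000)
rng = np.random.default_rng(4)
excursion = 0.2 / (1 + np.exp(-(t - 510) / 3.0))    # ~0.2 deg sigmoid step
x = excursion + 0.001 * rng.standard_normal(t.size)
y = 0.001 * rng.standard_normal(t.size)
print(detect_microsaccades(x, y, fs))    # indices clustered around samples 500-520
```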

  15. Visual and efficient immunosensor technique for advancing biomedical applications of quantum dots on Salmonella detection and isolation

    NASA Astrophysics Data System (ADS)

    Tang, Feng; Pang, Dai-Wen; Chen, Zhi; Shao, Jian-Bo; Xiong, Ling-Hong; Xiang, Yan-Ping; Xiong, Yan; Wu, Kai; Ai, Hong-Wu; Zhang, Hui; Zheng, Xiao-Li; Lv, Jing-Rui; Liu, Wei-Yong; Hu, Hong-Bing; Mei, Hong; Zhang, Zhen; Sun, Hong; Xiang, Yun; Sun, Zi-Yong

    2016-02-01

    It is a great challenge in nanotechnology for fluorescent nanobioprobes to be applied to visually detect and directly isolate pathogens in situ. A novel and visual immunosensor technique for efficient detection and isolation of Salmonella was established here by applying fluorescent nanobioprobes on a specially-designed cellulose-based swab (a solid-phase enrichment system). The selective and chromogenic medium used on this swab can achieve ultrasensitive amplification of the target bacteria and form chromogenic colonies in situ through a simple biochemical reaction. More importantly, because this swab can serve as an attachment site for the targeted pathogens to immobilize and immunologically capture nanobioprobes, our mAb-conjugated QD bioprobes were successfully applied on the solid-phase enrichment system to capture the fluorescence of targeted colonies under a designed excitation light instrument based on blue light-emitting diodes combined with stereomicroscopy or laser scanning confocal microscopy. Whereas traditional methods take 4-7 days to isolate Salmonella from a bacterial mixture, this method took only 2 days, and the process of initial screening and preliminary diagnosis can be completed in only one and a half days. Furthermore, the limit of detection can reach as low as 10^1 cells per mL Salmonella against a background of 10^5 cells per mL non-Salmonella (Escherichia coli, Proteus mirabilis or Citrobacter freundii, respectively) in experimental samples, and even in human anal samples. The visual and efficient immunosensor technique may prove to be a favorable alternative for screening and isolating Salmonella in large numbers of samples related to public health surveillance.

  16. Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)

    NASA Technical Reports Server (NTRS)

    McIntosh, Dawn

    2006-01-01

    This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS), a tool for analyzing text reports such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) the system identifies interconnected reports, automating the discovery of possible recurring anomalies; and (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search
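
    A minimal sketch of the kind of text clustering ReADS describes, using TF-IDF features and k-means from scikit-learn; the toy reports and the choice of k are assumptions for illustration, not the ReADS pipeline itself:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# vectorize free-text anomaly reports and group them so that possibly
# recurring anomalies land in the same cluster
reports = [
    "coolant pump pressure dropped during ascent",
    "pressure drop observed in coolant loop pump",
    "cabin fan produced intermittent noise",
    "intermittent noise reported from cabin ventilation fan",
]
X = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)    # reports describing the same anomaly should share a label
```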

  17. Digital tripwire: a small automated human detection system

    NASA Astrophysics Data System (ADS)

    Fischer, Amber D.; Redd, Emmett; Younger, A. Steven

    2009-05-01

    A low-cost, lightweight, easily deployable imaging sensor that can dependably discriminate threats from other activities within its field of view and, only then, alert a distant duty officer by transmitting visual confirmation of the threat would be a valuable asset to modern defense. At present, current solutions suffer from a multitude of deficiencies - size, cost, power endurance, but most notably an inability to assess an image and conclude that it contains a threat. The human attention span cannot maintain critical surveillance over banks of displays constantly conveying such images from the field. DigitalTripwire is a small, self-contained, automated human-detection system capable of running for 1-5 days on two AA batteries. To achieve such long endurance, the DigitalTripwire system utilizes an FPGA designed with sleep functionality. The system uses robust vision algorithms, such as an innovative, partially unsupervised background-modeling algorithm, which employ several data-reduction strategies to operate in real time and achieve high detection rates. When it detects human activity, either mounted or dismounted, it sends an alert including images to notify the command center. In this paper, we describe the hardware and software design of the DigitalTripwire system. In addition, we provide detection and false alarm rates across several challenging data sets, demonstrating the performance of the vision algorithms in autonomously analyzing the video stream and classifying moving objects into four primary categories: dismounted human, vehicle, non-human, or unknown.
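
    A generic background-subtraction sketch in the spirit of the background-modeling step described above; the running-average update, learning rate and threshold are assumptions and not the DigitalTripwire algorithm itself:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model (alpha is an assumed learning rate)."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, thresh=25.0):
    """Pixels differing from the background by more than an assumed threshold
    are flagged as moving-object candidates for later classification."""
    return np.abs(frame.astype(float) - background) > thresh

# toy sequence: learn a static scene, then a bright 'object' enters
rng = np.random.default_rng(5)
bg = np.full((48, 64), 100.0)
for _ in range(20):                                   # learn the empty scene
    bg = update_background(bg, 100 + rng.normal(0, 2, (48, 64)))
frame = 100 + rng.normal(0, 2, (48, 64))
frame[20:30, 30:40] += 80                             # intruder region
print(foreground_mask(bg, frame).sum())               # ~100 foreground pixels
```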

  18. Graded Neuronal Modulations Related to Visual Spatial Attention.

    PubMed

    Mayo, J Patrick; Maunsell, John H R

    2016-05-11

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary "spotlight" of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused ("cued" vs "uncued"). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. Copyright © 2016 the authors 0270-6474/16/365353-09$15.00/0.

  19. Graded Neuronal Modulations Related to Visual Spatial Attention

    PubMed Central

    Maunsell, John H. R.

    2016-01-01

    Studies of visual attention in monkeys typically measure neuronal activity when the stimulus event to be detected occurs at a cued location versus when it occurs at an uncued location. But this approach does not address how neuronal activity changes relative to conditions where attention is unconstrained by cueing. Human psychophysical studies have used neutral cueing conditions and found that neutrally cued behavioral performance is generally intermediate to that of cued and uncued conditions (Posner et al., 1978; Mangun and Hillyard, 1990; Montagna et al., 2009). To determine whether the neuronal correlates of visual attention during neutral cueing are similarly intermediate, we trained macaque monkeys to detect changes in stimulus orientation that were more likely to occur at one location (cued) than another (uncued), or were equally likely to occur at either stimulus location (neutral). Consistent with human studies, performance was best when the location was cued, intermediate when both locations were neutrally cued, and worst when the location was uncued. Neuronal modulations in visual area V4 were also graded as a function of cue validity and behavioral performance. By recording from both hemispheres simultaneously, we investigated the possibility of switching attention between stimulus locations during neutral cueing. The results failed to support a unitary “spotlight” of attention. Overall, our findings indicate that attention-related changes in V4 are graded to accommodate task demands. SIGNIFICANCE STATEMENT Studies of the neuronal correlates of attention in monkeys typically use visual cues to manipulate where attention is focused (“cued” vs “uncued”). Human psychophysical studies often also include neutrally cued trials to study how attention naturally varies between points of interest. But the neuronal correlates of this neutral condition are unclear. We measured behavioral performance and neuronal activity in cued, uncued, and neutrally cued blocks of trials. Behavioral performance and neuronal responses during neutral cueing were intermediate to those of the cued and uncued conditions. We found no signatures of a single mechanism of attention that switches between stimulus locations. Thus, attention-related changes in neuronal activity are largely hemisphere-specific and graded according to task demands. PMID:27170131

  20. Key-Node-Separated Graph Clustering and Layouts for Human Relationship Graph Visualization.

    PubMed

    Itoh, Takayuki; Klein, Karsten

    2015-01-01

    Many graph-drawing methods apply node-clustering techniques based on the density of edges to find tightly connected subgraphs and then hierarchically visualize the clustered graphs. However, users may want to focus on important nodes and their connections to groups of other nodes for some applications. For this purpose, it is effective to separately visualize the key nodes detected based on adjacency and attributes of the nodes. This article presents a graph visualization technique for attribute-embedded graphs that applies a graph-clustering algorithm that accounts for the combination of connections and attributes. The graph clustering step divides the nodes according to the commonality of connected nodes and similarity of feature value vectors. It then calculates the distances between arbitrary pairs of clusters according to the number of connecting edges and the similarity of feature value vectors and finally places the clusters based on the distances. Consequently, the technique separates important nodes that have connections to multiple large clusters and improves the visibility of such nodes' connections. To test this technique, this article presents examples with human relationship graph datasets, including a coauthorship and Twitter communication network dataset.
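
    A small sketch of clustering nodes on a mix of neighbourhood overlap and attribute similarity, in the spirit of the combined criterion described above; the Jaccard/cosine mixture, the 50/50 weighting and the toy graph are assumptions, not the article's algorithm:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def combined_distance(adj, feats, beta=0.5):
    """Pairwise node distance mixing (1) dissimilarity of neighbourhoods
    (Jaccard on adjacency rows) and (2) dissimilarity of attribute vectors
    (cosine). The weighting beta is an assumption for illustration."""
    n = adj.shape[0]
    D = np.zeros((n, n))
    norms = np.linalg.norm(feats, axis=1) + 1e-12
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(adj[i], adj[j]).sum()
            union = np.logical_or(adj[i], adj[j]).sum() + 1e-12
            d_struct = 1.0 - inter / union
            d_attr = 1.0 - feats[i] @ feats[j] / (norms[i] * norms[j])
            D[i, j] = D[j, i] = beta * d_struct + (1 - beta) * d_attr
    return D

# toy graph of six nodes with 2-D attribute vectors
adj = np.array([[0,1,1,0,0,0],
                [1,0,1,0,0,0],
                [1,1,0,1,0,0],
                [0,0,1,0,1,1],
                [0,0,0,1,0,1],
                [0,0,0,1,1,0]], dtype=bool)
feats = np.array([[1,0],[1,0],[1,0.2],[0,1],[0,1],[0.1,1]])
D = combined_distance(adj, feats)
labels = fcluster(linkage(squareform(D), method='average'), t=2, criterion='maxclust')
print(labels)      # nodes 0-2 share one cluster label, nodes 3-5 the other
```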

  1. The footprints of visual attention in the Posner cueing paradigm revealed by classification images

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.

    2002-01-01

    In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
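
    The Bayesian-weighting account can be illustrated with a toy simulation in which evidence of the same quality is collected at both locations and only the prior weights differ; the unit-variance Gaussian responses, target amplitude and 80% cue validity are assumptions for illustration:

```python
import numpy as np

def cueing_decision(response_cued, response_uncued, cue_validity=0.8):
    """Weighted-likelihood observer: likelihood ratios for 'target present'
    are computed at both locations with equal processing quality, then
    weighted by the cue validity (the prior on target location)."""
    # LR for a unit-amplitude target in unit-variance Gaussian noise
    lr_cued = np.exp(response_cued - 0.5)
    lr_uncued = np.exp(response_uncued - 0.5)
    weighted_lr = cue_validity * lr_cued + (1 - cue_validity) * lr_uncued
    return weighted_lr > 1.0              # respond 'target present'

rng = np.random.default_rng(6)
# target at the uncued location vs. target at the cued location
hits_uncued = np.mean([cueing_decision(rng.normal(0, 1), rng.normal(1, 1))
                       for _ in range(5000)])
hits_cued = np.mean([cueing_decision(rng.normal(1, 1), rng.normal(0, 1))
                     for _ in range(5000)])
print(hits_cued > hits_uncued)            # cueing effect without unequal processing
```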

  2. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  3. Sensitivity to synchronicity of biological motion in normal and amblyopic vision

    PubMed Central

    Luu, Jennifer Y.; Levi, Dennis M.

    2017-01-01

    Amblyopia is a developmental disorder of spatial vision that results from abnormal early visual experience usually due to the presence of strabismus, anisometropia, or both strabismus and anisometropia. Amblyopia results in a range of visual deficits that cannot be corrected by optics because the deficits reflect neural abnormalities. Biological motion refers to the motion patterns of living organisms, and is normally displayed as points of lights positioned at the major joints of the body. In this experiment, our goal was twofold. We wished to examine whether the human visual system in people with amblyopia retained the higher-level processing capabilities to extract visual information from the synchronized actions of others, therefore retaining the ability to detect biological motion. Specifically, we wanted to determine if the synchronized interaction of two agents performing a dancing routine allowed the amblyopic observer to use the actions of one agent to predict the expected actions of a second agent. We also wished to establish whether synchronicity sensitivity (detection of synchronized versus desynchronized interactions) is impaired in amblyopic observers relative to normal observers. The two aims are differentiated in that the first aim looks at whether synchronized actions result in improved expected action predictions while the second aim quantitatively compares synchronicity sensitivity, or the ratio of desynchronized to synchronized detection sensitivities, to determine if there is a difference between normal and amblyopic observers. Our results show that the ability to detect biological motion requires more samples in both eyes of amblyopes than in normal control observers. The increased sample threshold is not the result of low-level losses but may reflect losses in feature integration due to undersampling in the amblyopic visual system. However, like normal observers, amblyopes are more sensitive to synchronized versus desynchronized interactions, indicating that higher-level processing of biological motion remains intact. We also found no impairment in synchronicity sensitivity in the amblyopic visual system relative to the normal visual system. Since there is no impairment in synchronicity sensitivity in either the nonamblyopic or amblyopic eye of amblyopes, our results suggest that the higher order processing of biological motion is intact. PMID:23474301

  4. Institute for the Study of Human Capabilities: Summary Descriptions of Research for the Period June 1, 1990 through May 31, 1991

    DTIC Science & Technology

    1991-07-23

    and H. Schroeder (eds.), Proceedings of the International Fechner Symposium. Amsterdam: North Holland. VanZandt, T. and Townsend, J.R. (Submitted...Smith, L. B. (Forthcoming). A connectionist model of the development of the notion of sameness. Thirteenth Annual Conference of the Cognitive Science...1987). A detection theory method for the analysis of visual and auditory displays. Proceedings of the 31st Annual Meeting of the Human Factors

  5. Automatic Detection of Mitosis and Nuclei From Cytogenetic Images by CellProfiler Software for Mitotic Index Estimation.

    PubMed

    González, Jorge Ernesto; Radl, Analía; Romero, Ivonne; Barquinero, Joan Francesc; García, Omar; Di Giorgio, Marina

    2016-12-01

    Mitotic index (MI) estimation, expressed as the percentage of cells in mitosis, plays an important role as a quality-control endpoint. To this end, MI is used to check the lot of media and reagents to be used throughout the assay and also to check cellular viability after blood sample shipping, indicating satisfactory or unsatisfactory conditions for the progression of cell culture. The objective of this paper was to apply the open-source CellProfiler software to the automatic detection of mitotic figures and nuclei in digitized images of cultured human lymphocytes for MI assessment, and to compare its performance with semi-automatic and visual detection. Lymphocytes were irradiated and cultured for mitosis detection. Sets of images from cultures were analyzed visually and the findings were compared with those obtained using the CellProfiler software. The CellProfiler pipeline detects nuclei and mitoses with 80% sensitivity and more than 99% specificity. We conclude that CellProfiler is a reliable tool for counting mitoses and nuclei in cytogenetic images; it saves considerable time compared with manual operation and reduces the variability derived from the scoring criteria of different scorers. The CellProfiler automated pipeline achieves good agreement with the visual counting workflow, i.e. it allows fully automated scoring of mitoses and nuclei in cytogenetic images, yielding reliable information with minimal user intervention. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
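
    The mitotic index computation itself is straightforward; the sketch below assumes the nuclei count covers interphase cells only, which may differ from how a given pipeline reports its counts:

```python
def mitotic_index(n_mitotic, n_interphase_nuclei):
    """Mitotic index as the percentage of scored cells that are in mitosis.
    Assumes the nuclei count excludes mitotic figures; if a pipeline counts
    every cell as a nucleus, divide by that total instead."""
    total = n_mitotic + n_interphase_nuclei
    return 100.0 * n_mitotic / total if total else 0.0

print(mitotic_index(n_mitotic=42, n_interphase_nuclei=958))   # 4.2
```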

  6. Visual detection of Brucella in bovine biological samples using DNA-activated gold nanoparticles

    PubMed Central

    Kumar, Satish; Kaur, Gurpreet; Ali, Syed Atif; Shrivastava, Sameer; Gupta, Praveen K.; Cooper, Jonathan M.; Chaudhuri, Pallab

    2017-01-01

    Brucellosis is a bacterial disease which, although primarily affecting cattle, has been associated with human infections, making its detection an important challenge. The existing gold-standard diagnosis relies on culture of the bacteria, a lengthy and costly process that can take up to 45 days. New technologies based on molecular diagnosis have been proposed, either dip-stick immunological assays, which have limited specificity, or nucleic acid tests, which can identify the pathogen but are impractical for use in the field, where most of the reservoir cases are located. Here we demonstrate a new test based on hybridization assays with metal nanoparticles which, upon detection of a specific pathogen-derived DNA sequence, yields a visual colour change. We characterise the components used in the assay with a range of analytical techniques and show sensitivities down to 1000 cfu/ml for the detection of Brucella. Finally, we demonstrate that the assay works in a range of bovine samples including semen, milk and urine, opening up the potential for its use in the field, in low-resource settings. PMID:28719613

  7. Supporting dynamic change detection: using the right tool for the task.

    PubMed

    Vallières, Benoît R; Hodgetts, Helen M; Vachon, François; Tremblay, Sébastien

    2016-01-01

    Detecting task-relevant changes in a visual scene is necessary for successfully monitoring and managing dynamic command and control situations. Change blindness, the failure to notice visual changes, is an important source of human error. Change History EXplicit (CHEX) is a tool developed to aid change detection and maintain situation awareness, and in the current study we test the generality of its ability to facilitate the detection of changes when this subtask is embedded within a broader dynamic decision-making task. A multitasking air-warfare simulation required participants to perform radar-based subtasks, for which change detection was a necessary aspect of the higher-order goal of protecting one's own ship. In this task, however, CHEX rendered the operator even more vulnerable to attentional failures in change detection and increased perceived workload. Such support was only effective when participants performed a change detection task without concurrent subtasks. Results are interpreted in terms of the NSEEV model of attention behavior (Steelman, McCarley, & Wickens, Hum. Factors 53:142-153, 2011; J. Exp. Psychol. Appl. 19:403-419, 2013), and suggest that decision aids for use in multitasking contexts must be designed to fit within the available workload capacity of the user so that they may truly augment cognition.

  8. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.
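
    The core quantity in such acoustic surveys is the detection function. The sketch below (Python; parameters are illustrative, and it is not the authors' SECR likelihood) shows a half-normal detection function for a single listening post, the effective sampling area it implies, and the naive density estimate that follows. The full SECR estimator additionally uses recaptures across listening posts, the estimated bearings, and a calling-availability parameter for multi-occasion data.

        # Minimal sketch of the detection-function logic behind an acoustic density survey.
        import numpy as np

        def p_detect(distance_m, g0=0.9, sigma_m=400.0):
            """Half-normal probability that a calling group at this distance is heard."""
            return g0 * np.exp(-distance_m**2 / (2.0 * sigma_m**2))

        def effective_sampling_area_m2(g0=0.9, sigma_m=400.0, r_max=5000.0, dr=1.0):
            """Integrate detection probability over annuli around one listening post."""
            r = np.arange(dr / 2.0, r_max, dr)
            return float(np.sum(p_detect(r, g0, sigma_m) * 2.0 * np.pi * r * dr))

        # Naive density of calling groups (per km^2) from n detections at one post;
        # hypothetical numbers for illustration only.
        n_detected = 12
        density_per_km2 = n_detected / (effective_sampling_area_m2() / 1e6)
        print(round(density_per_km2, 2))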

  9. Spatial vision in older adults: perceptual changes and neural bases.

    PubMed

    McKendrick, Allison M; Chan, Yu Man; Nguyen, Bao N

    2018-05-17

    The number of older adults is rapidly increasing internationally, leading to a significant increase in research on how healthy ageing impacts vision. Most clinical assessments of spatial vision involve simple detection (letter acuity, grating contrast sensitivity, perimetry). However, most natural visual environments are more spatially complicated, requiring contrast discrimination, and the delineation of object boundaries and contours, which are typically present on non-uniform backgrounds. In this review we discuss recent research that reports on the effects of normal ageing on these more complex visual functions, specifically in the context of recent neurophysiological studies. Recent research has concentrated on understanding the effects of healthy ageing on neural responses within the visual pathway in animal models. Such neurophysiological research has led to numerous, subsequently tested, hypotheses regarding the likely impact of healthy human ageing on specific aspects of spatial vision. Healthy normal ageing impacts significantly on spatial visual information processing from the retina through to visual cortex. Some human data validate those obtained from studies of animal physiology; however, some findings indicate that a rethinking of the presumed neural substrates is required. Notably, not all spatial visual processes are altered by age. Healthy normal ageing impacts significantly on some spatial visual processes (in particular centre-surround tasks), but leaves contrast discrimination, contrast adaptation, and orientation discrimination relatively intact. The study of older adult vision contributes to knowledge of the brain mechanisms altered by the ageing process, can provide practical information regarding visual environments that older adults may find challenging, and may lead to new methods of assessing visual performance in clinical environments. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.

  10. Observer performance in semi-automated microbleed detection

    NASA Astrophysics Data System (ADS)

    Kuijf, Hugo J.; Brundel, Manon; de Bresser, Jeroen; Viergever, Max A.; Biessels, Geert Jan; Geerlings, Mirjam I.; Vincken, Koen L.

    2013-03-01

    Cerebral microbleeds are small bleedings in the human brain, detectable with MRI. Microbleeds are associated with vascular disease and dementia. The number of studies involving microbleed detection is increasing rapidly. Visual rating is the current standard for detection, but it is a time-consuming process, especially for high-resolution 7.0 T MR images, has limited reproducibility, and is highly observer dependent. Recently, multiple techniques have been published for the semi-automated detection of microbleeds, attempting to overcome these problems. In the present study, a 7.0 T dual-echo gradient echo MR image was acquired in 18 participants with microbleeds from the SMART study. Two experienced observers identified 54 microbleeds in these participants, using a validated visual rating scale. The radial symmetry transform (RST) can be used for semi-automated detection of microbleeds in 7.0 T MR images. In the present study, the results of the RST were assessed by two observers and 47 microbleeds were identified: 35 true positives and 12 extra positives (microbleeds that were missed during visual rating). Hence, after combining the scores, a total of 66 microbleeds could be identified in the 18 participants. The use of the RST increased the average sensitivity of observers from 59% to 69%. More importantly, inter-observer agreement (ICC and Dice's coefficient) increased from 0.85 and 0.64 to 0.98 and 0.96, respectively. Furthermore, the required rating time was reduced from 30 to 2 minutes per participant. By fine-tuning the RST, sensitivities up to 90% can be achieved, at the cost of extra false positives.
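
    The agreement statistics quoted above are easy to reproduce once each observer's detections are expressed as sets of lesion identifiers. The short Python sketch below uses hypothetical ratings rather than the study's data to illustrate per-lesion sensitivity against the combined reference standard and Dice's coefficient between two observers.

        # Minimal sketch of per-lesion sensitivity and Dice agreement (hypothetical data).
        def sensitivity(detected, reference):
            """Fraction of reference microbleeds found by one observer (sets of lesion IDs)."""
            return len(detected & reference) / len(reference)

        def dice(a, b):
            """Dice's coefficient between two observers' detection sets."""
            return 2 * len(a & b) / (len(a) + len(b))

        reference = set(range(1, 67))    # 66 microbleeds after combined scoring
        observer1 = set(range(1, 47))    # hypothetical: 46 detections
        observer2 = set(range(4, 51))    # hypothetical: 47 detections, partly different
        print(round(sensitivity(observer1, reference), 2), round(dice(observer1, observer2), 2))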

  11. Biophysics of object segmentation in a collision-detecting neuron

    PubMed Central

    Dewell, Richard Burkett

    2018-01-01

    Collision avoidance is critical for survival, including in humans, and many species possess visual neurons exquisitely sensitive to objects approaching on a collision course. Here, we demonstrate that a collision-detecting neuron can detect the spatial coherence of a simulated impending object, thereby carrying out a computation akin to object segmentation critical for proper escape behavior. At the cellular level, object segmentation relies on a precise selection of the spatiotemporal pattern of synaptic inputs by dendritic membrane potential-activated channels. One channel type linked to dendritic computations in many neural systems, the hyperpolarization-activated cation channel, HCN, plays a central role in this computation. Pharmacological block of HCN channels abolishes the neuron's spatial selectivity and impairs the generation of visually guided escape behaviors, making it directly relevant to survival. Additionally, our results suggest that the interaction of HCN and inactivating K+ channels within active dendrites produces neuronal and behavioral object specificity by discriminating between complex spatiotemporal synaptic activation patterns. PMID:29667927

  12. Infrared imaging of the crime scene: possibilities and pitfalls.

    PubMed

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.

  13. Laboratory Evaluation of a Smartphone-Based Electronic Reader of Rapid Dual Point-of-Care Tests for Antibodies to Human Immunodeficiency Virus and Treponema pallidum Infections.

    PubMed

    Herbst de Cortina, Sasha; Bristow, Claire C; Humphries, Romney; Vargas, Silver Keith; Konda, Kelika A; Caceres, Carlos F; Klausner, Jeffrey D

    2017-07-01

    Dual point-of-care tests for antibodies to human immunodeficiency virus (HIV) and Treponema pallidum allow for same-day testing and treatment and have been demonstrated to be cost-effective in preventing the adverse outcomes of HIV infection and syphilis. By recording and transmitting data as they are collected, electronic readers address challenges related to the decentralization of point-of-care testing. We evaluated a smartphone-based electronic reader using 201 sera tested with 2 dual rapid tests for detection of antibodies to HIV and T. pallidum in Los Angeles, USA, and Lima, Peru. Tests were read both visually and with the electronic reader. Enzyme immunoassay followed by Western blot and T. pallidum particle agglutination were the reference tests for HIV and T. pallidum, respectively. The sensitivities of the 2 rapid tests for detection of HIV were 94.1% and 97.0% for electronic readings. Both tests had a specificity of 100% for detection of HIV by electronic reading. The sensitivities of the 2 rapid tests for detection of T. pallidum were 86.5% and 92.4% for electronic readings. The specificities for detection of T. pallidum were 99.1% and 99.0% by electronic reading. There were no significant differences between the accuracies of visual and electronic readings, and the performance did not differ between the 2 study sites. Our results show the electronic reader to be a promising option for increasing the use of point-of-care testing programs.
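
    The accuracy figures reported above follow from a standard two-by-two confusion matrix. The Python sketch below shows the computation; the counts are illustrative only, chosen to give a sensitivity near 97% and a specificity of 100% for a panel of 201 sera, and are not the study's raw data.

        # Minimal sketch of sensitivity/specificity from a 2x2 confusion matrix.
        def diagnostic_accuracy(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)   # true-positive rate
            specificity = tn / (tn + fp)   # true-negative rate
            return sensitivity, specificity

        # e.g. electronic readings of a rapid HIV test against EIA/Western blot (illustrative):
        print(diagnostic_accuracy(tp=64, fp=0, fn=2, tn=135))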

  14. Maintaining perceptual constancy while remaining vigilant: left hemisphere change blindness and right hemisphere vigilance.

    PubMed

    Vos, Leia; Whitman, Douglas

    2014-01-01

    A considerable literature suggests that the right hemisphere is dominant in vigilance for novel and survival-related stimuli, such as predators, across a wide range of species. In contrast to vigilance for change, change blindness is a failure to detect obvious changes in a visual scene when they are obscured by a disruption in scene presentation. We studied lateralised change detection using a series of scenes with salient changes in either the left or right visual fields. In Study 1, left visual field changes were detected more rapidly than right visual field changes, confirming a right hemisphere advantage for change detection. Increasing stimulus difficulty resulted in more right visual field detections, and left hemisphere detection was more likely when a change had occurred in the right visual field on a prior trial. In Study 2, an intervening distractor task disrupted the influence of prior trials. Again, faster detection speeds were observed for left visual field changes, with a shift to a right visual field advantage with increasing time-to-detection. This suggests that a right hemisphere role for vigilance, or catching attention, and a left hemisphere role for target evaluation, or maintaining attention, are present at the earliest stage of change detection.

  15. The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?

    PubMed

    Boyen, Kris; Başkent, Deniz; van Dijk, Pim

    2015-01-01

    Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss-matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans.
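
    Adaptive psychoacoustic tests of this kind typically use a transformed up-down staircase. The Python sketch below is a generic 2-down/1-up procedure run against a hypothetical listener rather than the study's exact protocol; it converges near the 70.7%-correct point of the psychometric function and averages the final reversals to estimate the gap threshold.

        # Minimal sketch of a 2-down/1-up staircase for a gap-detection threshold.
        import random

        def simulated_listener(gap_ms, threshold_ms=6.0):
            """Hypothetical 2AFC observer: near chance for tiny gaps, near perfect for long ones."""
            p_correct = 0.5 + 0.5 / (1.0 + (threshold_ms / max(gap_ms, 1e-3)) ** 4)
            return random.random() < p_correct

        def two_down_one_up(start_ms=20.0, step=1.25, n_reversals=8):
            gap, run, last_dir, reversals = start_ms, 0, 0, []
            while len(reversals) < n_reversals:
                if simulated_listener(gap):
                    run += 1
                    if run == 2:                       # two correct in a row -> make it harder
                        run = 0
                        if last_dir == +1:
                            reversals.append(gap)
                        last_dir, gap = -1, gap / step
                else:                                  # one error -> make it easier
                    run = 0
                    if last_dir == -1:
                        reversals.append(gap)
                    last_dir, gap = +1, gap * step
            tail = reversals[-6:]
            return sum(tail) / len(tail)

        print(round(two_down_one_up(), 2))             # estimated minimum detectable gap (ms)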

  16. Visual pop-out in barn owls: Human-like behavior in the avian brain.

    PubMed

    Orlowski, Julius; Beissel, Christian; Rohn, Friederike; Adato, Yair; Wagner, Hermann; Ben-Shahar, Ohad

    2015-01-01

    Visual pop-out is a phenomenon by which the latency to detect a target in a scene is independent of the number of other elements, the distractors. Pop-out is an effective form of visual-search guidance that typically occurs when the target is distinct in one feature from the distractors, thus facilitating fast detection of predators or prey. However, apart from studies on primates, pop-out has been examined in few species and demonstrated thus far only in rats, archer fish, and pigeons. To fill this gap, here we study pop-out in barn owls. These birds are a unique model system for such exploration because their lack of eye movements dictates visual behavior dominated by head movements. Head saccades and interspersed fixation periods can therefore be tracked and analyzed with a head-mounted wireless microcamera, the OwlCam. Using this methodology we confronted two owls with scenes containing search arrays of one target among varying numbers (15-63) of similar-looking distractors. We tested targets distinct either by orientation (Experiment 1) or luminance contrast (Experiment 2). Search time and the number of saccades until the target was fixated remained largely independent of the number of distractors in both experiments. This suggests that barn owls can exhibit pop-out during visual search, thus expanding the group of species and brain structures that can cope with this fundamental visual behavior. The utility of our automatic analysis method is further discussed for other species and scientific questions.

  17. Testing the effectiveness of automated acoustic sensors for monitoring vocal activity of Marbled Murrelets Brachyramphus marmoratus

    USGS Publications Warehouse

    Cragg, Jenna L.; Burger, Alan E.; Piatt, John F.

    2015-01-01

    Cryptic nest sites and secretive breeding behavior make population estimates and monitoring of Marbled Murrelets Brachyramphus marmoratus difficult and expensive. Standard audio-visual and radar protocols have been refined but require intensive field time by trained personnel. We examined the detection range of automated sound recorders (Song Meters; Wildlife Acoustics Inc.) and the reliability of automated recognition models (“recognizers”) for identifying and quantifying Marbled Murrelet vocalizations during the 2011 and 2012 breeding seasons at Kodiak Island, Alaska. The detection range of murrelet calls by Song Meters was estimated to be 60 m. Recognizers detected 20 632 murrelet calls (keer and keheer) from a sample of 268 h of recordings, yielding 5 870 call series, which compared favorably with human scanning of spectrograms (on average detecting 95% of the number of call series identified by a human observer, but not necessarily the same call series). The false-negative rate (percentage of murrelet call series that the recognizers failed to detect) was 32%, mainly involving weak calls and short call series. False-positives (other sounds included by recognizers as murrelet calls) were primarily due to complex songs of other bird species, wind and rain. False-positives were lower in forest nesting habitat (48%) and highest in shrubby vegetation where calls of other birds were common (97%–99%). Acoustic recorders tracked spatial and seasonal trends in vocal activity, with higher call detections in high-quality forested habitat and during late July/early August. Automated acoustic monitoring of Marbled Murrelet calls could provide cost-effective, valuable information for assessing habitat use and temporal and spatial trends in nesting activity; reliability is dependent on careful placement of sensors to minimize false-positives and on prudent application of digital recognizers with visual checking of spectrograms.

  18. Design and synthesis of target-responsive hydrogel for portable visual quantitative detection of uranium with a microfluidic distance-based readout device.

    PubMed

    Huang, Yishun; Fang, Luting; Zhu, Zhi; Ma, Yanli; Zhou, Leiji; Chen, Xi; Xu, Dunming; Yang, Chaoyong

    2016-11-15

    Due to uranium's increasing exploitation in nuclear energy and its toxicity to human health, it is of great significance to detect uranium contamination. In particular, development of a rapid, sensitive and portable method is important for personal health care for those who frequently come into contact with uranium ore mining or who investigate leaks at nuclear power plants. The most stable form of uranium in water is the uranyl ion (UO2(2+)). In this work, a UO2(2+)-responsive smart hydrogel was designed and synthesized for rapid, portable, sensitive detection of UO2(2+). A UO2(2+)-dependent DNAzyme complex composed of a substrate strand and an enzyme strand was utilized to crosslink DNA-grafted polyacrylamide chains to form a DNA hydrogel. Colorimetric analysis was achieved by encapsulating gold nanoparticles (AuNPs) in the DNAzyme-crosslinked hydrogel to indicate the concentration of UO2(2+). Without UO2(2+), the enzyme strand is not active. The presence of UO2(2+) in the sample activates the enzyme strand and triggers the cleavage of the substrate strand from the enzyme strand, thereby decreasing the density of crosslinkers and destabilizing the hydrogel, which then releases the encapsulated AuNPs. UO2(2+) concentrations as low as 100 nM were detected visually by the naked eye. The target-responsive hydrogel was also demonstrated to be applicable in natural water spiked with UO2(2+). Furthermore, to avoid the visual errors caused by naked-eye observation, a previously developed volumetric bar-chart chip (V-Chip) was used to quantitatively detect UO2(2+) concentrations in water by encapsulating Au-Pt nanoparticles in the hydrogel. The UO2(2+) concentrations were visually quantified from the travelling distance of the ink bar on the V-Chip. The method can be used for portable and quantitative detection of uranium in field applications without skilled operators and sophisticated instruments. Copyright © 2016 Elsevier B.V. All rights reserved.
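
    Reading a concentration off a distance-based chip amounts to inverting a calibration curve. The Python sketch below uses an invented, monotonic distance-versus-concentration calibration rather than the paper's data, and linear interpolation to convert an ink-bar travel distance into a UO2(2+) concentration.

        # Minimal sketch of a distance-to-concentration readout (illustrative calibration).
        import numpy as np

        cal_conc_nM = np.array([0, 100, 250, 500, 1000, 2000])     # hypothetical standards
        cal_dist_mm = np.array([2.0, 5.5, 9.0, 14.0, 20.5, 27.0])  # hypothetical bar distances

        def concentration_from_distance(distance_mm):
            """Interpolate within the calibration range (monotonic readout assumed)."""
            return float(np.interp(distance_mm, cal_dist_mm, cal_conc_nM))

        print(concentration_from_distance(12.0))   # a few hundred nM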

  19. To See or Not to See: Investigating Detectability of Ganges River Dolphins Using a Combined Visual-Acoustic Survey

    PubMed Central

    Richman, Nadia I.; Gibbons, James M.; Turvey, Samuel T.; Akamatsu, Tomonari; Ahmed, Benazir; Mahabub, Emile; Smith, Brian D.; Jones, Julia P. G.

    2014-01-01

    Detection of animals during visual surveys is rarely perfect or constant, and failure to account for imperfect detectability affects the accuracy of abundance estimates. Freshwater cetaceans are among the most threatened group of mammals, and visual surveys are a commonly employed method for estimating population size despite concerns over imperfect and unquantified detectability. We used a combined visual-acoustic survey to estimate detectability of Ganges River dolphins (Platanista gangetica gangetica) in four waterways of southern Bangladesh. The combined visual-acoustic survey resulted in consistently higher detectability than a single observer-team visual survey, thereby improving power to detect trends. Visual detectability was particularly low for dolphins close to meanders where these habitat features temporarily block the view of the preceding river surface. This systematic bias in detectability during visual-only surveys may lead researchers to underestimate the importance of heavily meandering river reaches. Although the benefits of acoustic surveys are increasingly recognised for marine cetaceans, they have not been widely used for monitoring abundance of freshwater cetaceans due to perceived costs and technical skill requirements. We show that acoustic surveys are in fact a relatively cost-effective approach for surveying freshwater cetaceans, once it is acknowledged that methods that do not account for imperfect detectability are of limited value for monitoring. PMID:24805782

  20. To see or not to see: investigating detectability of Ganges River dolphins using a combined visual-acoustic survey.

    PubMed

    Richman, Nadia I; Gibbons, James M; Turvey, Samuel T; Akamatsu, Tomonari; Ahmed, Benazir; Mahabub, Emile; Smith, Brian D; Jones, Julia P G

    2014-01-01

    Detection of animals during visual surveys is rarely perfect or constant, and failure to account for imperfect detectability affects the accuracy of abundance estimates. Freshwater cetaceans are among the most threatened group of mammals, and visual surveys are a commonly employed method for estimating population size despite concerns over imperfect and unquantified detectability. We used a combined visual-acoustic survey to estimate detectability of Ganges River dolphins (Platanista gangetica gangetica) in four waterways of southern Bangladesh. The combined visual-acoustic survey resulted in consistently higher detectability than a single observer-team visual survey, thereby improving power to detect trends. Visual detectability was particularly low for dolphins close to meanders where these habitat features temporarily block the view of the preceding river surface. This systematic bias in detectability during visual-only surveys may lead researchers to underestimate the importance of heavily meandering river reaches. Although the benefits of acoustic surveys are increasingly recognised for marine cetaceans, they have not been widely used for monitoring abundance of freshwater cetaceans due to perceived costs and technical skill requirements. We show that acoustic surveys are in fact a relatively cost-effective approach for surveying freshwater cetaceans, once it is acknowledged that methods that do not account for imperfect detectability are of limited value for monitoring.
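
    The logic of correcting counts for imperfect detectability can be illustrated with a simple two-platform, Lincoln-Petersen-style calculation: treating the acoustic detections as an independent platform, the visual team's detection probability is estimated from the fraction of acoustically detected groups it also saw. The Python sketch below uses hypothetical counts, not the survey's data, and ignores the distance and habitat dependence discussed above.

        # Minimal sketch of a two-platform detectability correction (hypothetical counts).
        n_both, n_acoustic_only, n_visual_only = 38, 14, 9

        p_visual = n_both / (n_both + n_acoustic_only)   # visual detection probability
        n_visual = n_both + n_visual_only
        corrected_count = n_visual / p_visual            # detectability-corrected visual count

        print(round(p_visual, 2), round(corrected_count, 1))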

  1. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Our aim was to test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (n = 88) and brain and visual cortex (n = 99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes and (iii) different visual cortical areas, independently of overall brain volume. In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices.

  2. Is orbital volume associated with eyeball and visual cortex volume in humans?

    PubMed Central

    Pearce, Eiluned; Bridge, Holly

    2013-01-01

    Background: In humans orbital volume increases linearly with absolute latitude. Scaling across mammals between visual system components suggests that these larger orbits should translate into larger eyes and visual cortices in high latitude humans. Larger eyes at high latitudes may be required to maintain adequate visual acuity and enhance visual sensitivity under lower light levels. Aim: To test the assumption that orbital volume can accurately index eyeball and visual cortex volumes specifically in humans. Subjects & Methods: Structural Magnetic Resonance Imaging (MRI) techniques are employed to measure eye and orbit (N=88), and brain and visual cortex (N=99) volumes in living humans. Facial dimensions and foramen magnum area (a proxy for body mass) were also measured. Results: A significant positive linear relationship was found between (i) orbital and eyeball volumes, (ii) eyeball and visual cortex grey matter volumes, (iii) different visual cortical areas, independently of overall brain volume. Conclusion: In humans the components of the visual system scale from orbit to eye to visual cortex volume independently of overall brain size. These findings indicate that orbit volume can index eye and visual cortex volume in humans, suggesting that larger high latitude orbits do translate into larger visual cortices. PMID:23879766

  3. The influence of spontaneous activity on stimulus processing in primary visual cortex.

    PubMed

    Schölvinck, M L; Friston, K J; Rees, G

    2012-02-01

    Spontaneous activity in the resting human brain has been studied extensively; however, how such activity affects the local processing of a sensory stimulus is relatively unknown. Here, we examined the impact of spontaneous activity in primary visual cortex on neuronal and behavioural responses to a simple visual stimulus, using functional MRI. Stimulus-evoked responses remained essentially unchanged by spontaneous fluctuations, combining with them in a largely linear fashion (i.e., with little evidence for an interaction). However, interactions between spontaneous fluctuations and stimulus-evoked responses were evident behaviourally; high levels of spontaneous activity tended to be associated with increased stimulus detection at perceptual threshold. Our results extend those found in studies of spontaneous fluctuations in motor cortex and higher order visual areas, and suggest a fundamental role for spontaneous activity in stimulus processing. Copyright © 2011. Published by Elsevier Inc.

  4. Cognitive Control Network Contributions to Memory-Guided Visual Attention

    PubMed Central

    Rosen, Maya L.; Stern, Chantal E.; Michalka, Samantha W.; Devaney, Kathryn J.; Somers, David C.

    2016-01-01

    Visual attentional capacity is severely limited, but humans excel in familiar visual contexts, in part because long-term memories guide efficient deployment of attention. To investigate the neural substrates that support memory-guided visual attention, we performed a set of functional MRI experiments that contrast long-term, memory-guided visuospatial attention with stimulus-guided visuospatial attention in a change detection task. Whereas the dorsal attention network was activated for both forms of attention, the cognitive control network (CCN) was preferentially activated during memory-guided attention. Three posterior nodes in the CCN, posterior precuneus, posterior callosal sulcus/mid-cingulate, and lateral intraparietal sulcus exhibited the greatest specificity for memory-guided attention. These 3 regions exhibit functional connectivity at rest, and we propose that they form a subnetwork within the broader CCN. Based on the task activation patterns, we conclude that the nodes of this subnetwork are preferentially recruited for long-term memory guidance of visuospatial attention. PMID:25750253

  5. Iodine-Mediated Etching of Gold Nanorods for Plasmonic ELISA Based on Colorimetric Detection of Alkaline Phosphatase.

    PubMed

    Zhang, Zhiyang; Chen, Zhaopeng; Wang, Shasha; Cheng, Fangbin; Chen, Lingxin

    2015-12-23

    Here, we propose a plasmonic enzyme-linked immunosorbent assay (ELISA) based on highly sensitive colorimetric detection of alkaline phosphatase (ALP), which is achieved by iodine-mediated etching of gold nanorods (AuNRs). Once the sandwich-type immunocomplex is formed, the ALP bound on the polystyrene microwells will hydrolyze ascorbic acid 2-phosphate into ascorbic acid. Subsequently, iodate is reduced to iodine, a moderate oxidant, which etches AuNRs from rod to sphere in shape. The shape change of AuNRs leads to a blue-shift of longitudinal localized surface plasmon resonance. As a result, the solution of AuNRs changes from blue to red. Benefiting from the highly sensitive detection of ALP, the proposed plasmonic ELISA has achieved an ultralow detection limit (100 pg/mL) for human immunoglobulin G (IgG). Importantly, the visual detection limit (3.0 ng/mL) allows the rapid differential diagnosis with the naked eye. The further detection of human IgG in fetal bovine serum indicates its applicability to the determination of low abundance protein in complex biological samples.
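
    Detection limits of this kind are commonly derived from a calibration curve as three standard deviations of the blank divided by the slope. The Python sketch below illustrates that calculation with invented numbers chosen only to land near the quoted 100 pg/mL figure; it is not the paper's calibration data.

        # Minimal sketch of a 3*sigma/slope limit-of-detection estimate (illustrative data).
        import numpy as np

        conc_ng_ml = np.array([0.0, 0.1, 0.5, 1.0, 5.0, 10.0])      # IgG standards
        signal     = np.array([0.02, 0.05, 0.16, 0.30, 1.42, 2.80]) # e.g. colorimetric response

        slope, intercept = np.polyfit(conc_ng_ml, signal, 1)
        sd_blank = 0.009                                            # hypothetical blank SD
        lod_ng_ml = 3 * sd_blank / slope
        print(round(lod_ng_ml, 3))                                  # ~0.1 ng/mL, i.e. ~100 pg/mL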

  6. Mapping the structure of perceptual and visual-motor abilities in healthy young adults.

    PubMed

    Wang, Lingling; Krasich, Kristina; Bel-Bahar, Tarik; Hughes, Lauren; Mitroff, Stephen R; Appelbaum, L Gregory

    2015-05-01

    The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Two-color mixing for classifying agricultural products for safety and quality

    NASA Astrophysics Data System (ADS)

    Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin; Chan, Diane E.

    2006-02-01

    We show that the chromaticness of the visual signal that results from the two-color mixing achieved through an optically enhanced binocular device is directly related to the band ratio of light intensity at the two selected wavebands. A technique that implements the band-ratio criterion in a visual device by using two-color mixing is presented here. The device will allow inspectors to identify targets visually in accordance with a two-wavelength band ratio. It is a method of inspection by human vision assisted by an optical device, which offers greater flexibility and better cost savings than a multispectral machine vision system that implements the band-ratio criterion. With proper selection of the two narrow wavebands, discrimination by chromaticness that is directly related to the band ratio can work well. An example application of this technique is given for the inspection of chicken carcasses afflicted with various diseases. An optimal pair of wavelengths of 454 and 578 nm was selected to optimize differences in saturation and hue in CIE LUV color space among different types of target. Another example application, for the detection of chilling injury in cucumbers, is also given; here the selected wavelength pair was 504 and 652 nm. The novel two-color mixing technique for visual inspection can be included in visual devices for various applications, ranging from target detection to food safety inspection.
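
    Although the device implements it optically, the band-ratio criterion itself is simple to state computationally. The Python sketch below (random stand-in images and assumed thresholds, not the authors' data) computes a pixel-wise ratio of two co-registered narrow-band images and flags pixels whose ratio falls outside an assumed normal range.

        # Minimal sketch of a two-waveband band-ratio classifier (illustrative thresholds).
        import numpy as np

        def band_ratio_map(band_a, band_b, eps=1e-6):
            """Pixel-wise intensity ratio of two co-registered narrow-band images."""
            return band_a / (band_b + eps)

        def flag_targets(ratio_map, low=0.8, high=1.3):
            """Mark pixels whose ratio lies outside the range assumed for normal tissue."""
            return (ratio_map < low) | (ratio_map > high)

        band_454 = np.random.rand(64, 64)   # stand-ins for images at 454 nm and 578 nm
        band_578 = np.random.rand(64, 64)
        suspect_pixels = flag_targets(band_ratio_map(band_454, band_578))
        print(int(suspect_pixels.sum()), "pixels flagged")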

  8. Nonlinear circuits for naturalistic visual motion estimation

    PubMed Central

    Fitzgerald, James E; Clark, Damon A

    2015-01-01

    Many animals use visual signals to estimate motion. Canonical models suppose that animals estimate motion by cross-correlating pairs of spatiotemporally separated visual signals, but recent experiments indicate that humans and flies perceive motion from higher-order correlations that signify motion in natural environments. Here we show how biologically plausible processing motifs in neural circuits could be tuned to extract this information. We emphasize how known aspects of Drosophila's visual circuitry could embody this tuning and predict fly behavior. We find that segregating motion signals into ON/OFF channels can enhance estimation accuracy by accounting for natural light/dark asymmetries. Furthermore, a diversity of inputs to motion detecting neurons can provide access to more complex higher-order correlations. Collectively, these results illustrate how non-canonical computations improve motion estimation with naturalistic inputs. This argues that the complexity of the fly's motion computations, implemented in its elaborate circuits, represents a valuable feature of its visual motion estimator. DOI: http://dx.doi.org/10.7554/eLife.09123.001 PMID:26499494
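
    The canonical pairwise model referred to above is the Hassenstein-Reichardt correlator, which multiplies each input with a delayed copy of its neighbour and subtracts the two mirror-symmetric half-detectors. The Python sketch below is a minimal discrete-time version with a first-order low-pass filter standing in for the delay; parameters are illustrative, and it omits the ON/OFF splitting and higher-order correlations discussed in the paper.

        # Minimal sketch of a Hassenstein-Reichardt motion correlator.
        import numpy as np

        def lowpass(signal, tau=5.0):
            """First-order low-pass filter acting as the delay line."""
            out = np.zeros_like(signal, dtype=float)
            alpha = 1.0 / tau
            for t in range(1, len(signal)):
                out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
            return out

        def reichardt(left, right, tau=5.0):
            """Positive net output for left-to-right motion, negative for the reverse."""
            return lowpass(left, tau) * right - lowpass(right, tau) * left

        # A bright edge passing first over the left and then the right photoreceptor:
        t = np.arange(200)
        left_input  = (t > 50).astype(float)
        right_input = (t > 60).astype(float)
        print(reichardt(left_input, right_input).sum() > 0)   # True: rightward motion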

  9. Salience of the lambs: a test of the saliency map hypothesis with pictures of emotive objects.

    PubMed

    Humphrey, Katherine; Underwood, Geoffrey; Lambert, Tony

    2012-01-25

    Humans have an ability to rapidly detect emotive stimuli. However, many emotional objects in a scene are also highly visually salient, which raises the question of how dependent the effects of emotionality are on visual saliency and whether the presence of an emotional object changes the power of a more visually salient object in attracting attention. Participants were shown a set of positive, negative, and neutral pictures and completed recall and recognition memory tests. Eye movement data revealed that visual saliency does influence eye movements, but the effect is reliably reduced when an emotional object is present. Pictures containing negative objects were recognized more accurately and recalled in greater detail, and participants fixated more on negative objects than positive or neutral ones. Initial fixations were more likely to be on emotional objects than more visually salient neutral ones, suggesting that the processing of emotional features occurs at a very early stage of perception.

  10. The music of your emotions: neural substrates involved in detection of emotional correspondence between auditory and visual music actions.

    PubMed

    Petrini, Karin; Crabbe, Frances; Sheridan, Carol; Pollick, Frank E

    2011-04-29

    In humans, emotions from music serve important communicative roles. Despite a growing interest in the neural basis of music perception, action and emotion, the majority of previous studies in this area have focused on the auditory aspects of music performances. Here we investigate how the brain processes the emotions elicited by audiovisual music performances. We used event-related functional magnetic resonance imaging, and in Experiment 1 we defined the areas responding to audiovisual (musician's movements with music), visual (musician's movements only), and auditory emotional (music only) displays. Subsequently a region of interest analysis was performed to examine if any of the areas detected in Experiment 1 showed greater activation for emotionally mismatching performances (combining the musician's movements with mismatching emotional sound) than for emotionally matching music performances (combining the musician's movements with matching emotional sound) as presented in Experiment 2 to the same participants. The insula and the left thalamus were found to respond consistently to visual, auditory and audiovisual emotional information and to have increased activation for emotionally mismatching displays in comparison with emotionally matching displays. In contrast, the right thalamus was found to respond to audiovisual emotional displays and to have similar activation for emotionally matching and mismatching displays. These results suggest that the insula and left thalamus have an active role in detecting emotional correspondence between auditory and visual information during music performances, whereas the right thalamus has a different role.

  11. Ceci n'est pas une micromachine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yarberry, Victor R.; Diegert, Carl F.

    2010-03-01

    The image created in reflected light DIC can often be interpreted as a true three-dimensional representation of the surface geometry, provided a clear distinction can be realized between raised and lowered regions in the specimen. It may be helpful if our definition of saliency embraces work on the human visual system (HVS) as well as the more abstract work on saliency, as it is certain that understanding by humans will always stand between recording of a useful signal from all manner of sensors and so-called actionable intelligence. A DARPA/DSO program lays down this requirement in a current program (Kruse 2010): The vision for the Neurotechnology for Intelligence Analysts (NIA) Program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Current computer-based target detection capabilities cannot process vast volumes of imagery with the speed, flexibility, and precision of the human visual system.

  12. Networks for image acquisition, processing and display

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.

    1990-01-01

    The human visual system comprises layers of networks which sample, process, and code images. Understanding these networks is a valuable means of understanding human vision and of designing autonomous vision systems based on network processing. Ames Research Center has an ongoing program to develop computational models of such networks. The models predict human performance in detection of targets and in discrimination of displayed information. In addition, the models are artificial vision systems sharing properties with biological vision that has been tuned by evolution for high performance. Properties include variable density sampling, noise immunity, multi-resolution coding, and fault-tolerance. The research stresses analysis of noise in visual networks, including sampling, photon, and processing unit noises. Specific accomplishments include: models of sampling array growth with variable density and irregularity comparable to that of the retinal cone mosaic; noise models of networks with signal-dependent and independent noise; models of network connection development for preserving spatial registration and interpolation; multi-resolution encoding models based on hexagonal arrays (HOP transform); and mathematical procedures for simplifying analysis of large networks.

  13. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology

    PubMed Central

    Maekawa, Toru; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual’s emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual’s perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952

  14. The effect of mood state on visual search times for detecting a target in noise: An application of smartphone technology.

    PubMed

    Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko

    2018-01-01

    The study of visual perception has largely been completed without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings.
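
    The distinction between pop-out and serial search is conventionally quantified as the slope of search time against set size. The Python sketch below fits both slopes to illustrative numbers, not the study's data, to show the analysis behind the result described above.

        # Minimal sketch of search-slope estimation for pop-out vs. serial search.
        import numpy as np

        set_size     = np.array([5, 10, 20, 30])
        popout_rt_ms = np.array([520, 525, 530, 528])    # roughly flat across set size
        serial_rt_ms = np.array([600, 760, 1080, 1400])  # rises with the number of distractors

        popout_slope, _ = np.polyfit(set_size, popout_rt_ms, 1)
        serial_slope, _ = np.polyfit(set_size, serial_rt_ms, 1)
        print(round(popout_slope, 1), "ms/item (pop-out)", round(serial_slope, 1), "ms/item (serial)")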

  15. Ocular Changes in TgF344-AD Rat Model of Alzheimer's Disease

    PubMed Central

    Tsai, Yuchun; Lu, Bin; Ljubimov, Alexander V.; Girman, Sergey; Ross-Cisneros, Fred N.; Sadun, Alfredo A.; Svendsen, Clive N.; Cohen, Robert M.; Wang, Shaomei

    2014-01-01

    Purpose. Alzheimer's disease (AD) is the most common neurodegenerative disorder characterized by progressive decline in learning, memory, and executive functions. In addition to cognitive and behavioral deficits, vision disturbances have been reported in the early stage of AD, well before the diagnosis is clearly established. To further investigate ocular abnormalities, a novel AD transgenic rat model was analyzed. Methods. Transgenic (Tg) rats (TgF344-AD) heterozygous for human mutant APPswe/PS1ΔE9 and age-matched wild type (WT) rats, as well as 20 human postmortem retinal samples from both AD and healthy donors were used. Visual function in the rodent was analyzed using the optokinetic response. Immunohistochemistry on retinal and brain sections was used to detect various markers including amyloid-β (Aβ) plaques. Results. As expected, Aβ plaques were detected in the hippocampus, cortex, and retina of Tg rats. Plaque-like structures were also found in two AD human whole-mount retinas. The choroidal thickness was significantly reduced in both Tg rat and in AD human eyes when compared with age-matched controls. Tg rat eyes also showed hypertrophic retinal pigment epithelial cells, inflammatory cells, and upregulation of complement factor C3. Although visual acuity was lower in Tg than in WT rats, there was no significant difference in the retinal ganglion cell number and retinal vasculature. Conclusions. Further studies, including luminance threshold recordings from the superior colliculus, are needed to elucidate the significance and mechanisms of these pathological changes. PMID:24398104

  16. The impact of privacy protection filters on gender recognition

    NASA Astrophysics Data System (ADS)

    Ruchaud, Natacha; Antipov, Grigory; Korshunov, Pavel; Dugelay, Jean-Luc; Ebrahimi, Touradj; Berrani, Sid-Ahmed

    2015-09-01

    Deep learning-based algorithms have become increasingly efficient in recognition and detection tasks, especially when they are trained on large-scale datasets. Such recent success has led to speculation that deep learning methods are comparable to or even outperform the human visual system in its ability to detect and recognize objects and their features. In this paper, we focus on the specific task of gender recognition in images when they have been processed by privacy protection filters (e.g., blurring, masking, and pixelization) applied at different strengths. Assuming a privacy protection scenario, we compare the performance of state-of-the-art deep learning algorithms with a subjective evaluation obtained via crowdsourcing to understand how privacy protection filters affect both machine and human vision.
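
    The privacy filters compared in such studies are simple image operations applied at adjustable strengths. The Python sketch below implements Gaussian blurring and pixelization with Pillow; the file names are placeholders and the parameter values are arbitrary rather than those used in this evaluation.

        # Minimal sketch of blurring and pixelization privacy filters (placeholder file names).
        from PIL import Image, ImageFilter

        def gaussian_blur(img, radius=8):
            """Blur strength grows with the kernel radius."""
            return img.filter(ImageFilter.GaussianBlur(radius))

        def pixelize(img, block=16):
            """Downsample to coarse blocks, then upsample back to the original size."""
            small = img.resize((max(img.width // block, 1), max(img.height // block, 1)), Image.NEAREST)
            return small.resize(img.size, Image.NEAREST)

        face = Image.open("face.png")                         # placeholder input image
        gaussian_blur(face, radius=12).save("face_blur.png")
        pixelize(face, block=20).save("face_pixelized.png")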

  17. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  18. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  19. Effect of inherent location uncertainty on detection of stationary targets in noisy image sequences.

    PubMed

    Manjeshwar, R M; Wilson, D L

    2001-01-01

    The effect of inherent location uncertainty on the detection of stationary targets was determined in noisy image sequences. Targets were thick and thin projected cylinders mimicking arteries, catheters, and guide wires in medical imaging x-ray fluoroscopy. With the use of an adaptive forced-choice method, detection contrast sensitivity (the inverse of the threshold contrast) was measured both with and without marker cues that directed the attention of observers to the target location. With the probability correct clamped at 80%, contrast sensitivity increased an average of 77% when the marker was added to the thin-cylinder target. There was an insignificant effect on the thick cylinder. The large enhancement with the thin cylinder was obtained even though the target was located exactly in the center of a small panel, giving observers the impression that it was well localized. Psychometric functions consisting of d' plotted as a function of the square root of the signal energy-to-noise ratio gave a positive x intercept for the case of the thin cylinder without a marker. This x intercept, characteristic of uncertainty in other types of detection experiments, disappeared when the marker was added or when the thick cylinder was used. Inherent location uncertainty was further characterized by using four different markers with varying proximity to the target. Visual detection by human observers increased monotonically as the markers better localized the target. Human performance was modeled as a matched-filter detector with an uncertainty in the placement of the template. The removal of a location cue was modeled by introducing a location uncertainty of approximately 0.4 mm on the display device, or only 7 μm on the retina, a size on the order of a single photoreceptor field. We conclude that detection is affected by target location uncertainty on the order of cellular dimensions, an observation with important implications for detection mechanisms in humans. In medical imaging, the results argue strongly for inclusion of high-contrast visualization markers on catheters and other interventional devices.
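
    The observer model described above can be reproduced with a short simulation: a matched filter whose template is applied at a randomly jittered position loses far more sensitivity for a thin target than for a thick one. The Python sketch below is a minimal Monte-Carlo version with illustrative parameters, not the authors' fitted model.

        # Minimal sketch of a matched filter with template-position uncertainty.
        import numpy as np

        rng = np.random.default_rng(0)

        def dprime_with_jitter(width_px, jitter_px, contrast=0.5, noise_sd=1.0, n=2000):
            """Monte-Carlo d' for a matched filter whose template position is jittered."""
            x = np.arange(-50, 51)
            profile = np.exp(-x**2 / (2.0 * width_px**2))       # target cross-section
            sig_scores, noise_scores = [], []
            for _ in range(n):
                shift = int(round(rng.normal(0.0, jitter_px)))  # template misplacement
                template = np.roll(profile, shift)
                noise = rng.normal(0.0, noise_sd, x.size)
                sig_scores.append(np.dot(contrast * profile + noise, template))
                noise_scores.append(np.dot(rng.normal(0.0, noise_sd, x.size), template))
            s, m = np.array(sig_scores), np.array(noise_scores)
            return (s.mean() - m.mean()) / np.sqrt(0.5 * (s.var() + m.var()))

        for width in (2, 10):                                   # thin vs. thick target
            cued   = dprime_with_jitter(width, jitter_px=0)     # marker removes uncertainty
            uncued = dprime_with_jitter(width, jitter_px=4)     # inherent location jitter
            print(width, round(cued, 2), round(uncued, 2))      # thin target loses far more d'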

  20. Do you see what I hear: experiments in multi-channel sound and 3D visualization for network monitoring?

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Hall, David L.

    2010-04-01

    Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or for monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server return codes. Users can interact with the data, speeding up or slowing down the rate of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
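
    The parsing-and-mapping step described above can be sketched briefly. The Python fragment below assumes Common Log Format input and uses invented mapping rules (the file names, pitch table and pan rule are placeholders): it extracts the IP address and return code from each hit and writes sonification parameters that a synthesis engine such as SuperCollider could then read.

        # Minimal sketch of Web-log parsing and mapping to sonification parameters.
        import csv
        import re

        LOG_RE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" '
                            r'(?P<status>\d{3}) (?P<size>\S+)')

        STATUS_PITCH = {"200": 440.0, "304": 392.0, "404": 660.0, "500": 880.0}  # assumed mapping

        def to_sound_params(line):
            m = LOG_RE.match(line)
            if not m:
                return None
            pitch = STATUS_PITCH.get(m["status"], 550.0)        # unusual codes stand out
            pan = int(m["ip"].split(".")[0]) / 255.0 * 2 - 1    # crude IPv4 -> stereo pan
            return {"ip": m["ip"], "status": m["status"], "pitch_hz": pitch, "pan": round(pan, 3)}

        with open("access.log") as src, open("sonification.csv", "w", newline="") as dst:
            writer = csv.DictWriter(dst, fieldnames=["ip", "status", "pitch_hz", "pan"])
            writer.writeheader()
            for params in filter(None, map(to_sound_params, src)):
                writer.writerow(params)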

  1. Fluorescence-guided surgery for cancer patients: a proof of concept study on human xenografts in mice and spontaneous tumors in pets

    PubMed Central

    Mery, Eliane; Golzio, Muriel; Guillermet, Stephanie; Lanore, Didier; Naour, Augustin Le; Thibault, Benoît; Tilkin-Mariamé, Anne Françoise; Bellard, Elizabeth; Delord, Jean Pierre; Querleu, Denis; Ferron, Gwenael; Couderc, Bettina

    2017-01-01

    Surgery is often the first treatment option for patients with cancer. Patient survival essentially depends on the completeness of tumor resection. This is a major challenge, particularly in cases of peritoneal carcinomatosis, where tumors are widely disseminated in the large peritoneal cavity. Any development to help surgeons visualize these residual cells would improve the completeness of the surgery. For non-disseminated tumors, imaging could be used to ensure that the tumor margins and the draining lymph nodes are free of tumor deposits. Near-infrared fluorescence imaging has been shown to be one of the most convenient imaging modalities. Our aim was to evaluate the efficacy of a near-infrared fluorescent probe targeting the αvβ3 integrins (Angiostamp™) for intraoperative detection of tumors using the Fluobeam® device. We determined whether different human tumor nodules from various origins could be detected in xenograft mouse models using both cancer cell lines and patient-derived tumor cells. We found that xenografts could be imaged by fluorescent staining irrespective of their integrin expression levels. This suggests imaging of the associated angiogenesis of the tumor and a broader potential utilization of Angiostamp™. We therefore performed a veterinary clinical trial in cats and dogs with local tumors or with spontaneous disseminated peritoneal carcinomatosis. Our results demonstrate that the probe can specifically visualize both breast and ovarian nodules, and suggest that Angiostamp™ is a powerful fluorescent contrast agent that could be used in both human and veterinary clinical trials for intraoperative detection of tumors. PMID:29312629

  2. In vivo detection of small tumour lesions by multi-pinhole SPECT applying a 99mTc-labelled nanobody targeting the Epidermal Growth Factor Receptor

    PubMed Central

    Krüwel, Thomas; Nevoltris, Damien; Bode, Julia; Dullin, Christian; Baty, Daniel; Chames, Patrick; Alves, Frauke

    2016-01-01

    The detection of tumours in an early phase of tumour development in combination with the knowledge of expression of tumour markers such as epidermal growth factor receptor (EGFR) is an important prerequisite for clinical decisions. In this study we applied the anti-EGFR nanobody 99mTc-D10 for visualizing small tumour lesions with volumes below 100 mm3 by targeting EGFR in orthotopic human mammary MDA-MB-468 and MDA-MB-231 and subcutaneous human epidermoid A431 carcinoma mouse models. Use of nanobody 99mTc-D10 of a size as small as 15.5 kDa enables detection of tumours by single photon emission computed tomography (SPECT) imaging already 45 min post intravenous administration with high tumour uptake (>3% ID/g) in small MDA-MB-468 and A431 tumours, with tumour volumes of 52.5 mm3 ± 21.2 and 26.6 mm3 ± 16.7, respectively. Fast blood clearance with a serum half-life of 4.9 min resulted in high in vivo contrast and ex vivo tumour to blood and tissue ratios. In contrast, no accumulation of 99mTc-D10 in MDA-MB-231 tumours characterized by a very low expression of EGFR was observed. Here we present specific and high contrast in vivo visualization of small human tumours overexpressing EGFR by preclinical multi-pinhole SPECT shortly after administration of anti-EGFR nanobody 99mTc-D10. PMID:26912069

  3. Dimension-based attention in visual short-term memory.

    PubMed

    Pilling, Michael; Barrett, Doug J K

    2016-07-01

    We investigated how dimension-based attention influences visual short-term memory (VSTM). This was done through examining the effects of cueing a feature dimension in two perceptual comparison tasks (change detection and sameness detection). In both tasks, a memory array and a test array consisting of a number of colored shapes were presented successively, interleaved by a blank interstimulus interval (ISI). In Experiment 1 (change detection), the critical event was a feature change in one item across the memory and test arrays. In Experiment 2 (sameness detection), the critical event was the absence of a feature change in one item across the two arrays. Auditory cues indicated the feature dimension (color or shape) of the critical event with 80 % validity; the cues were presented either prior to the memory array, during the ISI, or simultaneously with the test array. In Experiment 1, the cue validity influenced sensitivity only when the cue was given at the earliest position; in Experiment 2, the cue validity influenced sensitivity at all three cue positions. We attributed the greater effectiveness of top-down guidance by cues in the sameness detection task to the more active nature of the comparison process required to detect sameness events (Hyun, Woodman, Vogel, Hollingworth, & Luck, Journal of Experimental Psychology: Human Perception and Performance, 35; 1140-1160, 2009).

  4. Detection of immunocytological markers in photomicroscopic images

    NASA Astrophysics Data System (ADS)

    Friedrich, David; zur Jacobsmühlen, Joschka; Braunschweig, Till; Bell, André; Chaisaowong, Kraisorn; Knüchel-Clarke, Ruth; Aach, Til

    2012-03-01

    Early detection of cervical cancer can be achieved through visual analysis of cell anomalies. The established PAP smear achieves a sensitivity of 50-90%; most false-negative results are caused by mistakes in the preparation of the specimen or by reader variability in the subjective visual investigation. Since cervical cancer is caused by human papillomavirus (HPV), the detection of HPV-infected cells opens new perspectives for screening of precancerous abnormalities. Immunocytochemical preparation marks HPV-positive cells in brush smears of the cervix with high sensitivity and specificity. The goal of this work is the automated detection of all marker-positive cells in microscopic images of a sample slide stained with an immunocytochemical marker. A color separation technique is used to estimate the concentrations of the immunocytochemical marker stain as well as of the counterstain used to color the nuclei. Segmentation methods based on Otsu's threshold selection method and Mean Shift are adapted to the task of segmenting marker-positive cells and their nuclei. The best detection performance for single marker-positive cells was achieved with the adapted thresholding method, with a sensitivity of 95.9%. The contours differed by a modified Hausdorff Distance (MHD) of 2.8 μm. Nuclei of single marker-positive cells were detected with a sensitivity of 95.9% and MHD = 1.02 μm.
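
    A simplified version of the color-separation and thresholding steps can be sketched with scikit-image, assuming a DAB-like marker with a hematoxylin counterstain; the paper's adapted Otsu and Mean Shift procedures differ in detail.

        # Simplified sketch of the color-separation + thresholding idea using scikit-image.
        # It assumes a DAB-like marker with a hematoxylin counterstain; the paper uses its own
        # color-separation step and an adapted Otsu / Mean Shift scheme.
        import numpy as np
        from skimage import data, color, filters, morphology

        rgb = data.immunohistochemistry()          # example IHC image shipped with scikit-image
        hed = color.rgb2hed(rgb)                   # unmix hematoxylin / eosin / DAB channels
        marker = hed[:, :, 2]                      # DAB channel ~ marker-stain concentration
        nuclei = hed[:, :, 0]                      # hematoxylin channel ~ counterstained nuclei

        marker_mask = marker > filters.threshold_otsu(marker)
        marker_mask = morphology.remove_small_objects(marker_mask, min_size=64)
        nuclei_mask = nuclei > filters.threshold_otsu(nuclei)

        print("marker-positive pixels:", int(marker_mask.sum()),
              "nuclei pixels:", int(nuclei_mask.sum()))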

  5. Detection of Nonverbal Synchronization through Phase Difference in Human Communication

    PubMed Central

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Ono, Eisuke; Miyake, Yoshihiro

    2015-01-01

    Nonverbal communication is an important factor in human communication, and body movement synchronization in particular is an important part of nonverbal communication. Some researchers have analyzed body movement synchronization by focusing on changes in the amplitude of body movements. However, the definition of “body movement synchronization” is still unclear. From a theoretical viewpoint, phase difference is the most important factor in synchronization analysis. Therefore, there is a need to measure the synchronization of body movements using phase difference. The purpose of this study was to provide a quantitative definition of the phase difference distribution for detecting body movement synchronization in human communication. The phase difference distribution was characterized using four statistical measurements: density, mean phase difference, standard deviation (SD) and kurtosis. To confirm the effectiveness of our definition, we applied it to human communication in which the roles of speaker and listener were defined. Specifically, we examined the difference in the phase difference distribution between two different communication situations: face-to-face communication with visual interaction and remote communication with unidirectional visual perception. Participant pairs performed a lecture-style task in the face-to-face communication condition and in the remote communication condition via television. Throughout the lecture task, we extracted a set of phase differences from the time-series data of the acceleration norm of head nodding motions between two participants. Statistical analyses of the phase difference distribution revealed the characteristics of head nodding synchronization. Although the mean phase differences in synchronized head nods did not differ significantly between the conditions, there were significant differences in the densities, the SDs and the kurtoses of the phase difference distributions of synchronized head nods. These results show the difference in nonverbal synchronization between different communication types. Our study indicates that the phase difference distribution is useful in detecting nonverbal synchronization in various human communication situations. PMID:26208100
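
    For intuition, the four summary measurements can be computed on phase differences extracted from two movement signals, as sketched below; the use of the analytic (Hilbert) signal and the synchrony criterion are assumptions rather than the paper's exact pipeline.

        # Sketch of the four summary statistics (density, mean phase difference, SD, kurtosis)
        # computed on phase differences between two movement signals. Phase extraction via the
        # analytic (Hilbert) signal and the synchrony window are illustrative assumptions.
        import numpy as np
        from scipy.signal import hilbert
        from scipy.stats import kurtosis

        def phase_difference_stats(x, y):
            phi_x = np.angle(hilbert(x - x.mean()))
            phi_y = np.angle(hilbert(y - y.mean()))
            dphi = np.angle(np.exp(1j * (phi_x - phi_y)))        # wrap to (-pi, pi]
            synchronized = np.abs(dphi) < np.pi / 4              # illustrative synchrony criterion
            d = dphi[synchronized]
            return {"density": float(synchronized.mean()),       # share of synchronized samples
                    "mean_phase_diff": float(np.angle(np.mean(np.exp(1j * d)))),  # circular mean
                    "sd": float(np.std(d)),
                    "kurtosis": float(kurtosis(d))}

        t = np.linspace(0, 10, 2000)
        speaker = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
        listener = np.sin(2 * np.pi * 1.0 * t - 0.4) + 0.3 * np.random.randn(t.size)
        print(phase_difference_stats(speaker, listener))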

  6. Detection of Nonverbal Synchronization through Phase Difference in Human Communication.

    PubMed

    Kwon, Jinhwan; Ogawa, Ken-ichiro; Ono, Eisuke; Miyake, Yoshihiro

    2015-01-01

    Nonverbal communication is an important factor in human communication, and body movement synchronization in particular is an important part of nonverbal communication. Some researchers have analyzed body movement synchronization by focusing on changes in the amplitude of body movements. However, the definition of "body movement synchronization" is still unclear. From a theoretical viewpoint, phase difference is the most important factor in synchronization analysis. Therefore, there is a need to measure the synchronization of body movements using phase difference. The purpose of this study was to provide a quantitative definition of the phase difference distribution for detecting body movement synchronization in human communication. The phase difference distribution was characterized using four statistical measurements: density, mean phase difference, standard deviation (SD) and kurtosis. To confirm the effectiveness of our definition, we applied it to human communication in which the roles of speaker and listener were defined. Specifically, we examined the difference in the phase difference distribution between two different communication situations: face-to-face communication with visual interaction and remote communication with unidirectional visual perception. Participant pairs performed a lecture-style task in the face-to-face communication condition and in the remote communication condition via television. Throughout the lecture task, we extracted a set of phase differences from the time-series data of the acceleration norm of head nodding motions between two participants. Statistical analyses of the phase difference distribution revealed the characteristics of head nodding synchronization. Although the mean phase differences in synchronized head nods did not differ significantly between the conditions, there were significant differences in the densities, the SDs and the kurtoses of the phase difference distributions of synchronized head nods. These results show the difference in nonverbal synchronization between different communication types. Our study indicates that the phase difference distribution is useful in detecting nonverbal synchronization in various human communication situations.

  7. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  8. A novel approach for automatic visualization and activation detection of evoked potentials induced by epidural spinal cord stimulation in individuals with spinal cord injury.

    PubMed

    Mesbah, Samineh; Angeli, Claudia A; Keynton, Robert S; El-Baz, Ayman; Harkema, Susan J

    2017-01-01

    Voluntary movements and the standing of spinal cord injured patients have been facilitated using lumbosacral spinal cord epidural stimulation (scES). Identifying the appropriate stimulation parameters (intensity, frequency and anode/cathode assignment) is an arduous task and requires extensive mapping of the spinal cord using evoked potentials. Effective visualization and detection of muscle evoked potentials induced by scES from the recorded electromyography (EMG) signals is critical to identify the optimal configurations and the effects of specific scES parameters on muscle activation. The purpose of this work was to develop a novel approach to automatically detect the occurrence of evoked potentials, quantify the attributes of the signal and visualize the effects across a high number of scES parameters. This new method is designed to automate the current process for performing this task, which has been accomplished manually by data analysts through observation of raw EMG signals, a process that is laborious and time-consuming as well as prone to human errors. The proposed method provides a fast and accurate five-step algorithms framework for activation detection and visualization of the results including: conversion of the EMG signal into its 2-D representation by overlaying the located signal building blocks; de-noising the 2-D image by applying the Generalized Gaussian Markov Random Field technique; detection of the occurrence of evoked potentials using a statistically optimal decision method through the comparison of the probability density functions of each segment to the background noise utilizing log-likelihood ratio; feature extraction of detected motor units such as peak-to-peak amplitude, latency, integrated EMG and Min-max time intervals; and finally visualization of the outputs as Colormap images. In comparing the automatic method vs. manual detection on 700 EMG signals from five individuals, the new approach decreased the processing time from several hours to less than 15 seconds for each set of data, and demonstrated an average accuracy of 98.28% based on the combined false positive and false negative error rates. The sensitivity of this method to the signal-to-noise ratio (SNR) was tested using simulated EMG signals and compared to two existing methods, where the novel technique showed much lower sensitivity to the SNR.
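
    The statistical detection step can be illustrated with a much-reduced sketch: each candidate EMG segment is compared with background noise through a Gaussian log-likelihood ratio, and simple features are extracted from detected responses. This is a stand-in under stated assumptions, not the authors' full five-step pipeline (2-D representation, GGMRF denoising, detection, feature extraction, colormap visualization).

        # Simplified sketch of log-likelihood-ratio detection of an evoked potential segment
        # against background noise under a Gaussian model, plus basic feature extraction.
        # Thresholds, signal shapes, and units are illustrative assumptions.
        import numpy as np

        def loglik_gauss(x, var):
            return -0.5 * np.sum(np.log(2 * np.pi * var) + x ** 2 / var)

        def detect_evoked(segment, noise, threshold=3.0):
            var0 = np.var(noise)                      # background-noise variance
            var1 = max(np.var(segment), var0)         # variance if a response is present
            llr = loglik_gauss(segment, var1) - loglik_gauss(segment, var0)
            features = {"peak_to_peak": float(segment.max() - segment.min()),
                        "latency_idx": int(np.argmax(np.abs(segment)))}
            return llr > threshold, llr, features

        rng = np.random.default_rng(1)
        noise = rng.normal(0, 5e-3, 2000)             # background noise, amplitudes in mV
        t = np.linspace(0, 0.05, 500)
        response = 0.2 * np.exp(-((t - 0.02) / 0.005) ** 2) * np.sin(2 * np.pi * 300 * t)
        print(detect_evoked(response + rng.normal(0, 5e-3, t.size), noise))
        print(detect_evoked(rng.normal(0, 5e-3, t.size), noise))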

  9. A novel approach for automatic visualization and activation detection of evoked potentials induced by epidural spinal cord stimulation in individuals with spinal cord injury

    PubMed Central

    Mesbah, Samineh; Angeli, Claudia A.; Keynton, Robert S.; Harkema, Susan J.

    2017-01-01

    Voluntary movements and the standing of spinal cord injured patients have been facilitated using lumbosacral spinal cord epidural stimulation (scES). Identifying the appropriate stimulation parameters (intensity, frequency and anode/cathode assignment) is an arduous task and requires extensive mapping of the spinal cord using evoked potentials. Effective visualization and detection of muscle evoked potentials induced by scES from the recorded electromyography (EMG) signals is critical to identify the optimal configurations and the effects of specific scES parameters on muscle activation. The purpose of this work was to develop a novel approach to automatically detect the occurrence of evoked potentials, quantify the attributes of the signal and visualize the effects across a high number of scES parameters. This new method is designed to automate the current process for performing this task, which has been accomplished manually by data analysts through observation of raw EMG signals, a process that is laborious and time-consuming as well as prone to human errors. The proposed method provides a fast and accurate five-step algorithms framework for activation detection and visualization of the results including: conversion of the EMG signal into its 2-D representation by overlaying the located signal building blocks; de-noising the 2-D image by applying the Generalized Gaussian Markov Random Field technique; detection of the occurrence of evoked potentials using a statistically optimal decision method through the comparison of the probability density functions of each segment to the background noise utilizing log-likelihood ratio; feature extraction of detected motor units such as peak-to-peak amplitude, latency, integrated EMG and Min-max time intervals; and finally visualization of the outputs as Colormap images. In comparing the automatic method vs. manual detection on 700 EMG signals from five individuals, the new approach decreased the processing time from several hours to less than 15 seconds for each set of data, and demonstrated an average accuracy of 98.28% based on the combined false positive and false negative error rates. The sensitivity of this method to the signal-to-noise ratio (SNR) was tested using simulated EMG signals and compared to two existing methods, where the novel technique showed much lower sensitivity to the SNR. PMID:29020054

  10. Optical hiding with visual cryptography

    NASA Astrophysics Data System (ADS)

    Shi, Yishi; Yang, Xiubo

    2017-11-01

    We propose an optical hiding method based on visual cryptography. In the hiding process, the secret information is converted into a set of fabricated phase-keys that are completely independent of one another, proof against intensity detection, and covered by images, which provides high security. During extraction, the covered phase-keys are illuminated with laser beams and incoherently superimposed, so that the hidden information is recovered directly by human vision without complicated optical implementations or additional computation, making extraction convenient. The phase-keys are manufactured as diffractive optical elements that are robust to attacks such as blocking and phase noise. Optical experiments verify that high security, easy extraction, and strong robustness are all achievable with this visual-cryptography-based optical hiding.
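
    The paper realizes the shares optically as independent phase-keys; purely for intuition about why simple superposition reveals the secret, the classic digital 2-out-of-2 visual secret-sharing scheme can be sketched as follows.

        # Digital sketch of classic 2-out-of-2 visual secret sharing, for intuition only:
        # each secret pixel becomes identical (white) or complementary (black) subpixel pairs
        # in the two shares, so superposition alone reveals the secret. The paper's shares are
        # optical phase-keys, not this binary pixel scheme.
        import numpy as np

        rng = np.random.default_rng(0)
        PATTERNS = np.array([[1, 0], [0, 1]])       # the two possible subpixel pairs (1 = opaque)

        def make_shares(secret):                    # secret: 2-D array of 0 (white) / 1 (black)
            h, w = secret.shape
            share1 = np.zeros((h, 2 * w), dtype=int)
            share2 = np.zeros((h, 2 * w), dtype=int)
            for i in range(h):
                for j in range(w):
                    p = PATTERNS[rng.integers(2)]
                    share1[i, 2 * j:2 * j + 2] = p
                    # identical pair for white, complementary pair for black
                    share2[i, 2 * j:2 * j + 2] = p if secret[i, j] == 0 else 1 - p
            return share1, share2

        secret = np.array([[0, 1, 1, 0],
                           [1, 0, 0, 1]])
        s1, s2 = make_shares(secret)
        stacked = np.maximum(s1, s2)                # optical superposition (opaque wins)
        print(stacked)                              # black secret pixels become fully opaque pairs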

  11. Eye-movements and Voice as Interface Modalities to Computer Systems

    NASA Astrophysics Data System (ADS)

    Farid, Mohsen M.; Murtagh, Fionn D.

    2003-03-01

    We investigate the visual and vocal modalities of interaction with computer systems. We focus our attention on the integration of visual and vocal interface as possible replacement and/or additional modalities to enhance human-computer interaction. We present a new framework for employing eye gaze as a modality of interface. While voice commands, as means of interaction with computers, have been around for a number of years, integration of both the vocal interface and the visual interface, in terms of detecting user's eye movements through an eye-tracking device, is novel and promises to open the horizons for new applications where a hand-mouse interface provides little or no apparent support to the task to be accomplished. We present an array of applications to illustrate the new framework and eye-voice integration.

  12. Hybridization-based detection of Helicobacter pylori at human body temperature using advanced locked nucleic acid (LNA) probes.

    PubMed

    Fontenete, Sílvia; Guimarães, Nuno; Leite, Marina; Figueiredo, Céu; Wengel, Jesper; Filipe Azevedo, Nuno

    2013-01-01

    The understanding of the human microbiome and its influence upon human life has long been a subject of study. Hence, methods that allow the direct detection and visualization of microorganisms and microbial consortia (e.g. biofilms) within the human body would be invaluable. In here, we assessed the possibility of developing a variant of fluorescence in situ hybridization (FISH), named fluorescence in vivo hybridization (FIVH), for the detection of Helicobacter pylori. Using oligonucleotide variations comprising locked nucleic acids (LNA) and 2'-O-methyl RNAs (2'OMe) with two types of backbone linkages (phosphate or phosphorothioate), we were able to successfully identify two probes that hybridize at 37 °C with high specificity and sensitivity for H. pylori, both in pure cultures and in gastric biopsies. Furthermore, the use of this type of probes implied that toxic compounds typically used in FISH were either found to be unnecessary or could be replaced by a non-toxic substitute. We show here for the first time that the use of advanced LNA probes in FIVH conditions provides an accurate, simple and fast method for H. pylori detection and location, which could be used in the future for potential in vivo applications either for this microorganism or for others.

  13. Hybridization-Based Detection of Helicobacter pylori at Human Body Temperature Using Advanced Locked Nucleic Acid (LNA) Probes

    PubMed Central

    Fontenete, Sílvia; Guimarães, Nuno; Leite, Marina; Figueiredo, Céu; Wengel, Jesper; Filipe Azevedo, Nuno

    2013-01-01

    The understanding of the human microbiome and its influence upon human life has long been a subject of study. Hence, methods that allow the direct detection and visualization of microorganisms and microbial consortia (e.g. biofilms) within the human body would be invaluable. In here, we assessed the possibility of developing a variant of fluorescence in situ hybridization (FISH), named fluorescence in vivo hybridization (FIVH), for the detection of Helicobacter pylori. Using oligonucleotide variations comprising locked nucleic acids (LNA) and 2’-O-methyl RNAs (2’OMe) with two types of backbone linkages (phosphate or phosphorothioate), we were able to successfully identify two probes that hybridize at 37 °C with high specificity and sensitivity for H. pylori, both in pure cultures and in gastric biopsies. Furthermore, the use of this type of probes implied that toxic compounds typically used in FISH were either found to be unnecessary or could be replaced by a non-toxic substitute. We show here for the first time that the use of advanced LNA probes in FIVH conditions provides an accurate, simple and fast method for H. pylori detection and location, which could be used in the future for potential in vivo applications either for this microorganism or for others. PMID:24278398

  14. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, and thus both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying videos from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method was evaluated in a given scenario and demonstrated its effectiveness with respect to other methods and manually generated ground truth. Effectiveness was measured as the number of correct best views generated by the method relative to the camera views manually selected by a human operator.

  15. Low-dose intrathecal fluorescein for diagnosis of cerebrospinal fluid rhinorrhea using the scanning fiber endoscope in the human nasal cavities

    NASA Astrophysics Data System (ADS)

    Hou, Vivian W.; Davis, Calvin G.; Davis, Greg E.; Seibel, Eric J.

    2016-03-01

    Intrathecal fluorescein (ITF) enhances detection of cerebrospinal fluid rhinorrhea (CSFR). Clinically administered doses fall in the range of 0.1 ml to 0.5 ml of 5% to 10% fluorescein (1.3×10⁻³ M to 1.3×10⁻² M). Though uncommon, significant morbidities associated with high doses of fluorescein have been reported. High concentrations are necessary for white-light visual assessment; in contrast, fluorescence imaging enhances signal contrast and requires lower ITF concentrations for visualization. The ultrathin, flexible, multimodal scanning fiber endoscope (SFE) can visualize nanomolar concentrations of fluorescein as pseudocolor over reflectance, video-rate imaging. The application of the SFE for CSFR detection was assessed in a cadaver study. Briefly, 10 μM (1×10⁻⁵ M) fluorescein, 100 to 1000 times less than the standard clinical dose, was injected intracranially into the epidural space through an orbital roof puncture. The resulting rhinorrhea was assessed first with a conventional rigid ENT scope and then with the SFE in both video reflectance and multimodal fluorescence imaging modes. Neither system could visualize the 10 μM ITF during white-light imaging; however, the nanomolar-sensitive SFE visualized the rhinorrhea during fluorescence imaging. Despite the low concentration used, a target-to-background ratio of 5.6 ± 2.7 was achieved. To demonstrate SFE guidance of CSFR detection and repair, de-identified patient computed tomography (CT) scans were used to generate 3D-printed phantoms. Cases were selected for unique anatomical features and overall clinical difficulty as determined by an experienced ENT clinician (GED). The sensitivity and minimally invasive nature of the SFE provide a unique platform for enhancing diagnosis and monitoring interventions in surgical endoscopic approaches into the sinuses.

  16. Brain network involved in visual processing of movement stimuli used in upper limb robotic training: an fMRI study.

    PubMed

    Nocchi, Federico; Gazzellini, Simone; Grisolia, Carmela; Petrarca, Maurizio; Cannatà, Vittorio; Cappa, Paolo; D'Alessio, Tommaso; Castelli, Enrico

    2012-07-24

    The potential of robot-mediated therapy and virtual reality in neurorehabilitation is becoming of increasing importance. However, there is limited information, using neuroimaging, on the neural networks involved in training with these technologies. This study was intended to detect the brain network involved in the visual processing of movement during robotic training. The main aim was to investigate the existence of a common cerebral network able to assimilate biological (human upper limb) and non-biological (abstract object) movements, hence testing the suitability of the visual non-biological feedback provided by the InMotion2 Robot. A visual functional Magnetic Resonance Imaging (fMRI) task was administered to 22 healthy subjects. The task required observation and retrieval of motor gestures and of the visual feedback used in robotic training. Functional activations of both biological and non-biological movements were examined to identify areas activated in both conditions, along with differential activity in upper limb vs. abstract object trials. Control of response was also tested by administering trials with congruent and incongruent reaching movements. The observation of upper limb and abstract object movements elicited similar patterns of activations according to a caudo-rostral pathway for the visual processing of movements (including specific areas of the occipital, temporal, parietal, and frontal lobes). Similarly, overlapping activations were found for the subsequent retrieval of the observed movement. Furthermore, activations of frontal cortical areas were associated with congruent trials more than with the incongruent ones. This study identified the neural pathway associated with visual processing of movement stimuli used in upper limb robot-mediated training and investigated the brain's ability to assimilate abstract object movements with human motor gestures. In both conditions, activations were elicited in cerebral areas involved in visual perception, sensory integration, recognition of movement, re-mapping on the somatosensory and motor cortex, storage in memory, and response control. Results from the congruent vs. incongruent trials revealed greater activity for the former condition than the latter in a network including cingulate cortex, right inferior and middle frontal gyrus that are involved in the go-signal and in decision control. Results on healthy subjects would suggest the appropriateness of an abstract visual feedback provided during motor training. The task contributes to highlight the potential of fMRI in improving the understanding of visual motor processes and may also be useful in detecting brain reorganisation during training.

  17. A Study on Analysis of EEG Caused by Grating Stimulation Imaging

    NASA Astrophysics Data System (ADS)

    Urakawa, Hiroshi; Nishimura, Toshihiro; Tsubai, Masayoshi; Itoh, Kenji

    Visual perception has recently attracted considerable research attention, and grating stimulation images are a common tool for studying it. Previous research has suggested that a subset of retinal ganglion cells responds to motion in the receptive-field center, but only if the wider surround moves with a different trajectory. We discuss the function of the human retina and measure and analyze the EEG (electroencephalogram) of a normal subject viewing grating stimulation images. The subject's visual perception was confirmed by EEG signal analysis. We also found that, when a sinusoidal grating stimulus was presented, the α-wave component of the EEG was asymmetric between symmetric sites of the left and right hemispheres of the brain. This suggests that the images projected onto the retinas of the right and left eyes are equivalent when a still picture is viewed, but not for a dynamic scene. The effect was evaluated by taking the envelope of the detected α wave and computing its mean and standard deviation.
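
    The envelope analysis mentioned at the end can be sketched as band-passing the EEG to the alpha band, taking the envelope of the analytic signal, and summarizing it by mean and standard deviation; the filter settings and synthetic signals below are illustrative assumptions.

        # Sketch of alpha-band envelope statistics for left- vs right-hemisphere channels:
        # band-pass to 8-13 Hz, take the Hilbert envelope, report mean and SD.
        # Filter design and the synthetic signals are illustrative only.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 250.0
        b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")   # 8-13 Hz alpha band

        def alpha_envelope_stats(eeg):
            alpha = filtfilt(b, a, eeg)
            env = np.abs(hilbert(alpha))
            return float(env.mean()), float(env.std())

        t = np.arange(0, 10, 1 / fs)
        left = 2.0 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
        right = 1.2 * np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
        print("left  mean/SD:", alpha_envelope_stats(left))
        print("right mean/SD:", alpha_envelope_stats(right))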

  18. Perceptual Contrast Enhancement with Dynamic Range Adjustment

    PubMed Central

    Zhang, Hong; Li, Yuecheng; Chen, Hao; Yuan, Ding; Sun, Mingui

    2013-01-01

    Recent years, although great efforts have been made to improve its performance, few Histogram equalization (HE) methods take human visual perception (HVP) into account explicitly. The human visual system (HVS) is more sensitive to edges than brightness. This paper proposes to take use of this nature intuitively and develops a perceptual contrast enhancement approach with dynamic range adjustment through histogram modification. The use of perceptual contrast connects the image enhancement problem with the HVS. To pre-condition the input image before the HE procedure is implemented, a perceptual contrast map (PCM) is constructed based on the modified Difference of Gaussian (DOG) algorithm. As a result, the contrast of the image is sharpened and high frequency noise is suppressed. A modified Clipped Histogram Equalization (CHE) is also developed which improves visual quality by automatically detecting the dynamic range of the image with improved perceptual contrast. Experimental results show that the new HE algorithm outperforms several state-of-the-art algorithms in improving perceptual contrast and enhancing details. In addition, the new algorithm is simple to implement, making it suitable for real-time applications. PMID:24339452
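
    The two ingredients described above can be sketched roughly as a Difference-of-Gaussian contrast map followed by a clipped histogram equalization; the parameter values and the way the map conditions the input are assumptions, and the paper's modified DOG and CHE differ in detail.

        # Rough sketch of a DoG-based perceptual contrast map and a clipped histogram
        # equalization. Parameters and the conditioning step are assumptions, not the paper's.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def perceptual_contrast_map(img, sigma_c=1.0, sigma_s=3.0):
            center = gaussian_filter(img.astype(float), sigma_c)
            surround = gaussian_filter(img.astype(float), sigma_s)
            return np.abs(center - surround)                    # edge/contrast emphasis

        def clipped_hist_equalize(img, clip_fraction=0.02, n_bins=256):
            hist, edges = np.histogram(img.ravel(), bins=n_bins, range=(0, 255))
            clip = clip_fraction * img.size
            excess = np.maximum(hist - clip, 0).sum()
            hist = np.minimum(hist, clip) + excess / n_bins     # clip peaks, redistribute
            cdf = np.cumsum(hist) / hist.sum()
            return np.interp(img.ravel(), edges[:-1], 255 * cdf).reshape(img.shape)

        img = (np.random.rand(128, 128) * 60 + 80).astype(np.uint8)   # low-contrast test image
        pcm = perceptual_contrast_map(img)
        enhanced = clipped_hist_equalize(img + 0.5 * pcm)             # sharpen, then equalize
        print(enhanced.min(), enhanced.max())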

  19. [Accuracy of computer aided measurement for detecting dental proximal caries lesions in images of cone-beam computed tomography].

    PubMed

    Zhang, Z L; Li, J P; Li, G; Ma, X C

    2017-02-09

    Objective: To establish and validate a computer program to aid the detection of dental proximal caries in cone-beam computed tomography (CBCT) images. Methods: According to the characteristics of caries lesions in X-ray images, a computer-aided detection program for proximal caries was established with Matlab and Visual C++. The whole process for caries lesion detection included image import and preprocessing, measuring the average gray value of an air region, choosing a region of interest and calculating its gray value, and defining the caries areas. The program was used to examine 90 proximal surfaces from 45 extracted human teeth collected from Peking University School and Hospital of Stomatology. The teeth were then scanned with a CBCT scanner (Promax 3D). The proximal surfaces of the teeth were assessed both by the caries detection program and by a human observer, who scored the extent of lesions on a 6-level scale. With histologic examination serving as the reference standard, the performances of the caries detection program and the human observer were assessed with receiver operating characteristic (ROC) curves. Student's t-test was used to analyze the areas under the ROC curves (AUC) for differences between the caries detection program and the human observer. The Spearman correlation coefficient was used to analyze the detection accuracy for caries depth. Results: For the diagnosis of proximal caries in CBCT images, the AUC values of the human observer and the caries detection program were 0.632 and 0.703, respectively, a statistically significant difference (P=0.023). The correlation between program performance and the gold standard (r_s=0.525) was higher than that between observer performance and the gold standard (r_s=0.457), and the difference between the correlation coefficients was statistically significant (P<0.001). Conclusions: The program that automatically detects dental proximal caries lesions could improve the diagnostic value of CBCT images.
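
    The evaluation described here amounts to ROC/AUC analysis against the histologic gold standard plus a Spearman correlation for lesion depth, which can be sketched as follows with placeholder data (not the study's measurements).

        # Sketch of the ROC/AUC and Spearman analyses with made-up placeholder arrays.
        import numpy as np
        from sklearn.metrics import roc_auc_score
        from scipy.stats import spearmanr

        histology_binary = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 0])   # lesion present (gold standard)
        program_score   = np.array([0.1, 0.3, 0.7, 0.6, 0.2, 0.9, 0.5, 0.4, 0.8, 0.2])
        observer_score  = np.array([1, 2, 4, 3, 1, 5, 2, 3, 4, 2])    # 6-level observer rating

        print("program AUC :", roc_auc_score(histology_binary, program_score))
        print("observer AUC:", roc_auc_score(histology_binary, observer_score))

        histology_depth = np.array([0, 0, 2, 3, 0, 4, 1, 0, 3, 0])    # lesion depth grade
        rho, p = spearmanr(program_score, histology_depth)
        print("Spearman r_s = %.3f (p = %.3f)" % (rho, p))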

  20. How learning might strengthen existing visual object representations in human object-selective cortex.

    PubMed

    Brants, Marijke; Bulthé, Jessica; Daniels, Nicky; Wagemans, Johan; Op de Beeck, Hans P

    2016-02-15

    Visual object perception is an important function in primates which can be fine-tuned by experience, even in adults. Which factors determine the regions and the neurons that are modified by learning is still unclear. Recently, it was proposed that the exact cortical focus and distribution of learning effects might depend upon the pre-learning mapping of relevant functional properties and how this mapping determines the informativeness of neural units for the stimuli and the task to be learned. From this hypothesis we would expect that visual experience would strengthen the pre-learning distributed functional map of the relevant distinctive object properties. Here we present a first test of this prediction in twelve human subjects who were trained in object categorization and differentiation, preceded and followed by a functional magnetic resonance imaging session. Specifically, training increased the distributed multi-voxel pattern information for trained object distinctions in object-selective cortex, resulting in a generalization from pre-training multi-voxel activity patterns to after-training activity patterns. Simulations show that the increased selectivity combined with the inter-session generalization is consistent with a training-induced strengthening of a pre-existing selectivity map. No training-related neural changes were detected in other regions. In sum, training to categorize or individuate objects strengthened pre-existing representations in human object-selective cortex, providing a first indication that the neuroanatomical distribution of learning effects depends upon the pre-learning mapping of visual object properties. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Three-dimensional multispectral optoacoustic mesoscopy reveals melanin and blood oxygenation in human skin in vivo.

    PubMed

    Schwarz, Mathias; Buehler, Andreas; Aguirre, Juan; Ntziachristos, Vasilis

    2016-01-01

    Optical imaging plays a major role in disease detection in dermatology. However, current optical methods are limited by lack of three-dimensional detection of pathophysiological parameters within skin. It was recently shown that single-wavelength optoacoustic (photoacoustic) mesoscopy resolves skin morphology, i.e. melanin and blood vessels within epidermis and dermis. In this work we employed illumination at multiple wavelengths for enabling three-dimensional multispectral optoacoustic mesoscopy (MSOM) of natural chromophores in human skin in vivo operating at 15-125 MHz. We employ a per-pulse tunable laser to inherently co-register spectral datasets, and reveal previously undisclosed insights of melanin, and blood oxygenation in human skin. We further reveal broadband absorption spectra of specific skin compartments. We discuss the potential of MSOM for label-free visualization of physiological biomarkers in skin in vivo. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. SU-G-IeP4-09: Method of Human Eye Aberration Measurement Using Plenoptic Camera Over Large Field of View

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lv, Yang; Wang, Ruixing; Ma, Haotong

    Purpose: Measurement based on the Shack-Hartmann wave-front sensor (WFS), which obtains both high- and low-order wave-front aberrations simultaneously and accurately, has been applied to the detection of human eye aberrations in recent years. However, its application is limited by a small field of view (FOV): slight eye movement causes the optical beacon image to exceed the lenslet array, resulting in unpredictable detection error. To overcome the difficulty of precise eye location, the capacity to detect eye wave-front aberration accurately and simultaneously over a FOV much larger than that of a single-conjugate Hartmann WFS is needed. Methods: The plenoptic camera's lenslet array subdivides the aperture light field in the spatial-frequency domain and captures the 4-D light-field information. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with the eye's aberration. A corresponding theoretical model and simulation system are built up in this article to assess wave-front measurement performance when a plenoptic camera is used as the wave-front sensor. Results: The simulations indicate that the plenoptic wave-front method can obtain both the high- and low-order eye wave-front aberrations with the same accuracy as a conventional system in single-visual-angle detection, and over a FOV much larger than that of single-conjugate Hartmann systems. The simulations also show that wave-front detection of eye aberrations at different visual angles can be achieved effectively and simultaneously by the plenoptic method, using either a point or an extended optical beacon from the eye. Conclusion: The plenoptic wave-front method is feasible for detection of eye wave-front aberrations. With its larger FOV, the method can effectively reduce the detection error caused by imprecise eye location and simplify the detection system compared with one based on a Shack-Hartmann WFS. A unique advantage of the plenoptic method is that it obtains wave-fronts at different visual angles simultaneously, providing an approach to building a 3-D tomographic model of the eye's refractive system. Funded by the Key Laboratory of High Power Laser and Physics, CAS; Research Project of National University of Defense Technology No. JC13-07-01; National Natural Science Foundation of China No. 61205144.

  3. Cardiac-induced localized thoracic motion detected by a fiber optic sensing scheme

    NASA Astrophysics Data System (ADS)

    Allsop, Thomas; Lloyd, Glynn; Bhamber, Ranjeet S.; Hadzievski, Ljupco; Halliday, Michael; Webb, David J.; Bennion, Ian

    2014-11-01

    The cardiovascular health of the human population is a major concern for medical clinicians, with cardiovascular diseases responsible for 48% of all deaths worldwide, according to the World Health Organization. The development of new diagnostic tools that are practicable and economical to scrutinize the cardiovascular health of humans is a major driver for clinicians. We offer a new technique to obtain seismocardiographic signals up to 54 Hz covering both ballistocardiography (below 20 Hz) and audible heart sounds (20 Hz upward), using a system based on curvature sensors formed from fiber optic long period gratings. This system can visualize the real-time three-dimensional (3-D) mechanical motion of the heart by using the data from the sensing array in conjunction with a bespoke 3-D shape reconstruction algorithm. Visualization is demonstrated by adhering three to four sensors on the outside of the thorax and in close proximity to the apex of the heart; the sensing scheme revealed a complex motion of the heart wall next to the apex region of the heart. The detection scheme is low-cost, portable, easily operated and has the potential for ambulatory applications.

  4. Dynamic optical projection of acquired luminescence for aiding oncologic surgery

    NASA Astrophysics Data System (ADS)

    Sarder, Pinaki; Gullicksrud, Kyle; Mondal, Suman; Sudlow, Gail P.; Achilefu, Samuel; Akers, Walter J.

    2013-12-01

    Optical imaging enables real-time visualization of intrinsic and exogenous contrast within biological tissues. Applications in human medicine have demonstrated the power of fluorescence imaging to enhance visualization in dermatology, endoscopic procedures, and open surgery. Although few optical contrast agents are available for human medicine at this time, fluorescence imaging is proving to be a powerful tool in guiding medical procedures. Recently, intraoperative detection of fluorescent molecular probes that target cell-surface receptors has been reported for improvement in oncologic surgery in humans. We have developed a novel system, optical projection of acquired luminescence (OPAL), to further enhance real-time guidance of open oncologic surgery. In this method, collected fluorescence intensity maps are projected onto the imaged surface rather than via wall-mounted display monitor. To demonstrate proof-of-principle for OPAL applications in oncologic surgery, lymphatic transport of indocyanine green was visualized in live mice for intraoperative identification of sentinel lymph nodes. Subsequently, peritoneal tumors in a murine model of breast cancer metastasis were identified using OPAL after systemic administration of a tumor-selective fluorescent molecular probe. These initial results clearly show that OPAL can enhance adoption and ease-of-use of fluorescence imaging in oncologic procedures relative to existing state-of-the-art intraoperative imaging systems.

  5. Topographic contribution of early visual cortex to short-term memory consolidation: a transcranial magnetic stimulation study.

    PubMed

    van de Ven, Vincent; Jacobs, Christianne; Sack, Alexander T

    2012-01-04

    The neural correlates for retention of visual information in visual short-term memory are considered separate from those of sensory encoding. However, recent findings suggest that sensory areas may play a role also in short-term memory. We investigated the functional relevance, spatial specificity, and temporal characteristics of human early visual cortex in the consolidation of capacity-limited topographic visual memory using transcranial magnetic stimulation (TMS). Topographically specific TMS pulses were delivered over lateralized occipital cortex at 100, 200, or 400 ms into the retention phase of a modified change detection task with low or high memory loads. For the high but not the low memory load, we found decreased memory performance for memory trials in the visual field contralateral, but not ipsilateral to the side of TMS, when pulses were delivered at 200 ms into the retention interval. A behavioral version of the TMS experiment, in which a distractor stimulus (memory mask) replaced the TMS pulses, further corroborated these findings. Our findings suggest that retinotopic visual cortex contributes to the short-term consolidation of topographic visual memory during early stages of the retention of visual information. Further, TMS-induced interference decreased the strength (amplitude) of the memory representation, which most strongly affected the high memory load trials.

  6. Physiological modeling for detecting degree of perception of a color-deficient person.

    PubMed

    Rajalakshmi, T; Prince, Shanthi

    2017-04-01

    Physiological modeling of the retina plays a vital role in the development of high-performance image processing methods that produce better visual perception. People with normal vision can discern different colors; the situation is different for people with color blindness. The aim of this work is to develop a human visual system model for detecting the level of perception of people with red, green, and blue deficiency by considering properties such as luminance and spatial and temporal frequencies. Simulation results show that in the photoreceptor, outer plexiform, and inner plexiform layers, the energy and intensity levels of the red, green, and blue components are significantly higher for a person with normal vision than for dichromats. The results indicate that people with red or blue color blindness cannot fully perceive red or blue, respectively.

  7. Perceptual compression of magnitude-detected synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Gorman, John D.; Werness, Susan A.

    1994-01-01

    A perceptually-based approach for compressing synthetic aperture radar (SAR) imagery is presented. Key components of the approach are a multiresolution wavelet transform, a bit allocation mask based on an empirical human visual system (HVS) model, and hybrid scalar/vector quantization. Specifically, wavelet shrinkage techniques are used to segregate wavelet transform coefficients into three components: local means, edges, and texture. Each of these three components is then quantized separately according to a perceptually-based bit allocation scheme. Wavelet coefficients associated with local means and edges are quantized using high-rate scalar quantization while texture information is quantized using low-rate vector quantization. The impact of the perceptually-based multiresolution compression algorithm on visual image quality, impulse response, and texture properties is assessed for fine-resolution magnitude-detected SAR imagery; excellent image quality is found at bit rates at or above 1 bpp along with graceful performance degradation at rates below 1 bpp.
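
    The segregation of wavelet coefficients into local means, edges, and texture can be sketched with PyWavelets as below; the shrinkage threshold and the omission of the quantization stage are assumptions.

        # Sketch of the coefficient-segregation idea using PyWavelets: a multiresolution wavelet
        # transform, with soft segregation of "edge" coefficients (large magnitude) from
        # "texture" (small magnitude); the approximation band plays the role of local means.
        # The threshold rule and omission of quantization are assumptions.
        import numpy as np
        import pywt

        img = np.random.rand(128, 128)                      # placeholder for a SAR magnitude image
        coeffs = pywt.wavedec2(img, "db4", level=3)
        approx, details = coeffs[0], coeffs[1:]             # local means vs. detail subbands

        edges, texture = [], []
        for level in details:
            for band in level:                               # (horizontal, vertical, diagonal)
                thr = 3.0 * np.median(np.abs(band)) / 0.6745 # robust noise-scale threshold (assumed)
                edges.append(np.where(np.abs(band) > thr, band, 0.0))      # keep large coefficients
                texture.append(np.where(np.abs(band) <= thr, band, 0.0))   # small ones -> texture

        print("local-mean (approximation) energy:", float((approx ** 2).sum()))
        print("edge-coefficient energy   :", sum(float((e ** 2).sum()) for e in edges))
        print("texture-coefficient energy:", sum(float((t ** 2).sum()) for t in texture))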

  8. Multiple groups of orientation-selective visual mechanisms underlying rapid orientated-line detection.

    PubMed Central

    Foster, D H; Westland, S

    1998-01-01

    Visual search for an edge or line element differing in orientation from a background of other edge or line elements can be performed rapidly and effortlessly. In this study, based on psychophysical measurements with ten human observers, threshold values of the angle between a target and background line elements were obtained as functions of background-element orientation, in brief masked displays. A repeated-loess analysis of the threshold functions suggested the existence of several groups of orientation-selective mechanisms contributing to rapid orientated-line detection; specifically, coarse, intermediate and fine mechanisms with preferred orientations spaced at angles of approximately 90 degrees, 35 degrees, and 10 degrees-25 degrees, respectively. The preferred orientations of coarse and some intermediate mechanisms coincided with the vertical or horizontal of the frontoparallel plane, but the preferred orientations of fine mechanisms varied randomly from observer to observer, possibly reflecting individual variations in neuronal sampling characteristics. PMID:9753784
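
    A loess-style smoothing of a threshold-versus-orientation function, in the spirit of the repeated-loess analysis mentioned above, can be sketched with synthetic data (the real analysis and data differ).

        # Lowess smoothing of a synthetic orientation-increment-threshold function.
        # The synthetic thresholds (lower near 0 and 90 deg) stand in for measured data.
        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        bg_orientation = np.arange(0, 180, 5.0)              # background-element orientation (deg)
        true_threshold = 12 - 4 * np.cos(np.deg2rad(4 * bg_orientation))   # lower near 0 and 90 deg
        thresholds = true_threshold + np.random.randn(bg_orientation.size)

        smoothed = lowess(thresholds, bg_orientation, frac=0.25, return_sorted=True)
        for x, y in smoothed[::6]:
            print("orientation %5.1f deg  threshold %4.1f deg" % (x, y))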

  9. Method for enhancing single-trial P300 detection by introducing the complexity degree of image information in rapid serial visual presentation tasks

    PubMed Central

    Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi

    2017-01-01

    The use of electroencephalogram (EEG) signals generated while humans view images is a new thrust in image retrieval technology. A P300 component is induced in the EEG when subjects see a target image containing their point of interest under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary with experimental parameters such as target probability and stimulus semantics. We therefore proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection beyond traditional single-trial P300 detection algorithms. We defined an image complexity parameter based on features from the different layers of a convolutional neural network (CNN), used it to quantify the effect of images of different complexity on the P300 component, and trained specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm; TRICP performed significantly better (Wilcoxon signed-rank test, p<0.05). The proposed method can therefore be used in other visual task-related single-trial event-related potential detection applications. PMID:29283998
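
    As an illustration only, a per-image complexity score can reweight a single-trial interest score as sketched below; the paper derives complexity from CNN feature maps, whereas this stand-in uses a simple gradient-entropy proxy, and the reweighting rule is hypothetical.

        # Illustrative stand-in for an image-complexity prior: a gradient-entropy proxy replaces
        # the paper's CNN-layer-based measure, and the reweighting rule is hypothetical.
        import numpy as np

        def image_complexity(img, n_bins=32):
            gy, gx = np.gradient(img.astype(float))
            mag = np.hypot(gx, gy)
            hist, _ = np.histogram(mag, bins=n_bins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())       # entropy of gradient magnitudes

        def adjusted_interest_score(eeg_score, img, alpha=0.1):
            # down-weight the EEG-only score for highly complex images (hypothetical rule)
            return eeg_score / (1.0 + alpha * image_complexity(img))

        simple_img = np.zeros((64, 64)); simple_img[20:40, 20:40] = 1.0
        cluttered_img = np.random.rand(64, 64)
        print(adjusted_interest_score(0.8, simple_img), adjusted_interest_score(0.8, cluttered_img))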

  10. Light-weight analyzer for odor recognition

    DOEpatents

    Vass, Arpad A; Wise, Marcus B

    2014-05-20

    The invention provides a light weight analyzer, e.g., detector, capable of locating clandestine graves. The detector utilizes the very specific and unique chemicals identified in the database of human decompositional odor. This detector, based on specific chemical compounds found relevant to human decomposition, is the next step forward in clandestine grave detection and will take the guess-work out of current methods using canines and ground-penetrating radar, which have historically been unreliable. The detector is self contained, portable and built for field use. Both visual and auditory cues are provided to the operator.

  11. Scales drive detection, attention, and memory of snakes in wild vervet monkeys (Chlorocebus pygerythrus).

    PubMed

    Isbell, Lynne A; Etting, Stephanie F

    2017-01-01

    Predatory snakes are argued to have been largely responsible for the origin of primates via selection favoring expansion of the primate visual system, and even today snakes can be deadly to primates. Neurobiological research is now beginning to reveal the mechanisms underlying the ability of primates (including humans) to detect snakes more rapidly than other stimuli. However, the visual cues allowing rapid detection of snakes, and the cognitive and ecological conditions contributing to faster detection, are unclear. Since snakes are often partially obscured by vegetation, the more salient cues are predicted to occur in small units. Here we tested for the salience of snake scales as the smallest of potential visual cues by presenting four groups of wild vervet monkeys (Chlorocebus pygerythrus) with a gopher snake (Pituophis catenifer) skin occluded except for no more than 2.7 cm, in natural form and flat, the latter to control for even small curvilinear cues from their unusual body shape. Each of these treatments was preceded by a treatment without the snakeskin, the first to provide a baseline, and the second, to test for vigilance and memory recall after exposure to the snakeskin. We found that (1) vervets needed only a small portion of snakeskin for detection, (2) snake scales alone were sufficient for detection, (3) latency to detect the snakeskin was longer with more extensive and complex ground cover, and (4) vervets that were exposed to the snakeskin remembered where they last saw "snakes", as indicated by increased wariness near the occluding landmarks in the absence of the snakeskin and more rapid detection of the next presented snakeskin. Unexpectedly, adult males did not detect the snakeskin as well as adult females and juveniles. These findings extend our knowledge of the complex ecological and evolutionary relationships between snakes and primates.

  12. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    PubMed

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on the effect of safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe the performance of hazard detection rate with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; (c) utilized logistic regression to analyze the developed model under various visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent in high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
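
    The moderation question maps onto a logistic regression of hazard detection on working memory, visual clutter, and their interaction; the sketch below uses simulated placeholder data, not the study's observations, and the variable names are assumptions.

        # Sketch of a moderation analysis: logistic regression of hazard detection (hit = 1/0)
        # on working-memory score, visual clutter, and their interaction, on simulated data.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        n = 400
        wm = rng.normal(0, 1, n)                      # working-memory score (standardized)
        clutter = rng.normal(0, 1, n)                 # visual-clutter measure (standardized)
        logit_p = 0.2 + 0.6 * wm - 0.4 * clutter + 0.5 * wm * clutter  # stronger WM effect in clutter
        hit = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

        df = pd.DataFrame({"hit": hit, "wm": wm, "clutter": clutter})
        model = smf.logit("hit ~ wm * clutter", data=df).fit(disp=False)
        print(model.summary().tables[1])              # the wm:clutter term tests the moderation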

  13. Task-dependent enhancement of facial expression and identity representations in human cortex.

    PubMed

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  14. Frequency of gamma oscillations in humans is modulated by velocity of visual motion

    PubMed Central

    Butorina, Anna V.; Sysoeva, Olga V.; Prokofyev, Andrey O.; Nikolaeva, Anastasia Yu.; Stroganova, Tatiana A.

    2015-01-01

    Gamma oscillations are generated in networks of inhibitory fast-spiking (FS) parvalbumin-positive (PV) interneurons and pyramidal cells. In animals, gamma frequency is modulated by the velocity of visual motion; the effect of velocity has not been evaluated in humans. In this work, we have studied velocity-related modulations of gamma frequency in children using MEG/EEG. We also investigated whether such modulations predict the prominence of the “spatial suppression” effect (Tadin D, Lappin JS, Gilroy LA, Blake R. Nature 424: 312-315, 2003) that is thought to depend on cortical center-surround inhibitory mechanisms. MEG/EEG was recorded in 27 normal boys aged 8–15 yr while they watched high-contrast black-and-white annular gratings drifting with velocities of 1.2, 3.6, and 6.0°/s and performed a simple detection task. The spatial suppression effect was assessed in a separate psychophysical experiment. MEG gamma oscillation frequency increased while power decreased with increasing velocity of visual motion. In EEG, the effects were less reliable. The frequencies of the velocity-specific gamma peaks were 64.9, 74.8, and 87.1 Hz for the slow, medium, and fast motions, respectively. The frequency of the gamma response elicited during slow and medium velocity of visual motion decreased with subject age, whereas the range of gamma frequency modulation by velocity increased with age. The frequency modulation range predicted spatial suppression even after controlling for the effect of age. We suggest that the modulation of the MEG gamma frequency by velocity of visual motion reflects excitability of cortical inhibitory circuits and can be used to investigate their normal and pathological development in the human brain. PMID:25925324
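
    A minimal sketch of how a velocity-specific gamma peak frequency can be read off a power spectrum is given below; it applies Welch's method to synthetic signals whose gamma bursts sit near the frequencies reported above. The sampling rate, frequency band, and signals are assumptions for illustration, not the MEG analysis used in the study.

```python
# Minimal sketch (assumed analysis, not the authors' code): estimate the gamma
# peak frequency (30-120 Hz) of a sensor time series for each motion-velocity
# condition. The signals here are synthetic stand-ins for MEG data.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

def gamma_peak(signal, fs, fmin=30.0, fmax=120.0):
    freqs, psd = welch(signal, fs=fs, nperseg=1024)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(psd[band])]

# Synthetic "conditions" with gamma bursts near the frequencies reported above.
for velocity, f_gamma in [(1.2, 65.0), (3.6, 75.0), (6.0, 87.0)]:
    sig = np.sin(2 * np.pi * f_gamma * t) + 2.0 * rng.normal(size=t.size)
    print(f"{velocity:>4} deg/s -> gamma peak ~ {gamma_peak(sig, fs):.1f} Hz")
```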

  15. Causal evidence of the involvement of the number form area in the visual detection of numbers and letters.

    PubMed

    Grotheer, Mareike; Ambrus, Géza Gergely; Kovács, Gyula

    2016-05-15

    Recent research suggests the existence of a visual area selectively processing numbers in the human inferior temporal cortex (number form area (NFA); Abboud et al., 2015; Grotheer et al., 2016; Shum et al., 2013). The NFA is thought to be involved in the preferential encoding of numbers over false characters, letters and non-number words (Grotheer et al., 2016; Shum et al., 2013), independently of the sensory modality (Abboud et al., 2015). However, it is not yet clear if this area is mandatory for normal number processing. The present study exploited the fact that high-resolution fMRI can be applied to identify the NFA individually (Grotheer et al., 2016) and tested if transcranial magnetic stimulation (TMS) of this area interferes with stimulus processing in a selective manner. Double-pulse TMS targeted at the right NFA significantly impaired the detection of briefly presented and masked Arabic numbers in comparison to vertex stimulation. This suggests the NFA to be necessary for fluent number processing. Surprisingly, TMS of the NFA also impaired the detection of Roman letters. On the other hand, stimulation of the lateral occipital complex (LO) had neither an effect on the detection of numbers nor on letters. Our results show, for the first time, that the NFA is causally involved in the early visual processing of numbers as well as of letters. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Testing the snake-detection hypothesis: larger early posterior negativity in humans to pictures of snakes than to pictures of other reptiles, spiders and slugs.

    PubMed

    Van Strien, Jan W; Franken, Ingmar H A; Huijding, Jorg

    2014-01-01

    According to the snake detection hypothesis (Isbell, 2006), fear specifically of snakes may have pushed evolutionary changes in the primate visual system allowing pre-attentional visual detection of fearful stimuli. A previous study demonstrated that snake pictures, when compared to spiders or bird pictures, draw more early attention as reflected by larger early posterior negativity (EPN). Here we report two studies that further tested the snake detection hypothesis. In Study 1, we tested whether the enlarged EPN is specific for snakes or also generalizes to other reptiles. Twenty-four healthy, non-phobic women watched the random rapid serial presentation of snake, crocodile, and turtle pictures. The EPN was scored as the mean activity at occipital electrodes (PO3, O1, Oz, PO4, O2) in the 225-300 ms time window after picture onset. The EPN was significantly larger for snake pictures than for pictures of the other reptiles. In Study 2, we tested whether disgust plays a role in the modulation of the EPN and whether preferential processing of snakes also can be found in men. 12 men and 12 women watched snake, spider, and slug pictures. Both men and women exhibited the largest EPN amplitudes to snake pictures, intermediate amplitudes to spider pictures and the smallest amplitudes to slug pictures. Disgust ratings were not associated with EPN amplitudes. The results replicate previous findings and suggest that ancestral priorities modulate the early capture of visual attention.
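
    Following the scoring described above (mean activity at PO3, O1, Oz, PO4, and O2 in the 225-300 ms window), the sketch below computes an EPN-style amplitude from a channels-by-samples ERP array. The array contents, the channel list beyond the named electrodes, the sampling rate, and the epoch timing are hypothetical stand-ins.

```python
# Minimal sketch (assumed scoring, following the description above): the EPN is
# taken as the mean ERP amplitude over occipital electrodes in 225-300 ms.
# `erp` is a hypothetical channels x samples average for one stimulus category.
import numpy as np

fs = 500.0                                   # sampling rate, Hz
epoch_start = -0.1                           # epoch starts 100 ms before onset
channel_names = ["PO3", "O1", "Oz", "PO4", "O2", "Pz", "Cz"]
erp = np.random.default_rng(3).normal(size=(len(channel_names), 400))  # microvolts

def epn(erp, channel_names, fs, epoch_start, tmin=0.225, tmax=0.300,
        electrodes=("PO3", "O1", "Oz", "PO4", "O2")):
    chan_idx = [channel_names.index(ch) for ch in electrodes]
    times = epoch_start + np.arange(erp.shape[1]) / fs
    win = np.where((times >= tmin) & (times <= tmax))[0]
    return erp[np.ix_(chan_idx, win)].mean()

print(f"EPN (mean amplitude, 225-300 ms): {epn(erp, channel_names, fs, epoch_start):.2f} uV")
```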

  17. Distribution of DNA in human Sertoli cell nucleoli.

    PubMed

    Mosgöller, W; Schöfer, C; Derenzini, M; Steiner, M; Maier, U; Wachtler, F

    1993-10-01

    For better understanding of nucleolar architecture, different techniques have been used to localize DNA within the dense fibrillar component (DF) or within the fibrillar centers (FC) by electron microscopy (EM). Since it still remains controversial which components contain DNA, we investigated the distribution of DNA in human Sertoli cells using various approaches. In situ hybridization (ISH) with human total genomic DNA as probe and the use of anti-DNA antibody were followed by immunogold detection. This allowed statistical evaluation of the signal density over individual components. The Feulgen-like osmium-ammine (OA) technique for the selective visualization of DNA was also applied. The anti-DNA antibodies detected DNA in mitochondria, in chromatin, and in the DF of the nucleolus. ISH using human total genomic DNA showed similar labeling patterns. The OA technique revealed DNA filaments in the FC and focal agglomerates of decondensed DNA within the DF. We conclude that (a) EM staining techniques that utilize colloidal gold appear to be less sensitive for DNA detection than the OA method, (b) the DF consists of different domains with different molecular composition, and (c) decondensed DNA is not necessarily confined to one particular nucleolar component.

  18. Detecting activity-evoked pH changes in human brain

    PubMed Central

    Magnotta, Vincent A.; Heo, Hye-Young; Dlouhy, Brian J.; Dahdaleh, Nader S.; Follmer, Robin L.; Thedens, Daniel R.; Welsh, Michael J.; Wemmie, John A.

    2012-01-01

    Localized pH changes have been suggested to occur in the brain during normal function. However, the existence of such pH changes has also been questioned. Lack of methods for noninvasively measuring pH with high spatial and temporal resolution has limited insight into this issue. Here we report that a magnetic resonance imaging (MRI) strategy, T1 relaxation in the rotating frame (T1ρ), is sufficiently sensitive to detect widespread pH changes in the mouse and human brain evoked by systemically manipulating carbon dioxide or bicarbonate. Moreover, T1ρ detected a localized acidosis in the human visual cortex induced by a flashing checkerboard. Lactate measurements and pH-sensitive 31P spectroscopy at the same site also identified a localized acidosis. Consistent with the established role for pH in blood flow recruitment, T1ρ correlated with blood oxygenation level-dependent contrast commonly used in functional MRI. However, T1ρ was not directly sensitive to blood oxygen content. These observations indicate that localized pH fluctuations occur in the human brain during normal function. Furthermore, they suggest a unique functional imaging strategy based on pH that is independent of traditional functional MRI contrast mechanisms. PMID:22566645

  19. Image-based computational quantification and visualization of genetic alterations and tumour heterogeneity

    PubMed Central

    Zhong, Qing; Rüschoff, Jan H.; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J.; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J.; Rupp, Niels J.; Fankhauser, Christian; Buhmann, Joachim M.; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A.; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C.; Jochum, Wolfram; Wild, Peter J.

    2016-01-01

    Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility. PMID:27052161

  20. Image-based computational quantification and visualization of genetic alterations and tumour heterogeneity.

    PubMed

    Zhong, Qing; Rüschoff, Jan H; Guo, Tiannan; Gabrani, Maria; Schüffler, Peter J; Rechsteiner, Markus; Liu, Yansheng; Fuchs, Thomas J; Rupp, Niels J; Fankhauser, Christian; Buhmann, Joachim M; Perner, Sven; Poyet, Cédric; Blattner, Miriam; Soldini, Davide; Moch, Holger; Rubin, Mark A; Noske, Aurelia; Rüschoff, Josef; Haffner, Michael C; Jochum, Wolfram; Wild, Peter J

    2016-04-07

    Recent large-scale genome analyses of human tissue samples have uncovered a high degree of genetic alterations and tumour heterogeneity in most tumour entities, independent of morphological phenotypes and histopathological characteristics. Assessment of genetic copy-number variation (CNV) and tumour heterogeneity by fluorescence in situ hybridization (ISH) provides additional tissue morphology at single-cell resolution, but it is labour intensive with limited throughput and high inter-observer variability. We present an integrative method combining bright-field dual-colour chromogenic and silver ISH assays with an image-based computational workflow (ISHProfiler), for accurate detection of molecular signals, high-throughput evaluation of CNV, expressive visualization of multi-level heterogeneity (cellular, inter- and intra-tumour heterogeneity), and objective quantification of heterogeneous genetic deletions (PTEN) and amplifications (19q12, HER2) in diverse human tumours (prostate, endometrial, ovarian and gastric), using various tissue sizes and different scanners, with unprecedented throughput and reproducibility.

  1. Subnuclear localization, rates and effectiveness of UVC-induced unscheduled DNA synthesis visualized by fluorescence widefield, confocal and super-resolution microscopy.

    PubMed

    Pierzyńska-Mach, Agnieszka; Szczurek, Aleksander; Cella Zanacchi, Francesca; Pennacchietti, Francesca; Drukała, Justyna; Diaspro, Alberto; Cremer, Christoph; Darzynkiewicz, Zbigniew; Dobrucki, Jurek W

    2016-01-01

    Unscheduled DNA synthesis (UDS) is the final stage of the process of repair of DNA lesions induced by UVC. We detected UDS using a DNA precursor, 5-ethynyl-2'-deoxyuridine (EdU). Using wide-field, confocal and super-resolution fluorescence microscopy and normal human fibroblasts, derived from healthy subjects, we demonstrate that the sub-nuclear pattern of UDS detected via incorporation of EdU is different from that when BrdU is used as DNA precursor. EdU incorporation occurs evenly throughout chromatin, as opposed to just a few small and large repair foci detected by BrdU. We attribute this difference to the fact that BrdU antibody is of much larger size than EdU, and its accessibility to the incorporated precursor requires the presence of denatured sections of DNA. It appears that under the standard conditions of immunocytochemical detection of BrdU only fragments of DNA of various length are being denatured. We argue that, compared with BrdU, the UDS pattern visualized by EdU constitutes a more faithful representation of sub-nuclear distribution of the final stage of nucleotide excision repair induced by UVC. Using the optimized integrated EdU detection procedure we also measured the relative amount of the DNA precursor incorporated by cells during UDS following exposure to various doses of UVC. Also described is the high degree of heterogeneity in terms of the UVC-induced EdU incorporation per cell, presumably reflecting various DNA repair efficiencies or differences in the level of endogenous dT competing with EdU within a population of normal human fibroblasts.

  2. Subnuclear localization, rates and effectiveness of UVC-induced unscheduled DNA synthesis visualized by fluorescence widefield, confocal and super-resolution microscopy

    PubMed Central

    Pierzyńska-Mach, Agnieszka; Szczurek, Aleksander; Cella Zanacchi, Francesca; Pennacchietti, Francesca; Drukała, Justyna; Diaspro, Alberto; Cremer, Christoph; Darzynkiewicz, Zbigniew; Dobrucki, Jurek W.

    2016-01-01

    ABSTRACT Unscheduled DNA synthesis (UDS) is the final stage of the process of repair of DNA lesions induced by UVC. We detected UDS using a DNA precursor, 5-ethynyl-2′-deoxyuridine (EdU). Using wide-field, confocal and super-resolution fluorescence microscopy and normal human fibroblasts, derived from healthy subjects, we demonstrate that the sub-nuclear pattern of UDS detected via incorporation of EdU is different from that when BrdU is used as DNA precursor. EdU incorporation occurs evenly throughout chromatin, as opposed to just a few small and large repair foci detected by BrdU. We attribute this difference to the fact that BrdU antibody is of much larger size than EdU, and its accessibility to the incorporated precursor requires the presence of denatured sections of DNA. It appears that under the standard conditions of immunocytochemical detection of BrdU only fragments of DNA of various length are being denatured. We argue that, compared with BrdU, the UDS pattern visualized by EdU constitutes a more faithful representation of sub-nuclear distribution of the final stage of nucleotide excision repair induced by UVC. Using the optimized integrated EdU detection procedure we also measured the relative amount of the DNA precursor incorporated by cells during UDS following exposure to various doses of UVC. Also described is the high degree of heterogeneity in terms of the UVC-induced EdU incorporation per cell, presumably reflecting various DNA repair efficiencies or differences in the level of endogenous dT competing with EdU within a population of normal human fibroblasts. PMID:27097376

  3. The Gap Detection Test: Can It Be Used to Diagnose Tinnitus?

    PubMed Central

    Boyen, Kris; Başkent, Deniz

    2015-01-01

    Objectives: Animals with induced tinnitus showed difficulties in detecting silent gaps in sounds, suggesting that the tinnitus percept may be filling the gap. The main purpose of this study was to evaluate the applicability of this approach to detect tinnitus in human patients. The authors first hypothesized that gap detection would be impaired in patients with tinnitus, and second, that gap detection would be more impaired at frequencies close to the tinnitus frequency of the patient. Design: Twenty-two adults with bilateral tinnitus, 20 age-matched and hearing loss–matched subjects without tinnitus, and 10 young normal-hearing subjects participated in the study. To determine the characteristics of the tinnitus, subjects matched an external sound to their perceived tinnitus in pitch and loudness. To determine the minimum detectable gap, the gap threshold, an adaptive psychoacoustic test was performed three times by each subject. In this gap detection test, four different stimuli, with various frequencies and bandwidths, were presented at three intensity levels each. Results: Similar to previous reports of gap detection, increasing sensation level yielded shorter gap thresholds for all stimuli in all groups. Interestingly, the tinnitus group did not display elevated gap thresholds in any of the four stimuli. Moreover, visual inspection of the data revealed no relation between gap detection performance and perceived tinnitus pitch. Conclusions: These findings show that tinnitus in humans has no effect on the ability to detect gaps in auditory stimuli. Thus, the testing procedure in its present form is not suitable for clinical detection of tinnitus in humans. PMID:25822647
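
    The abstract does not specify the adaptive rule used, so the sketch below shows one common choice, a two-down/one-up staircase converging on the minimum detectable gap, run against a simulated listener. The step size, reversal count, and simulated psychometric function are illustrative assumptions, not the study's exact procedure.

```python
# Minimal sketch (not the study's exact procedure): a 2-down/1-up adaptive
# staircase for the minimum detectable gap duration, with a simulated listener
# whose true threshold is 5 ms. Averaging the last reversals gives the threshold.
import numpy as np

rng = np.random.default_rng(4)
true_threshold_ms = 5.0

def listener_detects(gap_ms):
    """Simulated psychometric function: detection probability rises with gap."""
    p = 1.0 / (1.0 + np.exp(-(gap_ms - true_threshold_ms) / 1.0))
    return rng.random() < p

gap, step = 20.0, 2.0                        # start value and step size in ms
correct_streak, direction, reversals = 0, 0, []
while len(reversals) < 8:
    if listener_detects(gap):
        correct_streak += 1
        if correct_streak == 2:              # 2-down: make the task harder
            correct_streak = 0
            if direction == +1:
                reversals.append(gap)
            direction = -1
            gap = max(gap - step, 0.5)
    else:                                    # 1-up: make the task easier
        correct_streak = 0
        if direction == -1:
            reversals.append(gap)
        direction = +1
        gap += step

print(f"estimated gap threshold: {np.mean(reversals[-6:]):.1f} ms")
```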

  4. Safety assessment in macaques of light exposures for functional two-photon ophthalmoscopy in humans

    PubMed Central

    Schwarz, Christina; Sharma, Robin; Fischer, William S.; Chung, Mina; Palczewska, Grazyna; Palczewski, Krzysztof; Williams, David R.; Hunter, Jennifer J.

    2016-01-01

    Two-photon ophthalmoscopy has potential for in vivo assessment of function of normal and diseased retina. However, light safety of the sub-100 fs laser typically used is a major concern and safety standards are not well established. To test the feasibility of safe in vivo two-photon excitation fluorescence (TPEF) imaging of photoreceptors in humans, we examined the effects of ultrashort pulsed light and the required light levels with a variety of clinical and high resolution imaging methods in macaques. The only measure that revealed a significant effect due to exposure to pulsed light within existing safety standards was infrared autofluorescence (IRAF) intensity. No other structural or functional alterations were detected by other imaging techniques for any of the exposures. Photoreceptors and retinal pigment epithelium appeared normal in adaptive optics images. No effect of repeated exposures on TPEF time course was detected, suggesting that visual cycle function was maintained. If IRAF reduction is hazardous, it is the only hurdle to applying two-photon retinal imaging in humans. To date, no harmful effects of IRAF reduction have been detected. PMID:28018732

  5. The Neural Dynamics of Attentional Selection in Natural Scenes.

    PubMed

    Kaiser, Daniel; Oosterhof, Nikolaas N; Peelen, Marius V

    2016-10-12

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalogaphy data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments. Copyright © 2016 the authors 0270-6474/16/3610522-07$15.00/0.
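
    The cross-decoding logic described above (train on isolated objects, test on scenes) can be sketched at a single time point as below. The sensor counts, noise levels, and classifier choice are illustrative assumptions standing in for the MEG sensor patterns, not the authors' pipeline.

```python
# Minimal sketch (not the authors' pipeline): cross-decoding at a single time
# point -- train a linear classifier on sensor patterns evoked by isolated
# cars vs. people, then test it on patterns from cluttered scenes containing
# one of the two categories. All arrays are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n_train, n_test, n_sensors = 200, 100, 306
category_templates = rng.normal(size=(2, n_sensors))

def simulate(n_trials, noise):
    y = rng.integers(0, 2, size=n_trials)               # 0 = car, 1 = person
    X = category_templates[y] + noise * rng.normal(size=(n_trials, n_sensors))
    return X, y

X_isolated, y_isolated = simulate(n_train, noise=1.0)   # objects in isolation
X_scenes, y_scenes = simulate(n_test, noise=3.0)        # objects embedded in scenes

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_isolated, y_isolated)
print(f"isolated -> scene generalization accuracy: {clf.score(X_scenes, y_scenes):.2f}")
```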

  6. Autonomous spacecraft landing through human pre-attentive vision.

    PubMed

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; de Croon, Guido C H E

    2012-06-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. To date, most effort in this research field has concentrated on hazard avoidance algorithms and landmark detection, often against a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft with optimality principles that guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show that the developed algorithm is indistinguishable from the human subjects with respect to the areas, occurrence, and timing of the retargeting.

  7. Visual Detection of Human Antibodies Using Sugar Chain-Immobilized Fluorescent Nanoparticles: Application as a Point of Care Diagnostic Tool for Guillain-Barré Syndrome.

    PubMed

    Shinchi, Hiroyuki; Yuki, Nobuhiro; Ishida, Hideharu; Hirata, Koichi; Wakao, Masahiro; Suda, Yasuo

    2015-01-01

    Sugar chain binding antibodies have gained substantial attention as biomarkers due to their crucial roles in various disorders. In this study, we developed a simple and quick method for detecting anti-sugar chain antibodies in sera using our previously developed sugar chain-immobilized fluorescent nanoparticles (SFNPs) for point-of-care diagnostics. The sugar chain structure on the SFNPs was modified with the sugar moieties of the GM1 ganglioside via our original linker molecule to detect anti-GM1 antibodies. The structures and densities of the sugar moieties immobilized on the nanoparticles were evaluated in detail using lectins and sera containing anti-GM1 antibodies from patients with Guillain-Barré syndrome, a neurological disorder, as an example of a disease involving anti-sugar chain antibodies. When optimized SFNPs were added to sera from patients with Guillain-Barré syndrome, fluorescent aggregates could be visually detected under UV light within three hours. The sensitivity of the detection method was equivalent to that of the current ELISA method used for the diagnosis of Guillain-Barré syndrome. These results suggest that our method using SFNPs is suitable for point-of-care diagnosis of diseases involving anti-sugar chain antibodies.

  8. Point of care nucleic acid detection of viable pathogenic bacteria with isothermal RNA amplification based paper biosensor

    NASA Astrophysics Data System (ADS)

    Liu, Hongxing; Xing, Da; Zhou, Xiaoming

    2014-09-01

    Food-borne pathogens such as Listeria monocytogenes have been recognized as a major cause of human infections worldwide, leading to substantial health problems. Food-borne pathogen identification needs to be simpler, cheaper and more reliable than the current traditional methods. Here, we have constructed a low-cost paper biosensor for the detection of viable pathogenic bacteria with the naked eye. In this study, an effective isothermal amplification method was used to amplify the hlyA mRNA gene, a specific RNA marker in Listeria monocytogenes. The amplification products were applied to the paper biosensor to perform a visual test, in which endpoint detection was performed using sandwich hybridization assays. When the RNA products migrated along the paper biosensor by capillary action, the gold nanoparticles accumulated at the designated Test line and Control line. Under optimized experimental conditions, as little as 0.5 pg/μL genomic RNA from Listeria monocytogenes could be detected. The whole assay process, including RNA extraction, amplification, and visualization, can be completed within several hours. The developed method is suitable for point-of-care applications to detect food-borne pathogens, as it can effectively overcome the false-positive results caused by amplifying nonviable Listeria monocytogenes.

  9. Implicit Binding of Facial Features During Change Blindness

    PubMed Central

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K.; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  10. Implicit binding of facial features during change blindness.

    PubMed

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  11. Visual detection of driving while intoxicated. Project interim report : identification of visual cues and development of detection methods

    DOT National Transportation Integrated Search

    1979-01-01

    The report describes the initial phase of a two-phase project on the visual, on-the-road detection of driving while intoxicated (DWI). The purpose of the overall project is to develop and test procedures for enhancing on-the-road detection of DWI. Th...

  12. Strand Displacement Amplification Reaction on Quantum Dot-Encoded Silica Bead for Visual Detection of Multiplex MicroRNAs.

    PubMed

    Qu, Xiaojun; Jin, Haojun; Liu, Yuqian; Sun, Qingjiang

    2018-03-06

    The combination of microbead arrays, isothermal amplification, and molecular signaling enables the continued development of next-generation molecular diagnostic techniques. Here we report the implementation of a nicking endonuclease-assisted strand displacement amplification reaction on a quantum dot-encoded microbead (Qbead) and demonstrate its feasibility for a multiplexed miRNA assay in real samples. The Qbead features a well-defined core-shell superstructure with dual-colored quantum dots loaded in the silica core and shell, respectively, exhibiting remarkably high optical encoding stability. Specially designed stem-loop-structured probes were immobilized onto the Qbead for specific target recognition and amplification. In the presence of a low-abundance miRNA target, the target triggered exponential amplification, producing a large quantity of stem-G-quadruplexes, which could be selectively signaled by a fluorescent G-quadruplex intercalator. In a one-step operation, the Qbead-based isothermal amplification and signaling generated an emissive "core-shell-satellite" superstructure, changing the Qbead emission color. The target abundance-dependent emission-color changes of the Qbead allowed direct, visual detection of a specific miRNA target. This visualization method achieved a limit of detection at the subfemtomolar level with a linear dynamic range of 4.5 logs, and point-mutation discrimination capability for precise miRNA analyses. An array of three encoded Qbeads could simultaneously quantify three miRNA biomarkers in ∼500 human hepatoma carcinoma cells. With its advances in ease of operation, multiplexing, and visualization, the isothermal amplification-on-Qbead assay could potentially enable the development of point-of-care diagnostics.

  13. Infrared image enhancement based on the edge detection and mathematical morphology

    NASA Astrophysics Data System (ADS)

    Zhang, Linlin; Zhao, Yuejin; Dong, Liquan; Liu, Xiaohua; Yu, Xiaomei; Hui, Mei; Chu, Xuhong; Gong, Cheng

    2010-11-01

    Un-cooled infrared imaging technology originally developed out of military necessity and is now widely applied in industry, medicine, and scientific research, allowing the infrared radiation temperature distribution of a measured object's surface to be observed visually. The infrared images collected in our laboratory share several characteristics: strong spatial correlation; low contrast and poor visual effect; grayscale presentation without color or shadow, and low resolution; lower definition than visible-light images; and several kinds of noise introduced by random disturbances from the external environment. Digital image processing is widely applied in many areas and has become an important means of extending human vision, but traditional image enhancement methods cannot capture the geometric information of images and tend to amplify noise. To remove noise and improve the visual effect while overcoming these enhancement issues, a mathematical model of the focal plane array (FPA) unit was constructed based on matrix transformation theory, and, according to the characteristics of the FPA, an image enhancement algorithm combining mathematical morphology and edge detection was established. First, the image profile is obtained by edge detection combined with mathematical morphological operators. Then, an ideal background image is obtained by filling the template profile from the original image, on the basis of which image noise can be removed. Experiments show that the proposed algorithm enhances image detail and improves the signal-to-noise ratio.
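
    In the spirit of the approach above, the sketch below combines edge detection with morphological operations to build a structure profile, estimates a smooth background by morphological opening, and subtracts it to enhance detail. It is an illustrative OpenCV approximation, not the paper's exact algorithm; the file name "ir_frame.png" and the kernel sizes are placeholders.

```python
# Minimal sketch in the spirit of the approach above (not the paper's exact
# algorithm): use edge detection plus morphological operations to isolate
# structure, estimate a smooth background by morphological opening, and
# subtract it to enhance detail in a grayscale infrared image.
import cv2
import numpy as np

img = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# 1. Edge profile: Canny edges closed by a small ellipse into a coherent profile.
edges = cv2.Canny(img, 50, 150)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
profile = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

# 2. Background estimate: a large-kernel morphological opening removes detail,
#    leaving only the slowly varying background.
bg_kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (31, 31))
background = cv2.morphologyEx(img, cv2.MORPH_OPEN, bg_kernel)

# 3. Enhancement: subtract the background, keep detail where the profile
#    indicates structure (suppressing residual noise), and stretch to 8 bits.
detail = cv2.subtract(img, background)
detail = cv2.bitwise_and(detail, detail, mask=profile)
enhanced = cv2.normalize(detail, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

cv2.imwrite("ir_enhanced.png", enhanced)
```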

  14. “To see or not to see: that is the question.” The “Protection-Against-Schizophrenia” (PaSZ) model: evidence from congenital blindness and visuo-cognitive aberrations

    PubMed Central

    Landgraf, Steffen; Osterheider, Michael

    2013-01-01

    The causes of schizophrenia are still unknown. For the last 100 years, though, both “absent” and “perfect” vision have been associated with a lower risk for schizophrenia. Hence, vision itself and aberrations in visual functioning may be fundamental to the development and etiological explanations of the disorder. In this paper, we present the “Protection-Against-Schizophrenia” (PaSZ) model, which grades the risk for developing schizophrenia as a function of an individual's visual capacity. We review two vision perspectives: (1) “Absent” vision or how congenital blindness contributes to PaSZ and (2) “perfect” vision or how aberrations in visual functioning are associated with psychosis. First, we illustrate that, although congenitally blind and sighted individuals acquire similar world representations, blind individuals compensate for behavioral shortcomings through neurofunctional and multisensory reorganization. These reorganizations may indicate etiological explanations for their PaSZ. Second, we demonstrate that visuo-cognitive impairments are fundamental for the development of schizophrenia. Deteriorated visual information acquisition and processing contribute to higher-order cognitive dysfunctions and subsequently to schizophrenic symptoms. Finally, we provide different specific therapeutic recommendations for individuals who suffer from visual impairments (who never developed “normal” vision) and individuals who suffer from visual deterioration (who previously had “normal” visual skills). Rather than categorizing individuals as “normal” and “mentally disordered,” the PaSZ model uses a continuous scale to represent psychiatrically relevant human behavior. This not only provides a scientific basis for more fine-grained diagnostic assessments, earlier detection, and more appropriate therapeutic assignments, but it also outlines a trajectory for unraveling the causes of abnormal psychotic human self- and world-perception. PMID:23847557

  15. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Dynamic Changes in Phase-Amplitude Coupling Facilitate Spatial Attention Control in Fronto-Parietal Cortex

    PubMed Central

    Szczepanski, Sara M.; Crone, Nathan E.; Kuperman, Rachel A.; Auguste, Kurtis I.; Parvizi, Josef; Knight, Robert T.

    2014-01-01

    Attention is a core cognitive mechanism that allows the brain to allocate limited resources depending on current task demands. A number of frontal and posterior parietal cortical areas, referred to collectively as the fronto-parietal attentional control network, are engaged during attentional allocation in both humans and non-human primates. Numerous studies have examined this network in the human brain using various neuroimaging and scalp electrophysiological techniques. However, little is known about how these frontal and parietal areas interact dynamically to produce behavior on a fine temporal (sub-second) and spatial (sub-centimeter) scale. We addressed how human fronto-parietal regions control visuospatial attention on a fine spatiotemporal scale by recording electrocorticography (ECoG) signals measured directly from subdural electrode arrays that were implanted in patients undergoing intracranial monitoring for localization of epileptic foci. Subjects (n = 8) performed a spatial-cuing task, in which they allocated visuospatial attention to either the right or left visual field and detected the appearance of a target. We found increases in high gamma (HG) power (70–250 Hz) time-locked to trial onset that remained elevated throughout the attentional allocation period over frontal, parietal, and visual areas. These HG power increases were modulated by the phase of the ongoing delta/theta (2–5 Hz) oscillation during attentional allocation. Critically, we found that the strength of this delta/theta phase-HG amplitude coupling predicted reaction times to detected targets on a trial-by-trial basis. These results highlight the role of delta/theta phase-HG amplitude coupling as a mechanism for sub-second facilitation and coordination within human fronto-parietal cortex that is guided by momentary attentional demands. PMID:25157678
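
    A common way to quantify the coupling described above is a mean-vector-length modulation index computed from the delta/theta phase and the high-gamma amplitude envelope. The sketch below does this on a synthetic coupled signal; the filter design, frequency bands, and signal are assumptions for illustration, not the ECoG analysis used in the study.

```python
# Minimal sketch (assumed analysis, not the study's code): quantify delta/theta
# phase to high-gamma amplitude coupling with a mean-vector-length modulation
# index, computed on a synthetic coupled signal.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)

# Synthetic signal: 100 Hz amplitude waxes and wanes with the 4 Hz phase.
slow = np.sin(2 * np.pi * 4 * t)
signal = slow + (1 + slow) * 0.5 * np.sin(2 * np.pi * 100 * t) + 0.5 * rng.normal(size=t.size)

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

phase = np.angle(hilbert(bandpass(signal, 2, 5, fs)))        # delta/theta phase
amplitude = np.abs(hilbert(bandpass(signal, 70, 250, fs)))   # high-gamma envelope

# Mean-vector-length modulation index: large when high-gamma amplitude
# systematically depends on the low-frequency phase.
mi = np.abs(np.mean(amplitude * np.exp(1j * phase)))
print(f"modulation index: {mi:.3f}")
```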

  17. Ocular changes in TgF344-AD rat model of Alzheimer's disease.

    PubMed

    Tsai, Yuchun; Lu, Bin; Ljubimov, Alexander V; Girman, Sergey; Ross-Cisneros, Fred N; Sadun, Alfredo A; Svendsen, Clive N; Cohen, Robert M; Wang, Shaomei

    2014-01-29

    Alzheimer's disease (AD) is the most common neurodegenerative disorder, characterized by progressive decline in learning, memory, and executive functions. In addition to cognitive and behavioral deficits, vision disturbances have been reported in the early stage of AD, well before the diagnosis is clearly established. To further investigate ocular abnormalities, a novel AD transgenic rat model was analyzed. Transgenic (Tg) rats (TgF344-AD) heterozygous for human mutant APPswe/PS1ΔE9 and age-matched wild type (WT) rats, as well as 20 human postmortem retinal samples from both AD and healthy donors, were used. Visual function in the rodent was analyzed using the optokinetic response and luminance threshold recording from the superior colliculus. Immunohistochemistry on retinal and brain sections was used to detect various markers including amyloid-β (Aβ) plaques. As expected, Aβ plaques were detected in the hippocampus, cortex, and retina of Tg rats. Plaque-like structures were also found in two AD human whole-mount retinas. The choroidal thickness was significantly reduced in both Tg rat eyes and AD human eyes when compared with age-matched controls. Tg rat eyes also showed hypertrophic retinal pigment epithelial cells, inflammatory cells, and upregulation of complement factor C3. Although visual acuity was lower in Tg than in WT rats, there was no significant difference in retinal ganglion cell number or retinal vasculature. In this study, we observed pathological changes in the choroid and in RPE cells in the TgF344-AD rat model; choroidal thinning was further observed in human AD retinas. Along with Aβ deposition, the inflammatory response was manifested by microglial recruitment and complement activation. Further studies are needed to elucidate the significance and mechanisms of these pathological changes.

  18. A Mouse Model of Visual Perceptual Learning Reveals Alterations in Neuronal Coding and Dendritic Spine Density in the Visual Cortex.

    PubMed

    Wang, Yan; Wu, Wei; Zhang, Xian; Hu, Xu; Li, Yue; Lou, Shihao; Ma, Xiao; An, Xu; Liu, Hui; Peng, Jing; Ma, Danyi; Zhou, Yifeng; Yang, Yupeng

    2016-01-01

    Visual perceptual learning (VPL) can improve spatial vision in normally sighted and visually impaired individuals. Although previous studies of humans and large animals have explored the neural basis of VPL, elucidation of the underlying cellular and molecular mechanisms remains a challenge. Owing to the advantages of molecular genetic and optogenetic manipulations, the mouse is a promising model for providing a mechanistic understanding of VPL. Here, we thoroughly evaluated the effects and properties of VPL on spatial vision in C57BL/6J mice using a two-alternative, forced-choice visual water task. Briefly, the mice underwent prolonged training at near the individual threshold of contrast or spatial frequency (SF) for pattern discrimination or visual detection for 35 consecutive days. Following training, the contrast-threshold trained mice showed an 87% improvement in contrast sensitivity (CS) and a 55% gain in visual acuity (VA). Similarly, the SF-threshold trained mice exhibited comparable and long-lasting improvements in VA and significant gains in CS over a wide range of SFs. Furthermore, learning largely transferred across eyes and stimulus orientations. Interestingly, learning could transfer from a pattern discrimination task to a visual detection task, but not vice versa. We validated that this VPL fully restored VA in adult amblyopic mice and old mice. Taken together, these data indicate that mice, as a species, exhibit reliable VPL. Intrinsic signal optical imaging revealed that mice with perceptual training had higher cut-off SFs in primary visual cortex (V1) than those without perceptual training. Moreover, perceptual training induced an increase in the dendritic spine density in layer 2/3 pyramidal neurons of V1. These results indicated functional and structural alterations in V1 during VPL. Overall, our VPL mouse model will provide a platform for investigating the neurobiological basis of VPL.

  19. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of "Green" Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex.

    PubMed

    Rauscher, Franziska G; Plant, Gordon T; James-Galton, Merle; Barbur, John L

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia, often with coexisting cerebral achromatopsia. A patient with this syndrome resulting in a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects, chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (CIE) x,y chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show a symmetric increase in thresholds towards the long wavelength ("red") and middle wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds, with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage to neural substrates in the visual cortex that process colour information but are spectrally non-opponent.

  20. The role of the amygdala and the basal ganglia in visual processing of central vs. peripheral emotional content.

    PubMed

    Almeida, Inês; van Asselen, Marieke; Castelo-Branco, Miguel

    2013-09-01

    In human cognition, most relevant stimuli, such as faces, are processed in central vision. However, it is widely believed that recognition of relevant stimuli (e.g. threatening animal faces) at peripheral locations is also important due to their survival value. Moreover, task instructions have been shown to modulate brain regions involved in threat recognition (e.g. the amygdala). In this respect it is also controversial whether tasks requiring explicit focus on stimulus threat content vs. implicit processing differently engage primitive subcortical structures involved in emotional appraisal. Here we addressed the role of central vs. peripheral processing in the human amygdala using threatening vs. non-threatening animal face stimuli. First, a simple animal face recognition task with threatening and non-threatening animal faces, as well as non-face control stimuli, was employed in naïve subjects (implicit task). A subsequent task was then performed with the same stimulus categories (but different stimuli) in which subjects were told to explicitly detect threat signals. We found lateralized amygdala responses both to the spatial location of stimuli and to the threatening content of faces depending on the task performed: the right amygdala showed increased responses to centrally presented compared to left-presented stimuli specifically during the threat detection task, while the left amygdala was better able to discriminate threatening faces from non-facial displays during the animal face recognition task. Additionally, the right amygdala responded to faces during the threat detection task, but only when they were centrally presented. Moreover, we found no evidence for superior responses of the amygdala to peripheral stimuli. Importantly, we found that striatal regions activate differentially depending on peripheral vs. central processing of threatening faces. Accordingly, peripheral processing of these stimuli more strongly activated the putamen, while central processing mainly engaged the caudate nucleus. We conclude that the human amygdala has a central bias for face stimuli, and that visual processing recruits different striatal regions, putaminal or caudate-based, depending on the task and on whether peripheral or central visual processing is involved. © 2013 Elsevier Ltd. All rights reserved.

  1. Rapid visual and spectrophotometric nitrite detection by cyclometalated ruthenium complex.

    PubMed

    Lo, Hoi-Shing; Lo, Ka-Wai; Yeung, Chi-Fung; Wong, Chun-Yuen

    2017-10-16

    Quantitative determination of nitrite ion (NO₂⁻) is of great importance in environmental and clinical investigations. A rapid visual and spectrophotometric assay for NO₂⁻ detection was developed based on a newly designed ruthenium complex, [Ru(npy)([9]aneS₃)(CO)](ClO₄) (denoted as RuNPY; npy = 2-(1-naphthyl)pyridine, [9]aneS₃ = 1,4,7-trithiacyclononane). This complex traps NO⁺ produced in acidified NO₂⁻ solution, and yields observable color change within 1 min at room temperature. The assay features excellent dynamic range (1-840 μmol L⁻¹) and high selectivity, and its limit of detection (0.39 μmol L⁻¹) is also well below the guideline values for drinking water recommended by WHO and U.S. EPA. Practical use of this assay in tap water and human urine was successfully demonstrated. Overall, the rapidity and selectivity of this assay overcome the problems suffered by the commonly used modified Griess assays for nitrite determination. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Assessment of in vitro killing assays for detecting praziquantel-induced death in Posthodiplostomum minimum metacercariae.

    PubMed

    Bader, Chris; Jesudoss Chelladurai, Jeba; Starling, David E; Jones, Douglas E; Brewer, Matthew T

    2017-10-01

    Control of parasitic infections may be achieved by eliminating developmental stages present within intermediate hosts, thereby disrupting the parasite life cycle. For several trematodes relevant to human and veterinary medicine, this involves targeting the metacercarial stage found in fish intermediate hosts. Treatment of fish with praziquantel is one potential approach for targeting the metacercarial stage. To date, studies investigating praziquantel-induced metacercarial death in fish rely on counting parasites and visually assessing morphology or movement. In this study, we investigate quantitative methods for detecting praziquantel-induced death using a Posthodiplostomum minimum model. Our results revealed that propidium iodide staining accurately identified praziquantel-induced death and the level of staining was proportional to the concentration of praziquantel. In contrast, detection of ATP, resazurin metabolism, and trypan blue staining were poor indicators of metacercarial death. The propidium iodide method offers an advantage over simple visualization of parasite movement and could be used to determine EC₅₀ values relevant for comparison of praziquantel sensitivity or resistance. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing the amplitude spectrum bears a specific relation to the size of the salient region. Based on this observation and on bottom-up saliency detection characterized by spectrum scale-space analysis of natural images, we propose to detect visual saliency, especially for salient objects of different sizes and locations, via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also preserve the saliency maps corresponding to different salient objects, retaining meaningful saliency information through adaptive weighted combination. Quantitative and qualitative comparisons are evaluated with three kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
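
    The core frequency-domain idea, smoothing the (log) amplitude spectrum at some scale, recombining it with the original phase, and blurring the reconstruction, can be sketched as below. The automatic scale-selection criterion and adaptive weighted combination proposed in the paper are not reproduced here; the naive average over a few scales, the image size, and the file names are placeholders.

```python
# Minimal sketch of frequency-domain saliency by amplitude-spectrum smoothing
# (the paper's automatic scale selection and adaptive weighting are not
# reproduced): suppress the smoothed ("expected") amplitude spectrum, keep the
# phase, and blur the reconstruction to obtain a saliency map.
import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder
img = cv2.resize(img, (128, 128))

spectrum = np.fft.fft2(img)
amplitude, phase = np.abs(spectrum), np.angle(spectrum)

def saliency_at_scale(amplitude, phase, sigma):
    """Saliency map for one smoothing scale of the log-amplitude spectrum."""
    log_amp = np.log1p(amplitude)
    smoothed = cv2.GaussianBlur(log_amp, (0, 0), sigmaX=sigma)
    residual = log_amp - smoothed                      # what the scene does not predict
    recon = np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))
    sal = np.abs(recon) ** 2
    return cv2.GaussianBlur(sal, (0, 0), sigmaX=2.5)

# Different smoothing scales favour salient objects of different sizes.
maps = [saliency_at_scale(amplitude, phase, sigma) for sigma in (1, 4, 16)]
saliency = np.mean(maps, axis=0)                       # naive combination for illustration
saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)
cv2.imwrite("saliency.png", (saliency * 255).astype(np.uint8))
```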

  4. Human visual function in the North Carolina clinical study on possible estuary-associated syndrome.

    PubMed

    Hudnell, H K; House, D; Schmid, J; Koltai, D; Stopford, W; Wilkins, J; Savitz, D A; Swinker, M; Music, S

    2001-04-20

    The U.S. Environmental Protection Agency assisted the North Carolina Department of Health and Human Services in conducting a study to investigate the potential for an association between fish kills in the North Carolina estuary system and the risk for persistent health effects. Impetus for the study was recent evidence suggesting that estuarine dinoflagellates, including members of the toxic Pfiesteria complex (TPC), P. piscicida and P. schumwayae, may release a toxin(s) that kills fish and adversely affects human health. This report describes one component of the study in which visual system function was assessed. Participants working primarily in estuaries inhabited by TPC or in off-shore waters thought not to contain TPC were studied. The potentially exposed estuary (n = 22) and unexposed offshore (n = 20) workers were matched for age, gender, and education. Visual acuity did not differ significantly between the cohorts, but visual contrast sensitivity (VCS), an indicator of visual pattern-detection ability for stimuli of various sizes, was significantly reduced by about 30% in the estuary relative to the offshore cohort. A further analysis that excluded participants having a history possibly predictive of neuropsychological impairment showed a similar VCS reduction. Additional analyses indicated that differences between the cohorts in age, education, smoking, alcohol consumption, and total time spent on any water did not account for the difference in VCS. Exploratory analyses suggested a possible association between the magnitude of VCS reduction and hours spent in contact with a fish kill. The profile of VCS deficit across stimulus sizes resembled that seen in organic solvent-exposed workers, but an assessment of occupational solvent, and other neurotoxicant, exposures did not indicate differences between the cohorts. These results suggest that factor(s) associated with the North Carolina estuaries, including the possibility of exposure to TPC toxin(s), may impair visual system function.

  5. Structural and functional correlates of visual field asymmetry in the human brain by diffusion kurtosis MRI and functional MRI.

    PubMed

    O'Connell, Caitlin; Ho, Leon C; Murphy, Matthew C; Conner, Ian P; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C

    2016-11-09

    Human visual performance has been observed to show superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine whether the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI, respectively, in 15 healthy individuals at 3 T. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In diffusion kurtosis MRI, the brain regions mapping to the lower visual field showed higher mean kurtosis, but not fractional anisotropy or mean diffusivity compared with the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing.

  6. Learning to predict where human gaze is using quaternion DCT based regional saliency detection

    NASA Astrophysics Data System (ADS)

    Li, Ting; Xu, Yi; Zhang, Chongyang

    2014-09-01

    Many current visual attention approaches use semantic features to accurately capture human gaze. However, these approaches carry a high computational cost and can hardly be applied to everyday use. Recently, quaternion-based saliency detection models, such as PQFT (phase spectrum of the Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirements of human gaze tracking tasks. However, these methods apply PQFT and QDCT globally to locate jump edges in the input, and therefore can hardly detect object boundaries accurately. To address this problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our saliency confidence measure distinguishes patches belonging to the salient object from those belonging to the background by deciding whether image patches belong to the same region: when an image patch belongs to a region consisting of other salient patches, that patch should be salient as well. We therefore use the saliency confidence map to derive background and foreground weights with which to optimize the saliency map obtained by QDCT; the optimization is solved by a least-squares method. The proposed approach unifies local and global saliency by combining QDCT with similarity measurements between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall (PR) curves, the mean absolute error (MAE) and area-under-curve (AUC) measures. In comparison with most state-of-the-art models, our approach achieves higher consistency with human perception without training, predicts human gaze accurately even in cluttered backgrounds, and achieves a better compromise between speed and accuracy.
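
    To make the transform-based saliency idea above concrete, the sketch below implements a plain per-channel DCT "image signature" saliency map (keep only the signs of the DCT coefficients, invert, square, and smooth). It is only an illustrative baseline in the same family, not the quaternion QDCT formulation or the superpixel confidence optimization described in the abstract; the blur width and the placeholder input are assumptions.

        # Per-channel DCT "image signature" saliency -- an illustrative baseline,
        # not the quaternion QDCT + superpixel-confidence method described above.
        import numpy as np
        from scipy.fftpack import dct, idct
        from scipy.ndimage import gaussian_filter

        def dct2(x):
            return dct(dct(x, norm='ortho', axis=0), norm='ortho', axis=1)

        def idct2(x):
            return idct(idct(x, norm='ortho', axis=0), norm='ortho', axis=1)

        def signature_saliency(img, sigma=5.0):
            """Saliency from the signs of the 2-D DCT, summed over color channels."""
            img = img.astype(np.float64) / 255.0
            if img.ndim == 2:
                img = img[..., None]
            sal = np.zeros(img.shape[:2])
            for c in range(img.shape[2]):
                recon = idct2(np.sign(dct2(img[..., c])))  # keep only coefficient signs
                sal += recon ** 2                          # squared reconstruction
            sal = gaussian_filter(sal, sigma)              # smooth to region level
            return (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)

        # Example on a random placeholder image (replace with a real H x W x 3 array):
        saliency_map = signature_saliency(np.random.randint(0, 256, (240, 320, 3)))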

  7. Incorporation of operator knowledge for improved HMDS GPR classification

    NASA Astrophysics Data System (ADS)

    Kennedy, Levi; McClelland, Jessee R.; Walters, Joshua R.

    2012-06-01

    The Husky Mine Detection System (HMDS) detects and alerts operators to potential threats observed in ground-penetrating radar (GPR) data. In the current system architecture, the classifiers have been trained using available data from multiple training sites. Changes in target types, clutter types, and operational conditions may result in statistical differences between the training data and the testing data for the underlying features used by the classifier, potentially resulting in an increased false alarm rate or a lower probability of detection for the system. In the current mode of operation, the automated detection system alerts the human operator when a target-like object is detected. The operator then uses data visualization software, contextual information, and human intuition to decide whether the alarm presented is an actual target or a false alarm. When the statistics of the training data and the testing data are mismatched, the automated detection system can overwhelm the analyst with an excessive number of false alarms. This is evident in the performance of, and the data collected from, deployed systems. This work demonstrates that analyst feedback can be successfully used to re-train a classifier to account for variable testing data statistics not originally captured in the initial training data.
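
    The abstract does not specify the HMDS classifier or its features, but the operator-in-the-loop re-training it describes can be sketched generically: an incremental classifier is updated with alarms the operator has adjudicated as target or false alarm. The following is a minimal sketch under assumed features, labels and model choice, none of which come from the source.

        # Generic sketch of operator-feedback re-training -- not the actual HMDS
        # classifier; features, labels and model choice are illustrative assumptions.
        import numpy as np
        from sklearn.linear_model import SGDClassifier

        rng = np.random.default_rng(0)

        # Stand-in "training site" data: one feature vector per prescreener alarm,
        # labelled 1 = target, 0 = false alarm.
        X_train = rng.normal(size=(500, 16))
        y_train = (X_train[:, 0] > 0).astype(int)

        clf = SGDClassifier(random_state=0)
        clf.partial_fit(X_train, y_train, classes=[0, 1])    # initial training

        # In the field the feature statistics drift, and the operator adjudicates
        # each presented alarm (target vs. false alarm).
        X_field = rng.normal(loc=0.4, size=(60, 16))          # shifted statistics
        operator_labels = (X_field[:, 0] > 0.4).astype(int)   # operator's calls

        # Incremental re-training on the operator feedback, in small batches.
        for i in range(0, len(X_field), 10):
            clf.partial_fit(X_field[i:i + 10], operator_labels[i:i + 10])

        # The updated classifier now scores new alarms under the drifted statistics.
        print(clf.decision_function(X_field[:5]))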

  8. Visual search for changes in scenes creates long-term, incidental memory traces.

    PubMed

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  9. A method to determine the impact of reduced visual function on nodule detection performance.

    PubMed

    Thompson, J D; Lança, C; Lança, L; Hogg, P

    2017-02-01

    In this study we aim to validate a method to assess the impact of reduced visual function on observer performance concurrently with a nodule detection task. Three consultant radiologists completed a nodule detection task under three conditions: without visual defocus (0.00 Dioptres; D), and with two different magnitudes of visual defocus (-1.00 D and -2.00 D). Defocus was applied with lenses and visual function was assessed prior to each image evaluation. Observers evaluated the same cases on each occasion; these comprised 50 abnormal cases containing 1-4 simulated nodules (5, 8, 10 and 12 mm spherical diameter, 100 HU) placed within a phantom, and 25 normal cases (images containing no nodules). Data were collected under the free-response paradigm and analysed using RJafroc. A difference in nodule detection performance would be considered significant at p < 0.05. All observers had acceptable visual function prior to beginning the nodule detection task. Visual acuity was reduced to an unacceptable level for two observers when defocussed to -1.00 D and for one observer when defocussed to -2.00 D. Stereoacuity was unacceptable for one observer when defocussed to -2.00 D. Despite unsatisfactory visual function in the presence of defocus, we were unable to find a statistically significant difference in nodule detection performance (F(2,4) = 3.55, p = 0.130). A method to assess visual function and observer performance is proposed. In this pilot evaluation we were unable to detect any difference in nodule detection performance when using lenses to reduce visual function. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  10. Bowel perforation detection using metabolic fluorescent chlorophylls

    NASA Astrophysics Data System (ADS)

    Han, Jung Hyun; Jo, Young Goun; Kim, Jung Chul; Choi, Sujeong; Kang, Hoonsoo; Kim, Yong-Chul; Hwang, In-Wook

    2016-03-01

    Several attempts have been made to detect disease using fluorescent materials. Here we introduce chlorophyll derivatives from food plants, which emit at longer wavelengths (>650 nm) than the intrinsic fluorescence of tissues and organs, for the detection of bowel perforation. To assess the feasibility of fluorescence spectroscopy as a monitoring sensor for bowel perforation, fluorescence from the organs of rodent models and from rodent and human intestinal and peritoneal fluids was analyzed. In IVIS fluorescence images of rodent abdominal organs, visualizing the perforated area alone was possible only when the image threshold was very finely controlled; in general, both the perforated bowel and normal bowel filled with large amounts of chlorophyll derivatives fluoresced. Because fluorescence from chlorophyll derivatives penetrates the normal bowel wall, it is difficult to distinguish the perforation site from normal bowel by direct visualization of fluorescence alone. However, when the bowel is perforated, intestinal fluid containing chlorophyll derivatives from food contents can leak from the perforation site, producing brighter, longer-wavelength emissions than pure peritoneal fluid or the organs themselves. Peritoneal fluid mixed with intestinal fluid showed much brighter emission at longer wavelengths (>650 nm) than pure peritoneal fluid. In addition, irrigation fluid used to cleanse the organs and peritoneal cavity, consisting of mixed intestinal and peritoneal fluid diluted with physiologic saline, can also be monitored for evidence of bowel perforation during surgery.

  11. Recognition and localization of relevant human behavior in videos

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Burghouts, Gertjan; de Penning, Leo; Hanckmann, Patrick; ten Hove, Johan-Martijn; Korzec, Sanne; Kruithof, Maarten; Landsmeer, Sander; van Leeuwen, Coen; van den Broek, Sebastiaan; Halma, Arvid; den Hollander, Richard; Schutte, Klamer

    2013-06-01

    Ground surveillance is normally performed by human assets, since it requires visual intelligence. However, especially for military operations, this can be dangerous and is very resource intensive. Therefore, unmanned autonomous visual-intelligence systems are desired. In this paper, we present an improved system that can recognize actions of a human and interactions between multiple humans. Central to the new system is our agent-based architecture. The system is trained on thousands of videos and evaluated on realistic persistent surveillance data in the DARPA Mind's Eye program, with hours of videos of challenging scenes. The results show that our system is able to track people, detect and localize events, and discriminate between different behaviors, and it performs 3.4 times better than our previous system.

  12. Object grouping based on real-world regularities facilitates perception by reducing competitive interactions in visual cortex

    PubMed Central

    Kaiser, Daniel; Stein, Timo; Peelen, Marius V.

    2014-01-01

    In virtually every real-life situation humans are confronted with complex and cluttered visual environments that contain a multitude of objects. Because of the limited capacity of the visual system, objects compete for neural representation and cognitive processing resources. Previous work has shown that such attentional competition is partly object based, such that competition among elements is reduced when these elements perceptually group into an object based on low-level cues. Here, using functional MRI (fMRI) and behavioral measures, we show that the attentional benefit of grouping extends to higher-level grouping based on the relative position of objects as experienced in the real world. An fMRI study designed to measure competitive interactions among objects in human visual cortex revealed reduced neural competition between objects when these were presented in commonly experienced configurations, such as a lamp above a table, relative to the same objects presented in other configurations. In behavioral visual search studies, we then related this reduced neural competition to improved target detection when distracter objects were shown in regular configurations. Control studies showed that low-level grouping could not account for these results. We interpret these findings as reflecting the grouping of objects based on higher-level spatial-relational knowledge acquired through a lifetime of seeing objects in specific configurations. This interobject grouping effectively reduces the number of objects that compete for representation and thereby contributes to the efficiency of real-world perception. PMID:25024190

  13. Blindness and visual impairment in opera.

    PubMed

    Aydin, Pinar; Ritch, Robert; O'Dwyer, John

    2018-01-01

    The performing arts mirror the human condition. This study sought to analyze the reasons for inclusion of visually impaired characters in opera, the cause of the blindness or near blindness, and the dramatic purpose of the blindness in the storyline. We reviewed operas from the 18th century to 2010 and included all characters with ocular problems. We classified the cause of each character's ocular problem (organic, nonorganic, and other) in relation to the thematic setting of the opera: biblical and mythical, blind beggars or blind musicians, historical (real or fictional characters), and contemporary or futuristic. Cases of blindness in 55 characters (2 as a choir) from 38 operas were detected over 3 centuries of repertoire: 11 had trauma-related visual impairment, 5 had congenital blindness, 18 had visual impairment of unknown cause, 9 had psychogenic or malingering blindness, and 12 were symbolic or miracle-related. One opera featured an ophthalmologist curing a patient. The research illustrates that visual impairment was frequently used as an artistic device to enhance the intent and situate an opera in its time.

  14. Chromatic and Achromatic Spatial Resolution of Local Field Potentials in Awake Cortex

    PubMed Central

    Jansen, Michael; Li, Xiaobing; Lashgari, Reza; Kremkow, Jens; Bereshpolova, Yulia; Swadlow, Harvey A.; Zaidi, Qasim; Alonso, Jose-Manuel

    2015-01-01

    Local field potentials (LFPs) have become an important measure of neuronal population activity in the brain and could provide robust signals to guide the implant of visual cortical prosthesis in the future. However, it remains unclear whether LFPs can detect weak cortical responses (e.g., cortical responses to equiluminant color) and whether they have enough visual spatial resolution to distinguish different chromatic and achromatic stimulus patterns. By recording from awake behaving macaques in primary visual cortex, here we demonstrate that LFPs respond robustly to pure chromatic stimuli and exhibit ∼2.5 times lower spatial resolution for chromatic than achromatic stimulus patterns, a value that resembles the ratio of achromatic/chromatic resolution measured with psychophysical experiments in humans. We also show that, although the spatial resolution of LFP decays with visual eccentricity as is also the case for single neurons, LFPs have higher spatial resolution and show weaker response suppression to low spatial frequencies than spiking multiunit activity. These results indicate that LFP recordings are an excellent approach to measure spatial resolution from local populations of neurons in visual cortex including those responsive to color. PMID:25416722

  15. Causal capture effects in chimpanzees (Pan troglodytes).

    PubMed

    Matsuno, Toyomi; Tomonaga, Masaki

    2017-01-01

    Extracting a cause-and-effect structure from the physical world is an important demand for animals living in dynamically changing environments. Human perceptual and cognitive mechanisms are known to be sensitive and tuned to detect and interpret such causal structures. In contrast to rigorous investigations of human causal perception, the phylogenetic roots of this perception are not well understood. In the present study, we aimed to investigate the susceptibility of nonhuman animals to mechanical causality by testing whether chimpanzees perceived an illusion called causal capture (Scholl & Nakayama, 2002). Causal capture is a phenomenon in which a type of bistable visual motion of objects is perceived as causal collision due to a bias from a co-occurring causal event. In our experiments, we assessed the susceptibility of perception of a bistable stream/bounce motion event to a co-occurring causal event in chimpanzees. The results show that, similar to in humans, causal "bounce" percepts were significantly increased in chimpanzees with the addition of a task-irrelevant causal bounce event that was synchronously presented. These outcomes suggest that the perceptual mechanisms behind the visual interpretation of causal structures in the environment are evolutionarily shared between human and nonhuman animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Real-time imaging of single neuronal cell apoptosis in patients with glaucoma

    PubMed Central

    Normando, Eduardo M.; Cardoso, M. Jorge; Miodragovic, Serge; Jeylani, Seham; Davis, Benjamin M.; Guo, Li; Ourselin, Sebastien; A’Hern, Roger; Bloom, Philip A.

    2017-01-01

    Abstract See Herms and Schön (doi:10.1093/brain/awx100) for a scientific commentary on this article. Retinal cell apoptosis occurs in many ocular neurodegenerative conditions including glaucoma—the major cause of irreversible blindness worldwide. Using a new imaging technique that we have called DARC (detection of apoptosing retinal cells), which until now has only been demonstrated in animal models, we assessed if annexin 5 labelled with fluorescent dye DY-776 (ANX776) could be used safely in humans to identify retinal cell apoptosis. Eight patients with glaucomatous neurodegeneration and evidence of progressive disease, and eight healthy subjects were randomly assigned to intravenous ANX776 doses of 0.1, 0.2, 0.4 and 0.5 mg in an open-label, phase 1 clinical trial. In addition to assessing the safety, tolerability and pharmacokinetics of ANX776, the study aimed to explore whether DARC could successfully visualize individual retinal cell apoptosis in vivo in humans, with the DARC count defined as the total number of unique ANX776-labelled spots. DARC enabled retinal cell apoptosis to be identified in the human retina using ANX776. Single ANX776-labelled cells were visualized in a dose-dependent pattern (P < 0.001) up to 6 h after injection. The DARC count was significantly higher (2.37-fold, 95% confidence interval: 1.4–4.03, P = 0.003) in glaucoma patients compared to healthy controls, and was significantly (P = 0.045) greater in patients who later showed increasing rates of disease progression, based on either optic disc, retinal nerve fibre layer or visual field parameters. Additionally, the DARC count significantly correlated with decreased central corneal thickness (Spearman’s R = −0.68, P = 0.006) and increased cup-disc ratios (Spearman’s R = 0.47, P = 0.038) in glaucoma patients and with increased age (Spearman’s R = 0.77, P = 0.001) in healthy controls. Finally, ANX776 was found to be safe and well-tolerated with no serious adverse events, and a short half-life (10–36 min). This proof-of-concept study demonstrates that retinal cell apoptosis can be identified in the human retina with increased levels of activity in glaucomatous neurodegenerative disease. To our knowledge, this is the first time individual neuronal apoptosis has been visualized in vivo in humans and is the first demonstration of detection of individual apoptotic cells in a neurodegenerative disease. Furthermore, our results suggest the level of apoptosis (‘DARC count’) is predictive of disease activity, indicating the potential of DARC as a surrogate marker. Although further trials are clearly needed, this study validates experimental findings supporting the use of DARC as a method of detection and monitoring of patients with glaucomatous neurodegeneration, where retinal ganglion cell apoptosis is an established process and where there is a real need for tools to non-invasively assess treatment efficacy. PMID:28449038

  17. The touchscreen operant platform for assessing executive function in rats and mice

    PubMed Central

    Mar, Adam C.; Horner, Alexa E.; Nilsson, Simon R.O.; Alsiö, Johan; Kent, Brianne A.; Kim, Chi Hun; Holmes, Andrew; Saksida, Lisa M.; Bussey, Timothy J.

    2014-01-01

    This protocol details a subset of assays developed within the touchscreen platform to measure aspects of executive function in rodents. Three main procedures are included: Extinction, measuring the rate and extent of curtailing a response that was previously, but is no longer, associated with reward; Reversal Learning, measuring the rate and extent of switching a response toward a visual stimulus that was previously not, but has become, associated with reward (and away from a visual stimulus that was previously, but is no longer, rewarded); and the 5-Choice Serial Reaction Time (5-CSRT) task, gauging the ability to selectively detect and appropriately respond to briefly presented, spatially unpredictable visual stimuli. These methods were designed to assess both complementary and overlapping constructs, including selective and divided visual attention, inhibitory control, flexibility, impulsivity and compulsivity. The procedures comprise part of a wider touchscreen test battery assessing cognition in rodents with high potential for translation to human studies. PMID:24051960

  18. Assessing morphology and function of the semicircular duct system: introducing new in-situ visualization and software toolbox

    PubMed Central

    David, R.; Stoessel, A.; Berthoz, A.; Spoor, F.; Bennequin, D.

    2016-01-01

    The semicircular duct system is part of the sensory organ of balance and essential for navigation and spatial awareness in vertebrates. Its function in detecting head rotations has been modelled with increasing sophistication, but the biomechanics of actual semicircular duct systems has rarely been analyzed, foremost because the fragile membranous structures in the inner ear are hard to visualize undistorted and in full. Here we present a new, easy-to-apply and non-invasive method for three-dimensional in-situ visualization and quantification of the semicircular duct system, using X-ray micro tomography and tissue staining with phosphotungstic acid. Moreover, we introduce Ariadne, a software toolbox which provides comprehensive and improved morphological and functional analysis of any visualized duct system. We demonstrate the potential of these methods by presenting results for the duct system of humans, the squirrel monkey and the rhesus macaque, making comparisons with past results from neurophysiological, oculometric and biomechanical studies. Ariadne is freely available at http://www.earbank.org. PMID:27604473

  19. Top-Down Visual Saliency via Joint CRF and Dictionary Learning.

    PubMed

    Yang, Jimei; Yang, Ming-Hsuan

    2017-03-01

    Top-down visual saliency is an important module of visual attention. In this work, we propose a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a visual dictionary. The proposed model incorporates a layered structure from top to bottom: CRF, sparse coding and image patches. With sparse coding as an intermediate layer, the CRF is learned in a feature-adaptive manner; meanwhile, with the CRF as the output layer, the dictionary is learned under structured supervision. For efficient and effective joint learning, we develop a max-margin approach via a stochastic gradient descent algorithm. Experimental results on the Graz-02 and PASCAL VOC datasets show that our model performs favorably against state-of-the-art top-down saliency methods for target object localization. In addition, the dictionary update significantly improves the performance of our model. We demonstrate the merits of the proposed top-down saliency model by applying it to prioritizing object proposals for detection and predicting human fixations.
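
    The joint CRF-and-dictionary training is beyond a short snippet, but the layered idea (image patches, then sparse codes over a dictionary, then a learned linear scorer) can be sketched in a simplified, non-joint form. The patch size, dictionary size, and the use of logistic regression in place of the CRF layer are assumptions for illustration, not the authors' method.

        # Highly simplified, non-joint sketch of the layered structure: patches ->
        # sparse codes -> linear saliency scorer. Logistic regression stands in for
        # the CRF layer; the dictionary is learned separately, unlike the joint
        # max-margin training described in the abstract.
        import numpy as np
        from sklearn.decomposition import MiniBatchDictionaryLearning
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)

        # Toy data: 8x8 grayscale patches, label 1 if the patch comes from the target.
        patches = rng.normal(size=(2000, 64))
        labels = (patches[:, :8].mean(axis=1) > 0).astype(int)

        # Layer 1: learn a visual dictionary and sparse-code every patch.
        dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
        codes = dico.fit_transform(patches)        # sparse coefficients per patch

        # Layer 2: a linear scorer on the sparse codes (stand-in for the CRF unary term).
        scorer = LogisticRegression(max_iter=1000).fit(codes, labels)

        # Per-patch top-down saliency = probability of belonging to the target category.
        saliency = scorer.predict_proba(codes)[:, 1]
        print(saliency[:5])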

  20. An evaluation of attention models for use in SLAM

    NASA Astrophysics Data System (ADS)

    Dodge, Samuel; Karam, Lina

    2013-12-01

    In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points, but it is unclear which attention model is best for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
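
    As a concrete illustration of the evaluation protocol, the sketch below measures keypoint repeatability for two of the traditional detectors mentioned (SIFT and FAST) under a known homography: detect keypoints in the original and warped images, map the original keypoints through the homography, and count how many land within a small tolerance of an independently detected keypoint. The tolerance, transform, keypoint cap and placeholder image are assumptions, and the saliency-model detectors themselves are not reproduced here.

        # Keypoint repeatability under a known homography -- a sketch of the
        # evaluation protocol only; parameter values are assumptions.
        import cv2
        import numpy as np

        def repeatability(detector, img, H, tol=3.0, max_kp=500):
            h, w = img.shape[:2]
            warped = cv2.warpPerspective(img, H, (w, h))
            kp1 = sorted(detector.detect(img, None), key=lambda k: -k.response)[:max_kp]
            kp2 = sorted(detector.detect(warped, None), key=lambda k: -k.response)[:max_kp]
            if not kp1 or not kp2:
                return 0.0
            # Map the original keypoints into the warped image via the homography.
            pts1 = cv2.perspectiveTransform(
                np.float32([k.pt for k in kp1]).reshape(-1, 1, 2), H).reshape(-1, 2)
            pts2 = np.float32([k.pt for k in kp2])
            # A keypoint "repeats" if its mapped location lies within tol pixels of
            # some keypoint detected independently in the warped image.
            d = np.linalg.norm(pts1[:, None, :] - pts2[None, :, :], axis=2)
            return float(np.mean(d.min(axis=1) < tol))

        img = np.random.randint(0, 256, (240, 320), dtype=np.uint8)  # placeholder frame
        H = np.array([[1.0, 0.05, 5.0], [0.0, 1.0, -3.0], [0.0, 0.0, 1.0]])  # mild warp

        for name, det in [('SIFT', cv2.SIFT_create()),
                          ('FAST', cv2.FastFeatureDetector_create())]:
            print(name, repeatability(det, img, H))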

  1. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams for processing visual information: a dorsal stream and a ventral stream. The negative potential N170, or its magnetic counterpart M170, is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), a visual masking technique based on binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three types of invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool images and the other images. These results suggest that the alpha-band rhythm can also be modulated by unconscious visual images.

  2. Visual motion detection and habitat preference in Anolis lizards.

    PubMed

    Steinberg, David S; Leal, Manuel

    2016-11-01

    The perception of visual stimuli has been a major area of inquiry in sensory ecology, and much of this work has focused on coloration. However, for visually oriented organisms, the process of visual motion detection is often equally crucial to survival and reproduction. Despite the importance of motion detection to many organisms' daily activities, the degree of interspecific variation in the perception of visual motion remains largely unexplored. Furthermore, the factors driving this potential variation (e.g., ecology or evolutionary history) along with the effects of such variation on behavior are unknown. We used a behavioral assay under laboratory conditions to quantify the visual motion detection systems of three species of Puerto Rican Anolis lizard that prefer distinct structural habitat types. We then compared our results to data previously collected for anoles from Cuba, Puerto Rico, and Central America. Our findings indicate that general visual motion detection parameters are similar across species, regardless of habitat preference or evolutionary history. We argue that these conserved sensory properties may drive the evolution of visual communication behavior in this clade.

  3. Blue Whale Visual and Acoustic Encounter Rates in the Southern California Bight

    DTIC Science & Technology

    2007-07-01

    blue whale (Balaenoptera musculus) visual and acoustic encounter rates was quantitatively evaluated using hourly counts of detected whales during...surveys occurring in April, there were visual and acoustic detections of blue whales in all surveyed months and regions. Encounter rate is...difference between acoustic encounters of singing whales and visual encounters suggest seasonal variation in the ability of each method to detect blue

  4. Attentional bias to briefly presented emotional distractors follows a slow time course in visual cortex.

    PubMed

    Müller, Matthias M; Andersen, Søren K; Hindi Attar, Catherine

    2011-11-02

    A central controversy in the field of attention is how the brain deals with emotional distractors and to what extent they capture attentional processing resources reflexively due to their inherent significance for guidance of adaptive behavior and survival. Especially, the time course of competitive interactions in early visual areas and whether masking of briefly presented emotional stimuli can inhibit biasing of processing resources in these areas is currently unknown. We recorded frequency-tagged potentials evoked by a flickering target detection task in the foreground of briefly presented emotional or neutral pictures that were followed by a mask in human subjects. We observed greater competition for processing resources in early visual cortical areas with shortly presented emotional relative to neutral pictures ~275 ms after picture offset. This was paralleled by a reduction of target detection rates in trials with emotional pictures ~400 ms after picture offset. Our finding that briefly presented emotional distractors are able to bias attention well after their offset provides evidence for a rather slow feedback or reentrant neural competition mechanism for emotional distractors that continues after the offset of the emotional stimulus.

  5. Contributions of visual and embodied expertise to body perception.

    PubMed

    Reed, Catherine L; Nyberg, Andrew A; Grubb, Jefferson D

    2012-01-01

    Recent research has demonstrated that our perception of the human body differs from that of inanimate objects. This study investigated whether the visual perception of the human body differs from that of other animate bodies and, if so, whether that difference could be attributed to visual experience and/or embodied experience. To dissociate differential effects of these two types of expertise, inversion effects (recognition of inverted stimuli is slower and less accurate than recognition of upright stimuli) were compared for two types of bodies in postures that varied in typicality: humans in human postures (human-typical), humans in dog postures (human-atypical), dogs in dog postures (dog-typical), and dogs in human postures (dog-atypical). Inversion disrupts global configural processing. Relative changes in the size and presence of inversion effects reflect changes in visual processing. Both visual and embodiment expertise predict larger inversion effects for human over dog postures because we see humans more and we have experience producing human postures. However, our design that crosses body type and typicality leads to distinct predictions for visual and embodied experience. Visual expertise predicts an interaction between typicality and orientation: greater inversion effects should be found for typical over atypical postures regardless of body type. Alternatively, embodiment expertise predicts a body, typicality, and orientation interaction: larger inversion effects should be found for all human postures but only for atypical dog postures because humans can map their bodily experience onto these postures. Accuracy data supported embodiment expertise with the three-way interaction. However, response-time data supported contributions of visual expertise with larger inversion effects for typical over atypical postures. Thus, both types of expertise affect the visual perception of bodies.

  6. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    PubMed

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper constructs a model of information perception that exploits the fly's fast and accurate small target detection in complex, varying natural environments. The proposed model forms a theoretical basis for small target detection in high-resolution remote sensing data. After comparing the prevailing simulation mechanisms of fly visual systems, we propose a fly-imitated visual information processing method for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the fly visual system's mechanisms of information acquisition, compression, and fusion, the function of its pool cells, and its nonlinear self-adaptation. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.

  7. Change blindness and visual memory: visual representations get rich and act poor.

    PubMed

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  8. The accuracy of confrontation visual field test in comparison with automated perimetry.

    PubMed Central

    Johnson, L. N.; Baloh, F. G.

    1991-01-01

    The accuracy of confrontation visual field testing was determined for 512 visual fields using automated static perimetry as the reference standard. The sensitivity of confrontation testing excluding patchy defects was 40% for detecting anterior visual field defects, 68.3% for posterior defects, and 50% for both anterior and posterior visual field defects combined. The sensitivity within each group varied depending on the type of visual field defect encountered. Confrontation testing had a high sensitivity (75% to 100%) for detecting altitudinal visual loss, central/centrocecal scotoma, and homonymous hemianopsia. Confrontation testing was fairly insensitive (20% to 50% sensitivity) for detecting arcuate scotoma and bitemporal hemianopsia. The specificity of confrontation testing was high at 93.4%. The high positive predictive value (72.6%) and negative predictive value (75.7%) would indicate that visual field defects identified during confrontation testing are often true visual field defects. However, the many limitations of confrontation testing should be remembered, particularly its low sensitivity for detecting visual field loss associated with parasellar tumors, glaucoma, and compressive optic neuropathies. PMID:1800764

  9. Visual Detection Under Uncertainty Operates Via an Early Static, Not Late Dynamic, Non-Linearity

    PubMed Central

    Neri, Peter

    2010-01-01

    Signals in the environment are rarely specified exactly: our visual system may know what to look for (e.g., a specific face), but not its exact configuration (e.g., where in the room, or in what orientation). Uncertainty, and the ability to deal with it, is a fundamental aspect of visual processing. The MAX model is the current gold standard for describing how human vision handles uncertainty: of all possible configurations for the signal, the observer chooses the one corresponding to the template associated with the largest response. We propose an alternative model in which the MAX operation, which is a dynamic non-linearity (depends on multiple inputs from several stimulus locations) and happens after the input stimulus has been matched to the possible templates, is replaced by an early static non-linearity (depends only on one input corresponding to one stimulus location) which is applied before template matching. By exploiting an integrated set of analytical and experimental tools, we show that this model is able to account for a number of empirical observations otherwise unaccounted for by the MAX model, and is more robust with respect to the realistic limitations imposed by the available neural hardware. We then discuss how these results, currently restricted to a simple visual detection task, may extend to a wider range of problems in sensory processing. PMID:21212835
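
    In schematic form (our notation, not necessarily the authors'), the two observers can be contrasted as follows, where s(x) is the stimulus, t_i are the templates for the M possible signal configurations, and f is a static (pointwise) accelerating nonlinearity:

        r_{\mathrm{MAX}}    = \max_{i=1,\dots,M} \sum_{x} t_i(x)\, s(x)
        r_{\mathrm{static}} = \sum_{x} t(x)\, f\!\big(s(x)\big)

    In the MAX observer the nonlinearity acts late and depends jointly on the responses to all candidate templates, whereas in the alternative model the nonlinearity is applied to each stimulus location on its own, before a single template match.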

  10. Differential contribution of early visual areas to the perceptual process of contour processing.

    PubMed

    Schira, Mark M; Fahle, Manfred; Donner, Tobias H; Kraft, Antje; Brandt, Stephan A

    2004-04-01

    We investigated contour processing and figure-ground detection within human retinotopic areas using event-related functional magnetic resonance imaging (fMRI) in 6 healthy and naïve subjects. A figure (6 degrees side length) was created by a 2nd-order texture contour. An independent and demanding foveal letter-discrimination task prevented subjects from noticing this more peripheral contour stimulus. The contour subdivided our stimulus into a figure and a ground. Using localizers and retinotopic mapping stimuli we were able to subdivide each early visual area into 3 eccentricity regions corresponding to 1) the central figure, 2) the area along the contour, and 3) the background. In these subregions we investigated the hemodynamic responses to our stimuli and compared responses with or without the contour defining the figure. No contour-related blood oxygenation level-dependent modulation in early visual areas V1, V3, VP, and MT+ was found. Significant signal modulation in the contour subregions of V2v, V2d, V3a, and LO occurred. This activation pattern was different from comparable studies, which might be attributable to the letter-discrimination task reducing confounding attentional modulation. In V3a, but not in any other retinotopic area, signal modulation corresponding to the central figure could be detected. Such contextual modulation will be discussed in light of the recurrent processing hypothesis and the role of visual awareness.

  11. Structural and Functional Correlates of Visual Field Asymmetry in the Human Brain by Diffusion Kurtosis MRI and Functional MRI

    PubMed Central

    O’Connell, Caitlin; Ho, Leon C.; Murphy, Matthew C.; Conner, Ian P.; Wollstein, Gadi; Cham, Rakie; Chan, Kevin C.

    2016-01-01

    Human visual performance has been observed to exhibit superiority in localized regions of the visual field across many classes of stimuli. However, the underlying neural mechanisms remain unclear. This study aims to determine if the visual information processing in the human brain is dependent on the location of stimuli in the visual field and the corresponding neuroarchitecture using blood-oxygenation-level-dependent functional MRI (fMRI) and diffusion kurtosis MRI (DKI), respectively in 15 healthy individuals at 3 Tesla. In fMRI, visual stimulation to the lower hemifield showed stronger brain responses and larger brain activation volumes than the upper hemifield, indicative of the differential sensitivity of the human brain across the visual field. In DKI, the brain regions mapping to the lower visual field exhibited higher mean kurtosis but not fractional anisotropy or mean diffusivity when compared to the upper visual field. These results suggested the different distributions of microstructural organization across visual field brain representations. There was also a strong positive relationship between diffusion kurtosis and fMRI responses in the lower field brain representations. In summary, this study suggested the structural and functional brain involvements in the asymmetry of visual field responses in humans, and is important to the neurophysiological and psychological understanding of human visual information processing. PMID:27631541

  12. Functional Relationships for Investigating Cognitive Processes

    PubMed Central

    Wright, Anthony A.

    2013-01-01

    Functional relationships (from systematic manipulation of critical variables) are advocated for revealing fundamental processes of (comparative) cognition—through examples from my work in psychophysics, learning, and memory. Functional relationships for pigeon wavelength (hue) discrimination revealed best discrimination at the spectral points of hue transition for pigeons—a correspondence (i.e., functional relationship) similar to that for humans. Functional relationships for learning revealed: Item-specific or relational learning in matching to sample as a function of the pigeons’ sample-response requirement, and same/different abstract-concept learning as a function of the training set size for rhesus monkeys, capuchin monkeys, and pigeons. Functional relationships for visual memory revealed serial position functions (a 1st order functional relationship) that changed systematically with retention delay (a 2nd order relationship) for pigeons, capuchin monkeys, rhesus monkeys, and humans. Functional relationships for rhesus-monkey auditory memory also revealed systematic changes in serial position functions with delay, but these changes were opposite to those for visual memory. Functional relationships for proactive interference revealed interference that varied as a function of a ratio of delay times. Functional relationships for change detection memory revealed (qualitative) similarities and (quantitative) differences in human and monkey visual short term memory as a function of the number of memory items. It is concluded that these findings were made possible by varying critical variables over a substantial portion of the manipulable range to generate functions and derive relationships. PMID:23174335

  13. How cortical neurons help us see: visual recognition in the human brain

    PubMed Central

    Blumberg, Julie; Kreiman, Gabriel

    2010-01-01

    Through a series of complex transformations, the pixel-like input to the retina is converted into rich visual perceptions that constitute an integral part of visual recognition. Multiple visual problems arise due to damage or developmental abnormalities in the cortex of the brain. Here, we provide an overview of how visual information is processed along the ventral visual cortex in the human brain. We discuss how neurophysiological recordings in macaque monkeys and in humans can help us understand the computations performed by visual cortex. PMID:20811161

  14. Switching on fluorescence for selective visual recognition of naringenin and morin with a metal-organic coordination polymer of Zn(bix) [bix = 1,4-bis(imidazol-1-ylmethyl)benzene]

    NASA Astrophysics Data System (ADS)

    Zhao, Xi Juan; Wang, Hui Juan; Liang, Li Jiao; Li, Yuan Fang

    2013-02-01

    Flavonoids such as naringenin and morin are ubiquitous in a wide range of plant-derived foods and have diverse effects on plants and even on human health. Here, we establish a selective visual method for the recognition of naringenin and morin based on the "switched-on" fluorescence induced by a metal-organic coordination polymer of Zn(bix) [bix = 1,4-bis(imidazol-1-ylmethyl)benzene]. Owing to the coordination of naringenin and morin with Zn(II) in the polymeric structure of Zn(bix), their free conformational rotation is restricted, leading to relatively rigid structures and, as a consequence, switched-on fluorescence. Luteolin and quercetin, which have structures very similar to naringenin and morin, show no such fluorescence enhancement, most likely owing to the 3'-hydroxy substitution in the B ring. Under a 365 nm UV lamp, naringenin and morin can be visually recognized and discriminated from each other, and from luteolin and quercetin, based on the colors of their emission. With this recognition system, naringenin and morin were detected in human urine with satisfactory results.

  15. Gravity Cues Embedded in the Kinematics of Human Motion Are Detected in Form-from-Motion Areas of the Visual System and in Motor-Related Areas

    PubMed Central

    Cignetti, Fabien; Chabeauti, Pierre-Yves; Menant, Jasmine; Anton, Jean-Luc J. J.; Schmitz, Christina; Vaugoyeau, Marianne; Assaiante, Christine

    2017-01-01

    The present study investigated the cortical areas engaged in the perception of graviceptive information embedded in biological motion (BM). To this end, functional magnetic resonance imaging was used to assess the cortical areas active during the observation of human movements performed under normogravity and microgravity (parabolic flight). Movements were defined by motion cues alone using point-light displays. We found that gravity modulated the activation of a restricted set of regions of the network subtending BM perception, including form-from-motion areas of the visual system (kinetic occipital region, lingual gyrus, cuneus) and motor-related areas (primary motor and somatosensory cortices). These findings suggest that compliance of observed movements with normal gravity was carried out by mapping them onto the observer’s motor system and by extracting their overall form from local motion of the moving light points. We propose that judgment on graviceptive information embedded in BM can be established based on motor resonance and visual familiarity mechanisms and not necessarily by accessing the internal model of gravitational motion stored in the vestibular cortex. PMID:28861024

  16. Audiovisual Delay as a Novel Cue to Visual Distance.

    PubMed

    Jaekl, Philip; Seidlitz, Jakob; Harris, Laurence R; Tadin, Duje

    2015-01-01

    For audiovisual sensory events, sound arrives with a delay relative to light that increases with event distance. It is unknown, however, whether humans can use these ubiquitous sound delays as an information source for distance computation. Here, we tested the hypothesis that audiovisual delays can both bias and improve human perceptual distance discrimination, such that visual stimuli paired with auditory delays are perceived as more distant and are thereby an ordinal distance cue. In two experiments, participants judged the relative distance of two repetitively displayed three-dimensional dot clusters, both presented with sounds of varying delays. In the first experiment, dot clusters presented with a sound delay were judged to be more distant than dot clusters paired with equivalent sound leads. In the second experiment, we confirmed that the presence of a sound delay was sufficient to cause stimuli to appear as more distant. Additionally, we found that ecologically congruent pairing of more distant events with a sound delay resulted in an increase in the precision of distance judgments. A control experiment determined that the sound delay duration influencing these distance judgments was not detectable, thereby eliminating decision-level influence. In sum, we present evidence that audiovisual delays can be an ordinal cue to visual distance.
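
    For orientation on the size of this cue (a back-of-the-envelope figure, not a value taken from the study): with the speed of sound c ≈ 343 m/s in air and light arriving effectively instantaneously, the delay is Δt ≈ d / c, i.e. roughly 2.9 ms per metre of event distance, so an event 10 m away lags by about 29 ms and one at 50 m by about 146 ms.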

  17. Differential Visual Processing of Animal Images, with and without Conscious Awareness

    PubMed Central

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A.; Melcher, David

    2016-01-01

    The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that “invisible” stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the “unseen” condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask. PMID:27790106

  18. Differential Visual Processing of Animal Images, with and without Conscious Awareness.

    PubMed

    Zhu, Weina; Drewes, Jan; Peatfield, Nicholas A; Melcher, David

    2016-01-01

    The human visual system can quickly and efficiently extract categorical information from a complex natural scene. The rapid detection of animals in a scene is one compelling example of this phenomenon, and it suggests the automatic processing of at least some types of categories with little or no attentional requirements (Li et al., 2002, 2005). The aim of this study is to investigate whether the remarkable capability to categorize complex natural scenes exists in the absence of awareness, based on recent reports that "invisible" stimuli, which do not reach conscious awareness, can still be processed by the human visual system (Pasley et al., 2004; Williams et al., 2004; Fang and He, 2005; Jiang et al., 2006, 2007; Kaunitz et al., 2011a). In two experiments, we recorded event-related potentials (ERPs) in response to animal and non-animal/vehicle stimuli in both aware and unaware conditions in a continuous flash suppression (CFS) paradigm. Our results indicate that even in the "unseen" condition, the brain responds differently to animal and non-animal/vehicle images, consistent with rapid activation of animal-selective feature detectors prior to, or outside of, suppression by the CFS mask.

  19. Simultaneous diffuse near-infrared imaging of hemodynamic and oxygenation changes and electroencephalographic measurements of neuronal activity in the human brain

    NASA Astrophysics Data System (ADS)

    Noponen, Tommi; Kicic, Dubravko; Kotilahti, Kalle; Kajava, Timo; Kahkonen, Seppo; Nissila, Ilkka; Merilainen, Pekka; Katila, Toivo

    2005-04-01

    Visually evoked hemodynamic responses and potentials were measured simultaneously in three subjects using a 16-channel optical imaging instrument and a 60-channel electroencephalography instrument during normo-, hypo- and hypercapnia. Flashing and pattern-reversal checkerboard stimuli were used. The study protocol included counterbalanced measurements during both normo- and hypocapnia and during normo- and hypercapnia. Hypocapnia was produced by controlled hyperventilation and hypercapnia by breathing carbon dioxide-enriched air. Near-infrared imaging was also used to monitor the concentration changes of oxy- and deoxyhaemoglobin due to hypo- and hypercapnia. Hemodynamic responses and evoked potentials were successfully detected above the visual cortex for each subject. The latencies of the hemodynamic responses were shorter during hypocapnia and longer during hypercapnia than during normocapnia. Hypocapnia tended to decrease the latencies of the visually evoked potentials compared with normocapnia, whereas hypercapnia had no consistent effect on the potentials. The developed measurement setup and study protocol provide the opportunity to investigate neurovascular coupling and the links between the baseline level of blood flow, electrical activity and hemodynamic responses in the human brain.

  20. Visualization and quantitative analysis of extrachromosomal telomere-repeat DNA in individual human cells by Halo-FISH

    PubMed Central

    Komosa, Martin; Root, Heather; Meyn, M. Stephen

    2015-01-01

    Current methods for characterizing extrachromosomal nuclear DNA in mammalian cells do not permit single-cell analysis, are often semi-quantitative and frequently biased toward the detection of circular species. To overcome these limitations, we developed Halo-FISH to visualize and quantitatively analyze extrachromosomal DNA in single cells. We demonstrate Halo-FISH by using it to analyze extrachromosomal telomere-repeat (ECTR) in human cells that use the Alternative Lengthening of Telomeres (ALT) pathway(s) to maintain telomere lengths. We find that GM847 and VA13 ALT cells average ∼80 detectable G/C-strand ECTR DNA molecules/nucleus, while U2OS ALT cells average ∼18 molecules/nucleus. In comparison, human primary and telomerase-positive cells contain <5 ECTR DNA molecules/nucleus. ECTR DNA in ALT cells exhibit striking cell-to-cell variations in number (<20 to >300), range widely in length (<1 to >200 kb) and are composed of primarily G- or C-strand telomere-repeat DNA. Halo-FISH enables, for the first time, the simultaneous analysis of ECTR DNA and chromosomal telomeres in a single cell. We find that ECTR DNA comprises ∼15% of telomere-repeat DNA in GM847 and VA13 cells, but <4% in U2OS cells. In addition to its use in ALT cell analysis, Halo-FISH can facilitate the study of a wide variety of extrachromosomal DNA in mammalian cells. PMID:25662602

  1. Visual Sensitivities and Discriminations and Their Roles in Aviation.

    DTIC Science & Technology

    1986-03-01

    D. Low contrast letter charts in early diabetic retinopathy, ocular hypertension, glaucoma and Parkinson’s disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were

  2. Face Processing: Models For Recognition

    NASA Astrophysics Data System (ADS)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
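
    This is the line of work behind the well-known eigenface approach; a minimal PCA-based recognition sketch in that spirit is given below. The gallery, image size, number of components and nearest-neighbour decision rule are illustrative assumptions, not the paper's exact pipeline.

        # Minimal eigenface-style sketch: PCA on vectorized face images, then
        # nearest-neighbour matching in the low-dimensional subspace.
        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in gallery: 40 "face" images of size 32x32, 10 identities x 4 images.
        gallery = rng.normal(size=(40, 32 * 32))
        identities = np.repeat(np.arange(10), 4)

        # PCA via SVD on mean-centred images; keep the top k eigenfaces.
        mean_face = gallery.mean(axis=0)
        U, S, Vt = np.linalg.svd(gallery - mean_face, full_matrices=False)
        k = 16
        eigenfaces = Vt[:k]                         # (k, 1024) principal components

        def project(img_vec):
            return (img_vec - mean_face) @ eigenfaces.T

        gallery_coords = project(gallery)           # coordinates of each gallery image

        def identify(probe_vec):
            """Return the identity of the nearest gallery image in eigenface space."""
            d = np.linalg.norm(gallery_coords - project(probe_vec), axis=1)
            return identities[np.argmin(d)]

        probe = gallery[7] + 0.1 * rng.normal(size=32 * 32)  # noisy copy of image 7
        print(identify(probe))                               # most likely identity 1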

  3. Operational Based Vision Assessment Cone Contrast Test: Description and Operation

    DTIC Science & Technology

    2016-06-01

    designed to detect abnormalities and characterize the contrast sensitivity of the color mechanisms of the human visual system. The OBVA CCT will...than 1, the individual is determined to have an abnormal L-M mechanism. The L-M sensitivity of mildly abnormal individuals (anomalous trichromats...response pads. This hardware is integrated with custom software that generates the stimuli, collects responses, and analyzes the results as outlined in

  4. Measuring the visual salience of alignments by their non-accidentalness.

    PubMed

    Blusseau, S; Carboni, A; Maiche, A; Morel, J M; Grompone von Gioi, R

    2016-09-01

    Quantitative approaches are part of the understanding of contour integration and the Gestalt law of good continuation. The present study introduces a new quantitative approach based on the a contrario theory, which formalizes the non-accidentalness principle for good continuation. This model yields an ideal observer algorithm, able to detect non-accidental alignments in Gabor patterns. More precisely, this parameterless algorithm associates with each candidate percept a measure, the Number of False Alarms (NFA), quantifying its degree of masking. To evaluate the approach, we compared this ideal observer with the human attentive performance on three experiments of straight contours detection in arrays of Gabor patches. The experiments showed a strong correlation between the detectability of the target stimuli and their degree of non-accidentalness, as measured by our model. What is more, the algorithm's detection curves were very similar to the ones of human subjects. This fact seems to validate our proposed measurement method as a convenient way to predict the visibility of alignments. This framework could be generalized to other Gestalts. Copyright © 2015 Elsevier Ltd. All rights reserved.
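
    The NFA idea can be illustrated with a generic a contrario computation (not the paper's exact alignment model): if k of the n Gabor elements along a candidate line fall within the alignment tolerance, and each would do so with probability p under the background (noise) model, the binomial tail multiplied by the number of candidates tested gives the NFA; values well below 1 flag a non-accidental, hence salient, percept. The values of p, k, n and the candidate count below are assumptions.

        # Generic a contrario Number of False Alarms (NFA) -- an illustration of the
        # principle, not the paper's exact alignment model; all numbers are assumed.
        from scipy.stats import binom

        def nfa(n_tests, n, k, p):
            """NFA = (number of candidates tested) * P[at least k of n hits | noise]."""
            return n_tests * binom.sf(k - 1, n, p)

        # 8 of 10 elements aligned within a strict tolerance (chance level p = 0.1),
        # with 10,000 candidate alignments tested in the array:
        print(nfa(n_tests=10_000, n=10, k=8, p=0.1))   # << 1: very unlikely by chance

        # Same count of aligned elements under a lax tolerance (p = 0.5):
        print(nfa(n_tests=10_000, n=10, k=8, p=0.5))   # >> 1: consistent with noise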

  5. Creating Concepts from Converging Features in Human Cortex

    PubMed Central

    Coutanche, Marc N.; Thompson-Schill, Sharon L.

    2015-01-01

    To make sense of the world around us, our brain must remember the overlapping features of millions of objects. Crucially, it must also represent each object's unique feature-convergence. Some theories propose that an integration area (or “convergence zone”) binds together separate features. We report an investigation of our knowledge of objects' features and identity, and the link between them. We used functional magnetic resonance imaging to record neural activity, as humans attempted to detect a cued fruit or vegetable in visual noise. Crucially, we analyzed brain activity before a fruit or vegetable was present, allowing us to interrogate top-down activity. We found that pattern-classification algorithms could be used to decode the detection target's identity in the left anterior temporal lobe (ATL), its shape in lateral occipital cortex, and its color in right V4. A novel decoding-dependency analysis revealed that identity information in left ATL was specifically predicted by the temporal convergence of shape and color codes in early visual regions. People with stronger feature-and-identity dependencies had more similar top-down and bottom-up activity patterns. These results fulfill three key requirements for a neural convergence zone: a convergence result (object identity), ingredients (color and shape), and the link between them. PMID:24692512
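
    As a point of reference for the "pattern-classification algorithms" mentioned above, the snippet below sketches a generic decoding analysis (linear classifier with cross-validation). It is not the authors' pipeline; the array shapes, the random placeholder data, and the two-class labels are invented purely for illustration.

```python
# Generic MVPA-style decoding sketch: can a linear classifier predict the cued
# target identity from pre-stimulus ROI activity patterns? Placeholder data only.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))        # 120 trials x 500 voxels (hypothetical ROI)
y = rng.integers(0, 2, size=120)           # cued identity: 0 = one item, 1 = the other

clf = LinearSVC(C=1.0, max_iter=10_000)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())                       # chance level is 0.5 for two balanced classes
```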

  6. A human visual based binarization technique for histological images

    NASA Astrophysics Data System (ADS)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems such as high variability in human anatomy and variation across modalities. Recent advances in computer-aided diagnosis of histological images help facilitate the detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.
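
    The paper's own adaptive method is not described here in enough detail to reproduce, but the baselines it is compared against are standard. The sketch below runs three of them (Otsu, Niblack, Sauvola) with scikit-image; the file name and the window-size/k parameters are placeholders.

```python
# Baseline binarization of an H&E-stained image with three classic methods.
# Requires scikit-image; "stained_tissue.png" is a hypothetical input file.
from skimage import io, color
from skimage.filters import threshold_otsu, threshold_niblack, threshold_sauvola

gray = color.rgb2gray(io.imread("stained_tissue.png"))

binary_otsu = gray > threshold_otsu(gray)                                # one global threshold
binary_niblack = gray > threshold_niblack(gray, window_size=25, k=0.8)   # local mean/std threshold
binary_sauvola = gray > threshold_sauvola(gray, window_size=25)          # Niblack variant, less noisy

# Stained nuclei are darker than the background, so invert the mask when segmenting them.
nuclei_mask = ~binary_otsu
```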

  7. Testing the snake-detection hypothesis: larger early posterior negativity in humans to pictures of snakes than to pictures of other reptiles, spiders and slugs

    PubMed Central

    Van Strien, Jan W.; Franken, Ingmar H. A.; Huijding, Jorg

    2014-01-01

    According to the snake detection hypothesis (Isbell, 2006), fear specifically of snakes may have pushed evolutionary changes in the primate visual system allowing pre-attentional visual detection of fearful stimuli. A previous study demonstrated that snake pictures, when compared to spiders or bird pictures, draw more early attention as reflected by larger early posterior negativity (EPN). Here we report two studies that further tested the snake detection hypothesis. In Study 1, we tested whether the enlarged EPN is specific for snakes or also generalizes to other reptiles. Twenty-four healthy, non-phobic women watched the random rapid serial presentation of snake, crocodile, and turtle pictures. The EPN was scored as the mean activity at occipital electrodes (PO3, O1, Oz, PO4, O2) in the 225–300 ms time window after picture onset. The EPN was significantly larger for snake pictures than for pictures of the other reptiles. In Study 2, we tested whether disgust plays a role in the modulation of the EPN and whether preferential processing of snakes also can be found in men. 12 men and 12 women watched snake, spider, and slug pictures. Both men and women exhibited the largest EPN amplitudes to snake pictures, intermediate amplitudes to spider pictures and the smallest amplitudes to slug pictures. Disgust ratings were not associated with EPN amplitudes. The results replicate previous findings and suggest that ancestral priorities modulate the early capture of visual attention. PMID:25237303
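
    To make the EPN scoring concrete, the sketch below averages amplitude over the listed occipital electrodes in the 225-300 ms window from already-epoched data. It is a minimal sketch, not the authors' analysis code; the variables `epochs`, `ch_names`, `times`, and `labels` are assumed inputs.

```python
# EPN-style scoring: mean amplitude over occipital channels, 225-300 ms post-onset.
# Assumes epochs: (n_trials, n_channels, n_samples) NumPy array, times: sample times
# in seconds, ch_names: channel order, labels: stimulus category per trial.
import numpy as np

occipital = ["PO3", "O1", "Oz", "PO4", "O2"]
ch_idx = [ch_names.index(ch) for ch in occipital]
t_idx = (times >= 0.225) & (times <= 0.300)

epn_per_trial = epochs[:, ch_idx][:, :, t_idx].mean(axis=(1, 2))  # one value per trial

for category in ("snake", "spider", "slug"):
    mask = np.asarray(labels) == category
    print(category, epn_per_trial[mask].mean())
```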

  8. Feasibility Study of Inexpensive Thermal Sensors and Small Uas Deployment for Living Human Detection in Rescue Missions Application Scenarios

    NASA Astrophysics Data System (ADS)

    Levin, E.; Zarnowski, A.; McCarty, J. L.; Bialas, J.; Banaszek, A.; Banaszek, S.

    2016-06-01

    Significant efforts are invested by rescue agencies worldwide to save human lives during natural and man-made emergencies, including those that happen in wilderness locations. These emergencies include, but are not limited to, accidents involving alpinists, mountain skiers, and hikers lost in remote areas. Sometimes hundreds of first responders are involved in a rescue operation to save a single human life. There are two critical issues where geospatial imaging can be a very useful asset in supporting rescue operations: 1) detecting a human and 2) confirming that the detected human is alive. An international group of researchers from the United States and Poland collaborated on a pilot research project to assess the feasibility of using small unmanned aerial vehicles (SUAVs) and inexpensive forward looking infrared (FLIR) sensors for human detection and live-human confirmation. Equipment cost for both research teams was below 8,000, including a 3DR quadrotor UAV and a Lepton longwave infrared (LWIR) imager costing around 250 (US team), and a DJI Inspire 1 UAS with a commercial Tamarisc-320 thermal camera (Polish team). The two groups performed independent experiments in the USA and Poland and shared the ground-based and airborne electro-optical and FLIR imagery collected. In these experiments, dead bodies were emulated with medical training dummies, and real humans were placed nearby as live subjects. The electro-optical imagery was used to investigate optimal human detection algorithms. Furthermore, because a dead body reaches the temperature of the surrounding environment after several hours, the experiments had to optimize the SUAV data acquisition, i.e., the distance from the SUAV to the object at which the FLIR sensor can still distinguish temperature differences between a dummy and a real human. Our experiments indicated the feasibility of using SUAVs and small thermal sensors for the human detection scenarios described above. The temperature differences captured by the deployed imaging platform are visually interpretable in the FLIR images. Moreover, we applied ENVI image processing functions for calibration and numerical estimation of these temperature differences. Potential additional system functionalities include voice messages from rescue teams and even remote medication delivery to the victims of the described emergencies. This paper describes the experiments, processing results, and future research in more detail.
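
    Purely as an illustration of the temperature-difference reasoning above (not the ENVI workflow used in the study), the snippet below flags regions that sit a few degrees above the ambient estimate in a radiometric thermal frame. The array `thermal`, the 5 °C margin, and the minimum blob size are all assumptions.

```python
# Naive warm-region detection in a radiometric thermal frame.
# thermal: 2-D NumPy array of per-pixel temperatures in degrees Celsius (assumed input).
import numpy as np
from scipy import ndimage

background = np.median(thermal)              # crude ambient-temperature estimate
candidates = thermal > background + 5.0      # pixels at least 5 degrees above ambient

labeled, n_regions = ndimage.label(candidates)
sizes = ndimage.sum(candidates, labeled, index=range(1, n_regions + 1))
warm_blobs = [i + 1 for i, s in enumerate(sizes) if s > 50]   # drop tiny speckles

print(f"{len(warm_blobs)} candidate warm regions (possible live humans)")
```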

  9. Evaluation of three commercial rapid tests for detecting antibodies to human immunodeficiency virus.

    PubMed

    Ng, K P; Saw, T L; Baki, A; Kamarudin, R

    2003-08-01

    Determine HIV-1/2, Chembio HIV-1/2 STAT-PAK and PenTest are simple, rapid tests for the detection of antibodies to HIV-1 and HIV-2 in human whole blood, serum and plasma samples. The assays are one-step, and the result is read visually within 15 minutes. Using 92 known HIV-1 reactive sera and 108 known HIV-1 negative sera, the 3 HIV tests correctly identified all the known HIV-1 reactive and negative samples. The results indicated that Determine HIV-1/2, Chembio HIV-1/2 STAT-PAK and PenTest HIV are as sensitive and specific (100% concordance) as the Microparticle Enzyme Immunoassay. The data indicate that these 3 HIV tests are effective testing systems for the diagnosis of HIV infection in situations where the conventional Enzyme Immunoassay is not suitable.

  10. Multispectral THz-VIS passive imaging system for hidden threats visualization

    NASA Astrophysics Data System (ADS)

    Kowalski, Marcin; Palka, Norbert; Szustakowski, Mieczyslaw

    2013-10-01

    Terahertz imaging is the latest entry into the crowded field of imaging technologies, and many applications are emerging for the relatively new technology. THz radiation penetrates deep into nonpolar and nonmetallic materials such as paper, plastic, clothes, wood, and ceramics that are usually opaque at optical wavelengths. T-rays have great potential in the field of hidden-object detection because they are not harmful to humans. The main difficulty with THz imaging systems is low image quality, so it is justified to combine THz images with high-resolution images from a visible camera. An imaging system is usually composed of various subsystems, and many imaging systems use devices working in various spectral ranges. Our goal is to build a system, harmless to humans, for screening and detection of hidden objects using THz and VIS cameras.

  11. The sequence of cortical activity inferred by response latency variability in the human ventral pathway of face processing.

    PubMed

    Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan

    2018-04-11

    Variability in neuronal response latency has typically been attributed to random noise. Previous studies of single cells and large neuronal populations have shown that this temporal variability tends to increase along the visual pathway. Inspired by these studies, we hypothesized that functional areas at later stages in the visual pathway of face processing would show larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine fissure to the fusiform gyrus was detected more reliably from the size of the response variability than from the timing of the maximal response peaks. With two areas in the ventral visual pathway, we show that variability in response latency across brain areas can be used to infer the sequence of cortical activity.

  12. Mass Spectrometry-Based Visualization of Molecules Associated with Human Habitats.

    PubMed

    Petras, Daniel; Nothias, Louis-Félix; Quinn, Robert A; Alexandrov, Theodore; Bandeira, Nuno; Bouslimani, Amina; Castro-Falcón, Gabriel; Chen, Liangyu; Dang, Tam; Floros, Dimitrios J; Hook, Vivian; Garg, Neha; Hoffner, Nicole; Jiang, Yike; Kapono, Clifford A; Koester, Irina; Knight, Rob; Leber, Christopher A; Ling, Tie-Jun; Luzzatto-Knaan, Tal; McCall, Laura-Isobel; McGrath, Aaron P; Meehan, Michael J; Merritt, Jonathan K; Mills, Robert H; Morton, Jamie; Podvin, Sonia; Protsyuk, Ivan; Purdy, Trevor; Satterfield, Kendall; Searles, Stephen; Shah, Sahil; Shires, Sarah; Steffen, Dana; White, Margot; Todoric, Jelena; Tuttle, Robert; Wojnicz, Aneta; Sapp, Valerie; Vargas, Fernando; Yang, Jin; Zhang, Chao; Dorrestein, Pieter C

    2016-11-15

    The cars we drive, the homes we live in, the restaurants we visit, and the laboratories and offices we work in are all a part of the modern human habitat. Remarkably, little is known about the diversity of chemicals present in these environments and to what degree molecules from our bodies influence the built environment that surrounds us, and vice versa. We therefore set out to visualize the chemical diversity of five built human habitats together with their occupants, to provide a snapshot of the various molecules to which humans are exposed on a daily basis. The molecular inventory was obtained through untargeted liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis of samples from each human habitat and from the people who occupy those habitats. Mapping MS-derived data onto 3D models of the environments showed that frequently touched surfaces, such as handles (e.g., door, bicycle), resemble the molecular fingerprint of the human skin more closely than surfaces that are less frequently in direct contact with humans (e.g., wall, bicycle frame). Approximately 50% of the MS/MS spectra detected were shared between people and the environment. Personal care products, plasticizers, cleaning supplies, food, food additives, and even medications were found to be a part of the human habitat. The annotations indicate that significant transfer of chemicals takes place between us and our built environment. The workflows applied here will lay the foundation for future studies of molecular distributions in medical, forensic, architectural, space exploration, and environmental applications.

  13. Human Visual System as a Double-Slit Single Photon Interference Sensor: A Comparison between Modellistic and Biophysical Tests

    PubMed Central

    Pizzi, Rita; Wang, Rui; Rossetti, Danilo

    2016-01-01

    This paper describes a computational approach to the theoretical problems involved in the Young's single-photon double-slit experiment, focusing on a simulation of this experiment in the absence of measuring devices. Specifically, the human visual system is used in place of a photomultiplier or similar apparatus. Beginning with the assumption that the human eye perceives light in the presence of very few photons, we measure human eye performance as a sensor in a double-slit one-photon-at-a-time experimental setup. To interpret the results, we implement a simulation algorithm and compare its results with those of human subjects under identical experimental conditions. In order to evaluate the perceptive parameters exactly, which vary depending on the light conditions and on the subject’s sensitivity, we first review the existing literature on the biophysics of the human eye in the presence of a dim light source, and then use the known values of the experimental variables to set the parameters of the computational simulation. The results of the simulation and their comparison with the experiment involving human subjects are reported and discussed. It is found that, while the computer simulation indicates that the human eye has the capacity to detect the corpuscular nature of photons under these conditions, this was not observed in practice. The possible reasons for the difference between theoretical prediction and experimental results are discussed. PMID:26816029

  14. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence

    PubMed Central

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-01-01

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain. PMID:27282108

  15. Comparison of deep neural networks to spatio-temporal cortical dynamics of human visual object recognition reveals hierarchical correspondence.

    PubMed

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Torralba, Antonio; Oliva, Aude

    2016-06-10

    The complex multi-stage architecture of cortical visual pathways provides the neural basis for efficient visual object recognition in humans. However, the stage-wise computations therein remain poorly understood. Here, we compared temporal (magnetoencephalography) and spatial (functional MRI) visual brain representations with representations in an artificial deep neural network (DNN) tuned to the statistics of real-world visual recognition. We showed that the DNN captured the stages of human visual processing in both time and space from early visual areas towards the dorsal and ventral streams. Further investigation of crucial DNN parameters revealed that while model architecture was important, training on real-world categorization was necessary to enforce spatio-temporal hierarchical relationships with the brain. Together our results provide an algorithmically informed view on the spatio-temporal dynamics of visual object recognition in the human visual brain.

  16. Oral-Fluid Thiol-Detection Test Identifies Underlying Active Periodontal Disease Not Detected by the Visual Awake Examination.

    PubMed

    Queck, Katherine E; Chapman, Angela; Herzog, Leslie J; Shell-Martin, Tamara; Burgess-Cassler, Anthony; McClure, George David

    Periodontal disease in dogs is highly prevalent but can only be accurately diagnosed by performing an anesthetized oral examination with periodontal probing and dental radiography. In this study, 114 dogs had a visual awake examination of the oral cavity and were administered an oral-fluid thiol-detection test prior to undergoing a full-mouth anesthetized oral examination and digital dental radiographs. The results show that the visual awake examination underestimated the presence and severity of active periodontal disease. The thiol-detection test was superior to the visual awake examination at detecting the presence and severity of active periodontal disease and was an indicator of progression toward alveolar bone loss. The thiol-detection test detected active periodontal disease at early stages of development, before any visual cues were present, indicating the need for intervention to prevent periodontal bone loss. Early detection is important because, without intervention, dogs with gingivitis (active periodontal disease) progress to irreversible periodontal bone loss (stage 2+). As suggested in the current AAHA guidelines, a thiol-detection test administered in conjunction with the visual awake examination during routine wellness examinations facilitates veterinarian-client communication and mitigates under-diagnosis of periodontal disease and underutilization of dental services. The thiol-detection test can be used to monitor the periodontal health status of the conscious patient during follow-up examinations based on disease severity.

  17. Visual analysis of geocoded twin data puts nature and nurture on the map.

    PubMed

    Davis, O S P; Haworth, C M A; Lewis, C M; Plomin, R

    2012-09-01

    Twin studies allow us to estimate the relative contributions of nature and nurture to human phenotypes by comparing the resemblance of identical and fraternal twins. Variation in complex traits is a balance of genetic and environmental influences; these influences are typically estimated at a population level. However, what if the balance of nature and nurture varies depending on where we grow up? Here we use statistical and visual analysis of geocoded data from over 6700 families to show that genetic and environmental contributions to 45 childhood cognitive and behavioral phenotypes vary geographically in the United Kingdom. This has implications for detecting environmental exposures that may interact with the genetic influences on complex traits, and for the statistical power of samples recruited for genetic association studies. More broadly, our experience demonstrates the potential for collaborative exploratory visualization to act as a lingua franca for large-scale interdisciplinary research.

  18. Long-term music training modulates the recalibration of audiovisual simultaneity.

    PubMed

    Jicol, Crescent; Proulx, Michael J; Pollick, Frank E; Petrini, Karin

    2018-07-01

    To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. We asked a group of drummers, a group of non-drummer musicians and a group of non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation with two fixed audiovisual asynchronies. We found that the recalibration for the musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and that it changed with both increased music training and increased perceptual accuracy (i.e., the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.

  19. Tracking the allocation of attention using human pupillary oscillations

    PubMed Central

    Naber, Marnix; Alvarez, George A.; Nakayama, Ken

    2013-01-01

    The muscles that control the pupil are richly innervated by the autonomic nervous system. While there are central pathways that drive pupil dilations in relation to arousal, there is no anatomical evidence that cortical centers involved with visual selective attention innervate the pupil. In this study, we show that such connections must exist. Specifically, we demonstrate a novel Pupil Frequency Tagging (PFT) method, where oscillatory changes in stimulus brightness over time are mirrored by pupil constrictions and dilations. We find that the luminance-induced pupil oscillations are enhanced when covert attention is directed to the flicker stimulus and when targets are correctly detected in an attentional tracking task. These results suggest that the amplitudes of pupil responses closely follow the allocation of focal visual attention and the encoding of stimuli. PFT provides a new opportunity to study top-down visual attention itself as well as identifying the pathways and mechanisms that support this unexpected phenomenon. PMID:24368904
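
    The core measurement in PFT is simply how strongly the pupil trace oscillates at the flicker frequency of the attended stimulus. A minimal sketch of that estimate is shown below, assuming a pupil-diameter time series `pupil`, its sampling rate `fs`, and the tagging frequency `tag_freq` (all hypothetical inputs).

```python
# Estimate the pupil oscillation amplitude at the brightness-flicker ("tag") frequency.
import numpy as np

def tagged_amplitude(pupil, fs, tag_freq):
    pupil = np.asarray(pupil, dtype=float)
    pupil = pupil - pupil.mean()                          # remove the DC offset
    spectrum = np.abs(np.fft.rfft(pupil))
    freqs = np.fft.rfftfreq(pupil.size, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]  # amplitude at the tag frequency

# Comparing this amplitude between "attend flicker" and "attend elsewhere" trials
# would reproduce the attentional enhancement described above.
```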

  20. Ingestible roasted barley for contrast-enhanced photoacoustic imaging in animal and human subjects.

    PubMed

    Wang, Depeng; Lee, Dong Hyeun; Huang, Haoyuan; Vu, Tri; Lim, Rachel Su Ann; Nyayapathi, Nikhila; Chitgupi, Upendra; Liu, Maggie; Geng, Jumin; Xia, Jun; Lovell, Jonathan F

    2018-08-01

    Photoacoustic computed tomography (PACT) is an emerging imaging modality. While many contrast agents have been developed for PACT, these typically cannot immediately be used in humans due to the lengthy regulatory process. We screened two hundred types of ingestible foodstuff samples for photoacoustic contrast with 1064 nm pulse laser excitation, and identified roasted barley as a promising candidate. Twenty brands of roasted barley were further screened to identify the one with the strongest contrast, presumably based on complex chemical modifications incurred during the roasting process. Individual roasted barley particles could be detected through 3.5 cm of chicken-breast tissue and through the whole hand of healthy human volunteers. With PACT, but not ultrasound imaging, a single grain of roasted barley was detected in a field of hundreds of non-roasted particles. Upon oral administration, roasted barley enabled imaging of the gut and peristalsis in mice. Prepared roasted barley tea could be detected through 2.5 cm chicken breast tissue. When barley tea was administered to humans, photoacoustic imaging visualized swallowing dynamics in healthy volunteers. Thus, roasted barley represents an edible foodstuff that should be considered for photoacoustic contrast imaging of swallowing and gut processes, with immediate potential for clinical translation. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. In vivo near-infrared dual-axis confocal microendoscopy in the human lower gastrointestinal tract

    NASA Astrophysics Data System (ADS)

    Piyawattanametha, Wibool; Ra, Hyejun; Qiu, Zhen; Friedland, Shai; Liu, Jonathan T. C.; Loewke, Kevin; Kino, Gordon S.; Solgaard, Olav; Wang, Thomas D.; Mandella, Michael J.; Contag, Christopher H.

    2012-02-01

    Near-infrared confocal microendoscopy is a promising technique for deep in vivo imaging of tissues and can generate high-resolution cross-sectional images at the micron scale. We demonstrate the use of a dual-axis confocal (DAC) near-infrared fluorescence microendoscope with a 5.5-mm outer diameter for obtaining clinical images of human colorectal mucosa. High-speed two-dimensional en face scanning was achieved through a microelectromechanical systems (MEMS) scanner while a micromotor was used for adjusting the axial focus. In vivo images of human patients were collected at 5 frames/sec with a field of view of 362×212 μm² and a maximum imaging depth of 140 μm. During routine endoscopy, indocyanine green (ICG) was topically applied as a nonspecific optical contrast agent to regions of the human colon. The DAC microendoscope was then used to obtain microanatomic images of the mucosa by detecting near-infrared fluorescence from ICG. These results suggest that DAC microendoscopy may have utility for visualizing the anatomical and, perhaps, functional changes associated with colorectal pathology for the early detection of colorectal cancer.

  2. In vivo near-infrared dual-axis confocal microendoscopy in the human lower gastrointestinal tract.

    PubMed

    Piyawattanametha, Wibool; Ra, Hyejun; Qiu, Zhen; Friedland, Shai; Liu, Jonathan T C; Loewke, Kevin; Kino, Gordon S; Solgaard, Olav; Wang, Thomas D; Mandella, Michael J; Contag, Christopher H

    2012-02-01

    Near-infrared confocal microendoscopy is a promising technique for deep in vivo imaging of tissues and can generate high-resolution cross-sectional images at the micron scale. We demonstrate the use of a dual-axis confocal (DAC) near-infrared fluorescence microendoscope with a 5.5-mm outer diameter for obtaining clinical images of human colorectal mucosa. High-speed two-dimensional en face scanning was achieved through a microelectromechanical systems (MEMS) scanner while a micromotor was used for adjusting the axial focus. In vivo images of human patients were collected at 5 frames/sec with a field of view of 362×212 μm² and a maximum imaging depth of 140 μm. During routine endoscopy, indocyanine green (ICG) was topically applied as a nonspecific optical contrast agent to regions of the human colon. The DAC microendoscope was then used to obtain microanatomic images of the mucosa by detecting near-infrared fluorescence from ICG. These results suggest that DAC microendoscopy may have utility for visualizing the anatomical and, perhaps, functional changes associated with colorectal pathology for the early detection of colorectal cancer.

  3. Genome dynamics of the human embryonic kidney 293 lineage in response to cell biology manipulations.

    PubMed

    Lin, Yao-Cheng; Boone, Morgane; Meuris, Leander; Lemmens, Irma; Van Roy, Nadine; Soete, Arne; Reumers, Joke; Moisse, Matthieu; Plaisance, Stéphane; Drmanac, Radoje; Chen, Jason; Speleman, Frank; Lambrechts, Diether; Van de Peer, Yves; Tavernier, Jan; Callewaert, Nico

    2014-09-03

    The HEK293 human cell lineage is widely used in cell biology and biotechnology. Here we use whole-genome resequencing of six 293 cell lines to study the dynamics of this aneuploid genome in response to the manipulations used to generate common 293 cell derivatives, such as transformation and stable clone generation (293T); suspension growth adaptation (293S); and cytotoxic lectin selection (293SG). Remarkably, we observe that copy number alteration detection could identify the genomic region that enabled cell survival under selective conditions (in this case, ricin selection). Furthermore, we present methods to detect human/vector genome breakpoints and a user-friendly visualization tool for the 293 genome data. We also establish that the genome structure composition is in steady state for most of these cell lines when standard cell culturing conditions are used. This resource enables novel and more informed studies with 293 cells, and we will distribute the sequenced cell lines to this effect.

  4. Immunofluorescence Analysis of Endogenous and Exogenous Centromere-kinetochore Proteins

    PubMed Central

    Niikura, Yohei; Kitagawa, Katsumi

    2016-01-01

    "Centromeres" and "kinetochores" refer to the site where chromosomes associate with the spindle during cell division. Direct visualization of centromere-kinetochore proteins during the cell cycle remains a fundamental tool in investigating the mechanism(s) of these proteins. Advanced imaging methods in fluorescence microscopy provide remarkable resolution of centromere-kinetochore components and allow direct observation of specific molecular components of the centromeres and kinetochores. In addition, methods of indirect immunofluorescent (IIF) staining using specific antibodies are crucial to these observations. However, despite numerous reports about IIF protocols, few discussed in detail problems of specific centromere-kinetochore proteins.1-4 Here we report optimized protocols to stain endogenous centromere-kinetochore proteins in human cells by using paraformaldehyde fixation and IIF staining. Furthermore, we report protocols to detect Flag-tagged exogenous CENP-A proteins in human cells subjected to acetone or methanol fixation. These methods are useful in detecting and quantifying endogenous centromere-kinetochore proteins and Flag-tagged CENP-A proteins, including those in human cells. PMID:26967065

  5. 1.56 Terahertz 2-frames per second standoff imaging

    NASA Astrophysics Data System (ADS)

    Goyette, Thomas M.; Dickinson, Jason C.; Linden, Kurt J.; Neal, William R.; Joseph, Cecil S.; Gorveatt, William J.; Waldman, Jerry; Giles, Robert; Nixon, William E.

    2008-02-01

    A terahertz imaging system intended to demonstrate identification of objects concealed under clothing was designed, assembled, and tested. The system design was based on a 2.5 m standoff distance, with a capability of visualizing a 0.5 m by 0.5 m scene at an image rate of 2 frames per second. The system's optical design consisted of a 1.56 THz laser beam, which was raster swept by a dual torsion mirror scanner. The beam was focused onto the scan subject by a stationary 50 cm-diameter focusing mirror. A heterodyne detection technique was used to down-convert the backscattered signal. The system demonstrated a 1.5 cm spot resolution. Human subjects were scanned at a frame rate of 2 frames per second. Hidden metal objects were detected under a jacket worn by the human subject. A movie including data and video images was produced in 1.5 minutes by scanning a human through 180° of azimuth angle in 0.7° increments.

  6. In-vivo Imaging of Magnetic Fields Induced by Transcranial Direct Current Stimulation (tDCS) in Human Brain using MRI

    NASA Astrophysics Data System (ADS)

    Jog, Mayank V.; Smith, Robert X.; Jann, Kay; Dunn, Walter; Lafon, Belen; Truong, Dennis; Wu, Allan; Parra, Lucas; Bikson, Marom; Wang, Danny J. J.

    2016-10-01

    Transcranial direct current stimulation (tDCS) is an emerging non-invasive neuromodulation technique that applies mA currents at the scalp to modulate cortical excitability. Here, we present a novel magnetic resonance imaging (MRI) technique, which detects magnetic fields induced by tDCS currents. This technique is based on Ampere’s law and exploits the linear relationship between direct current and induced magnetic fields. Following validation on a phantom with a known path of electric current and induced magnetic field, the proposed MRI technique was applied to a human limb (to demonstrate in-vivo feasibility using simple biological tissue) and human heads (to demonstrate feasibility in standard tDCS applications). The results show that the proposed technique detects tDCS induced magnetic fields as small as a nanotesla at millimeter spatial resolution. Through measurements of magnetic fields linearly proportional to the applied tDCS current, our approach opens a new avenue for direct in-vivo visualization of tDCS target engagement.
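
    The linearity the method relies on is just Ampère's law. For intuition (a back-of-envelope example, not a figure from the paper), a 1 mA direct current produces a field of roughly 20 nT at a distance of 1 cm from a long straight conductor:

```latex
% Ampere's law, and the field of a long straight conductor carrying direct current I:
\[
\oint_{\partial S} \mathbf{B}\cdot d\boldsymbol{\ell} = \mu_0 I_{\mathrm{enc}},
\qquad
B(r) = \frac{\mu_0 I}{2\pi r}
     = \frac{\left(4\pi\times 10^{-7}\,\mathrm{T\,m/A}\right)\left(10^{-3}\,\mathrm{A}\right)}{2\pi\,(0.01\,\mathrm{m})}
     \approx 2\times 10^{-8}\,\mathrm{T} = 20\,\mathrm{nT}.
\]
```

    Since B scales linearly with I, doubling the applied current doubles the induced field at every point, which is the proportionality the MRI measurement exploits.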

  7. [Non-destructive, preclinical evaluation of root canal anatomy of human teeth with flat-panel detector volume CT (FD-VCT)].

    PubMed

    Heidrich, G; Hassepass, F; Dullin, C; Attin, T; Grabbe, E; Hannig, C

    2005-12-01

    Successful endodontic diagnostics and therapy call for adequate depiction of the root canal anatomy with multimodal diagnostic imaging. The aim of the present study was to evaluate visualization of the endodont with flat-panel detector volume CT (FD-VCT). Thirteen human teeth were examined with the prototype of an FD-VCT. After data acquisition and generation of volume data sets with volume rendering technology (VRT), the findings obtained were compared to conventional X-rays and cross-section preparations of the teeth. The anatomical structures of the endodont, such as root canals, side canals and communications between different root canals as well as denticles, could be detected precisely with FD-VCT. The length of curved root canals was also determined accurately. The spatial resolution of the system is around 140 μm. Only around 73% of the main root canals detected with FD-VCT and 87% of the roots could be visualized with conventional dental X-rays. None of the side canals shown with FD-VCT was detectable on conventional X-rays. In all cases the enamel and dentin of the teeth could be well delineated. No differences in image quality could be discerned between stored and freshly extracted teeth, or between primary and adult teeth. FD-VCT is an innovative diagnostic modality for preclinical and experimental use in non-destructive three-dimensional analysis of teeth. Thanks to its high isotropic spatial resolution compared with conventional X-rays, even the minutest structures, such as side canals, can be detected and evaluated. Potential applications in endodontics include diagnostics and evaluation of all steps of root canal treatment, ranging from trepanation through determination of the length of the root canal to obturation.

  8. Assisting Movement Training and Execution With Visual and Haptic Feedback.

    PubMed

    Ewerton, Marco; Rother, David; Weimar, Jakob; Kollegger, Gerrit; Wiemeyer, Josef; Peters, Jan; Maeda, Guilherme

    2018-01-01

    In the practice of motor skills in general, errors in the execution of movements may go unnoticed when a human instructor is not available. In this case, a computer system or robotic device able to detect movement errors and propose corrections would be of great help. This paper addresses the problem of how to detect such execution errors and how to provide feedback to the human to correct his/her motor skill using a general, principled methodology based on imitation learning. The core idea is to compare the observed skill with a probabilistic model learned from expert demonstrations. The intensity of the feedback is regulated by the likelihood of the model given the observed skill. Based on demonstrations, our system can, for example, detect errors in the writing of characters with multiple strokes. Moreover, by using a haptic device, the Haption Virtuose 6D, we demonstrate a method to generate haptic feedback based on a distribution over trajectories, which could be used as an auxiliary means of communication between an instructor and an apprentice. Additionally, given a performance measurement, the haptic device can help the human discover and perform better movements to solve a given task. In this case, the human first tries a few times to solve the task without assistance. Our framework, in turn, uses a reinforcement learning algorithm to compute haptic feedback, which guides the human toward better solutions.
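
    The snippet below is a hedged sketch of the core idea only: compare an observed movement against a probabilistic model of expert demonstrations and scale the corrective feedback by how unlikely the observation is under that model. It is not the authors' implementation, and the one-dimensional Gaussian-per-time-step model is an assumption made for brevity.

```python
# Likelihood-regulated feedback: low likelihood under the demonstration model
# means a large execution error, hence stronger corrective feedback.
import numpy as np

def feedback(demos, observed, floor=1e-6):
    """demos: (n_demos, T) time-aligned expert trajectories for one degree of freedom;
    observed: (T,) trajectory executed by the learner.
    Returns per-time-step feedback direction and intensity in [0, 1)."""
    mu = demos.mean(axis=0)
    sigma = demos.std(axis=0) + floor
    z = (observed - mu) / sigma                 # standardized deviation from the experts
    intensity = 1.0 - np.exp(-0.5 * z**2)       # 0 on the demonstrated mean, -> 1 far from it
    direction = np.sign(mu - observed)          # push back toward the demonstrated mean
    return direction, intensity
```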

  9. Experience, Context, and the Visual Perception of Human Movement

    ERIC Educational Resources Information Center

    Jacobs, Alissa; Pinto, Jeannine; Shiffrar, Maggie

    2004-01-01

    Why are human observers particularly sensitive to human movement? Seven experiments examined the roles of visual experience and motor processes in human movement perception by comparing visual sensitivities to point-light displays of familiar, unusual, and impossible gaits across gait-speed and identity discrimination tasks. In both tasks, visual…

  10. Evidence for Non-Opponent Coding of Colour Information in Human Visual Cortex: Selective Loss of “Green” Sensitivity in a Subject with Damaged Ventral Occipito-Temporal Cortex

    PubMed Central

    Rauscher, Franziska G.; Plant, Gordon T.; James-Galton, Merle; Barbur, John L.

    2011-01-01

    Damage to ventral occipito-temporal extrastriate visual cortex leads to the syndrome of prosopagnosia, often with coexisting cerebral achromatopsia. A patient with this syndrome, presenting with a left upper homonymous quadrantanopia, prosopagnosia, and incomplete achromatopsia, is described. Chromatic sensitivity was assessed at a number of locations in the intact visual field using a dynamic luminance contrast masking technique that isolates the use of colour signals. In normal subjects chromatic detection thresholds form an elliptical contour when plotted in the Commission Internationale de l'Eclairage (CIE) (x, y) chromaticity diagram. Because the extraction of colour signals in early visual processing involves opponent mechanisms, subjects with Daltonism (congenital red/green loss of sensitivity) show a symmetric increase in thresholds towards the long-wavelength ("red") and middle-wavelength ("green") regions of the spectrum locus. This is also the case with acquired loss of chromatic sensitivity as a result of retinal or optic nerve disease. Our patient's results were an exception to this rule. Whilst his chromatic sensitivity in the central region of the visual field was reduced symmetrically for both "red/green" and "yellow/blue" directions in colour space, the subject's lower left quadrant showed a marked asymmetry in "red/green" thresholds, with the greatest loss of sensitivity towards the "green" region of the spectrum locus. This spatially localized asymmetric loss of "green" but not "red" sensitivity has not been reported previously in human vision. Such loss is consistent with selective damage to neural substrates in the visual cortex that process colour information but are spectrally non-opponent. PMID:27956924

  11. Mapping visual cortex in monkeys and humans using surface-based atlases

    NASA Technical Reports Server (NTRS)

    Van Essen, D. C.; Lewis, J. W.; Drury, H. A.; Hadjikhani, N.; Tootell, R. B.; Bakircioglu, M.; Miller, M. I.

    2001-01-01

    We have used surface-based atlases of the cerebral cortex to analyze the functional organization of visual cortex in humans and macaque monkeys. The macaque atlas contains multiple partitioning schemes for visual cortex, including a probabilistic atlas of visual areas derived from a recent architectonic study, plus summary schemes that reflect a combination of physiological and anatomical evidence. The human atlas includes a probabilistic map of eight topographically organized visual areas recently mapped using functional MRI. To facilitate comparisons between species, we used surface-based warping to bring functional and geographic landmarks on the macaque map into register with corresponding landmarks on the human map. The results suggest that extrastriate visual cortex outside the known topographically organized areas is dramatically expanded in human compared to macaque cortex, particularly in the parietal lobe.

  12. Simulating Visual Attention Allocation of Pilots in an Advanced Cockpit Environment

    NASA Technical Reports Server (NTRS)

    Frische, F.; Osterloh, J.-P.; Luedtke, A.

    2011-01-01

    This paper describes the results of experiments conducted with human line pilots and a cognitive pilot model during interaction with a new 4D Flight Management System (FMS). The aim of these experiments was to gather human pilot behavior data in order to calibrate the behavior of the model. Human behavior is mainly triggered by visual perception; thus, the main aspect was to set up a profile of human pilots' visual attention allocation in a cockpit environment containing the new FMS. We first performed statistical analyses of eye-tracker data and then compared our results to common results of similar analyses in standard cockpit environments. The comparison showed a significant influence of the new system on the visual performance of human pilots. Analyses of the pilot model's visual performance were then performed, and a comparison to the human pilots' visual performance revealed important potential for improvement.

  13. Development of a one-step immunochromatographic strip test using gold nanoparticles for the rapid detection of Salmonella typhi in human serum.

    PubMed

    Preechakasedkit, Pattarachaya; Pinwattana, Kulwadee; Dungchai, Wijitar; Siangproh, Weena; Chaicumpa, Wanpen; Tongtawe, Pongsri; Chailapakul, Orawon

    2012-01-15

    An immunochromatographic strip test using gold nanoparticles was developed for the rapid detection of Salmonella typhi (S. typhi) in human serum. The strip test is based on the principle of a sandwich immunoassay, exploiting the specific binding between antigens from S. typhi O901 and antibody against S. typhi O901 on a nitrocellulose membrane. An antibody-gold nanoparticle conjugate was used as the label and was coated onto a glass fiber membrane, which served as the conjugate pad. To create the test and control zones, antibody against S. typhi O901 and an anti-IgG were dotted onto the nitrocellulose membrane, respectively. Positive samples were displayed as red dots at the test and control zones of the nitrocellulose membrane, while negative samples resulted in a red dot only in the control zone. The limit of detection (LOD) was found to be 1.14×10⁵ cfu mL⁻¹, which could be detected visually by the naked eye within 15 min. This strip test provided a lower detection limit and a shorter analysis time than a dot blot immunoassay (LOD of 8.88×10⁶ cfu mL⁻¹ and a reaction time of 110 min). In addition, our immunochromatographic strip test detected S. typhi in human serum effectively and with high accuracy. This strip test offers great promise for rapid, simple and low-cost analysis of S. typhi. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. The effect of saccade metrics on the corollary discharge contribution to perceived eye location

    PubMed Central

    Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.

    2015-01-01

    Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955

  15. Visual search in Alzheimer's disease: a deficiency in processing conjunctions of features.

    PubMed

    Tales, A; Butler, S R; Fossey, J; Gilchrist, I D; Jones, R W; Troscianko, T

    2002-01-01

    Human vision often needs to encode multiple characteristics of many elements of the visual field, for example their lightness and orientation. The visual search paradigm allows a quantitative assessment of the function of the underlying mechanisms: it measures the ability to detect a target element among a set of distractor elements. We asked whether Alzheimer's disease (AD) patients are particularly affected in one type of search, where the target is defined by a conjunction of features (orientation and lightness) and where performance depends on some shifting of attention. Two non-conjunction control conditions were employed. The first was a pre-attentive, single-feature, "pop-out" task: detecting a vertical target among horizontal distractors. The second was a single-feature, partly attentive task in which the target element was slightly larger than the distractors (a "size" task). This condition was chosen to have a similar level of attentional load as the conjunction task (for the control group) but lacked the conjunction of two features. In an experiment, 15 AD patients were compared to age-matched controls. The results suggested that AD patients have a particular impairment in the conjunction task but not in the single-feature size or pre-attentive tasks. This may imply that AD particularly affects those mechanisms which compare across more than one feature type and spares the other systems; the impairment is therefore not simply 'attention-related'. Additionally, these findings show a double dissociation with previous data on visual search in Parkinson's disease (PD), suggesting a different effect of these diseases on the visual pathway.

  16. Reshaping the brain after stroke: The effect of prismatic adaptation in patients with right brain damage.

    PubMed

    Crottaz-Herbette, Sonia; Fornari, Eleonora; Notter, Michael P; Bindschaedler, Claire; Manzoni, Laura; Clarke, Stephanie

    2017-09-01

    Prismatic adaptation has been repeatedly reported to alleviate neglect symptoms; in normal subjects, it was shown to enhance the representation of the left visual space within the left inferior parietal cortex. Our study aimed to determine in humans whether similar compensatory mechanisms underlie the beneficial effect of prismatic adaptation in neglect. Fifteen patients with right hemispheric lesions and 11 age-matched controls underwent a prismatic adaptation session which was preceded and followed by fMRI using a visual detection task. In patients, the prismatic adaptation session improved the accuracy of target detection in the left and central space and enhanced the representation of this visual space within the left hemisphere in parts of the temporal convexity, inferior parietal lobule and prefrontal cortex. Across patients, the increase in neuronal activation within the temporal regions correlated with performance improvements in this visual space. In control subjects, prismatic adaptation enhanced the representation of the left visual space within the left inferior parietal lobule and decreased it within the left temporal cortex. Thus, a brief exposure to prismatic adaptation enhances, both in patients and in control subjects, the competence of the left hemisphere for the left space, but the regions extended beyond the inferior parietal lobule to the temporal convexity in patients. These results suggest that the left hemisphere provides compensatory mechanisms in neglect by assuming the representation of the whole space within the ventral attentional system. The rapidity of the change suggests that the underlying mechanism relies on uncovering pre-existing synaptic connections. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Visual short-term memory load reduces retinotopic cortex response to contrast.

    PubMed

    Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli

    2012-11-01

    Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.
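
    "Detection sensitivity" here is the standard signal-detection measure d′. As a reminder of how it is computed (the counts below are invented, not taken from the study):

```python
# d' = z(hit rate) - z(false-alarm rate), with a log-linear correction so that
# perfect hit or false-alarm rates do not produce infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(hits=38, misses=12, false_alarms=8, correct_rejections=42))
```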

  18. Blind source separation in retinal videos

    NASA Astrophysics Data System (ADS)

    Barriga, Eduardo S.; Truitt, Paul W.; Pattichis, Marios S.; Tüso, Dan; Kwon, Young H.; Kardon, Randy H.; Soliz, Peter

    2003-05-01

    An optical imaging device of retina function (OID-RF) has been developed to measure changes in blood oxygen saturation due to neural activity resulting from visual stimulation of the photoreceptors in the human retina. The video data that are collected represent a mixture of the functional signal in response to the retinal activation and other signals from undetermined physiological activity. Measured changes in reflectance in response to the visual stimulus are on the order of 0.1% to 1.0% of the total reflected intensity level which makes the functional signal difficult to detect by standard methods since it is masked by the other signals that are present. In this paper, we apply principal component analysis (PCA), blind source separation (BSS), using Extended Spatial Decorrelation (ESD) and independent component analysis (ICA) using the Fast-ICA algorithm to extract the functional signal from the retinal videos. The results revealed that the functional signal in a stimulated retina can be detected through the application of some of these techniques.
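
    The paper applies PCA, Extended Spatial Decorrelation, and Fast-ICA; only the last of these has a widely available off-the-shelf implementation. The sketch below shows that step with scikit-learn's FastICA on a flattened video; the array `frames`, the choice of five components, and the interpretation of the mixing matrix as spatial maps are assumptions for illustration.

```python
# Blind source separation of a retinal video with FastICA (illustrative only).
# frames: (n_frames, height, width) NumPy array from the fundus video (assumed input).
import numpy as np
from sklearn.decomposition import FastICA

n_frames, h, w = frames.shape
X = frames.reshape(n_frames, h * w)            # one flattened image per time point

ica = FastICA(n_components=5, random_state=0)
sources = ica.fit_transform(X)                 # (n_frames, 5) temporal components
spatial_maps = ica.mixing_.T.reshape(5, h, w)  # per-component spatial weight maps

# The component whose time course follows the visual-stimulation protocol is the
# candidate functional signal; the rest capture pulsation, drift, and other activity.
```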

  19. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, a hybrid multiscale decomposition of the source images using guided image filtering and gradient domain guided image filtering is applied first; weight maps at each scale are then obtained using saliency detection and filtering, with three different fusion rules applied at different scales. The three fusion rules address the small-scale detail level, the large-scale detail level, and the base level. As a result, the target becomes more salient and can be more easily detected in the fusion result, while the detail information of the scene is fully displayed. Experimental comparisons with state-of-the-art fusion methods show that HMSD-GDGF has clear advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Visual effects can therefore be improved by using the proposed HMSD-GDGF method.
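
    A full reimplementation of HMSD-GDGF is beyond the scope of an abstract, but the sketch below illustrates the same family of operations (base/detail decomposition with a guided filter, saliency-based weights, edge-aware weight refinement) in a heavily simplified two-scale form. It assumes opencv-contrib-python for cv2.ximgproc.guidedFilter, aligned float32 inputs `ir` and `vis` in [0, 1], and arbitrary radius/eps values.

```python
# Simplified two-scale IR/visible fusion with guided filtering (not HMSD-GDGF itself).
import cv2
import numpy as np

def fuse(ir, vis, radius=8, eps=1e-3):
    # Base layers via guided filtering (each image used as its own guide); detail = residual.
    base_ir = cv2.ximgproc.guidedFilter(ir, ir, radius, eps)
    base_vis = cv2.ximgproc.guidedFilter(vis, vis, radius, eps)
    det_ir, det_vis = ir - base_ir, vis - base_vis

    # Pixel-wise saliency from absolute Laplacian response; binary weight map.
    sal_ir = np.abs(cv2.Laplacian(ir, cv2.CV_32F))
    sal_vis = np.abs(cv2.Laplacian(vis, cv2.CV_32F))
    w = (sal_ir > sal_vis).astype(np.float32)

    # Edge-aware refinement of the weight map, guided by the IR image.
    w = cv2.ximgproc.guidedFilter(ir, w, radius, eps)

    fused = 0.5 * (base_ir + base_vis) + w * det_ir + (1.0 - w) * det_vis
    return np.clip(fused, 0.0, 1.0)
```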

  20. Effects of visual attention on chromatic and achromatic detection sensitivities.

    PubMed

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on the sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual-task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while concurrently carrying out an attention task in the central visual field. Experiment 1 confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths under the central attention task, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that, in the dual-task condition, detection thresholds increased more in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction. Experiment 3 showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are processed separately in the early visual pathways, the present results provide additional evidence that visual attention affects responses in the early visual pathways.

  1. Alpha-beta and gamma rhythms subserve feedback and feedforward influences among human visual cortical areas

    PubMed Central

    Michalareas, Georgios; Vezoli, Julien; van Pelt, Stan; Schoffelen, Jan-Mathijs; Kennedy, Henry; Fries, Pascal

    2016-01-01

    Primate visual cortex is hierarchically organized. Bottom-up and top-down influences are exerted through distinct frequency channels, as was recently revealed in macaques by correlating inter-areal influences with laminar anatomical projection patterns. Because this anatomical data cannot be obtained in human subjects, we selected seven homologous macaque and human visual areas, and correlated the macaque laminar projection patterns to human inter-areal directed influences as measured with magnetoencephalography. We show that influences along feedforward projections predominate in the gamma band, whereas influences along feedback projections predominate in the alpha-beta band. Rhythmic inter-areal influences constrain a functional hierarchy of the seven homologous human visual areas that is in close agreement with the respective macaque anatomical hierarchy. Rhythmic influences allow an extension of the hierarchy to 26 human visual areas including uniquely human brain areas. Hierarchical levels of ventral and dorsal stream visual areas are differentially affected by inter-areal influences in the alpha-beta band. PMID:26777277

  2. IntNetDB v1.0: an integrated protein-protein interaction network database generated by a probabilistic model

    PubMed Central

    Xia, Kai; Dong, Dong; Han, Jing-Dong J

    2006-01-01

    Background Although protein-protein interaction (PPI) networks have been explored by various experimental methods, the maps so built are still limited in coverage and accuracy. To further expand the PPI network and to extract more accurate information from existing maps, studies have been carried out to integrate various types of functional relationship data. However, a frequently updated database of computationally analyzed potential PPIs that gives biological researchers rapid and easy access to analyze original data as a biological network is still lacking. Results By applying a probabilistic model, we integrated 27 heterogeneous genomic, proteomic and functional annotation datasets to predict PPI networks in human. In addition to previously studied data types, we show that phenotypic distances and genetic interactions can also be integrated to predict PPIs. We further built an easy-to-use, updatable integrated PPI database, the Integrated Network Database (IntNetDB), online to provide automatic prediction and visualization of the PPI network among genes of interest. The networks can be visualized in SVG (Scalable Vector Graphics) format for zooming in or out. IntNetDB also provides a tool to extract topologically highly connected network neighborhoods from a specific network for further exploration and research. Using the MCODE (Molecular Complex Detection) algorithm, 190 such neighborhoods were detected among all the predicted interactions. The predicted PPIs can also be mapped to worm, fly and mouse interologs. Conclusion IntNetDB includes 180,010 predicted protein-protein interactions among 9,901 human proteins and represents a useful resource for the research community. Our study increased prediction coverage fivefold. IntNetDB also provides easy-to-use network visualization and analysis tools that allow biological researchers unfamiliar with computational biology to access and analyze data over the internet. The web interface of IntNetDB is freely accessible at . Visualization requires Mozilla version 1.8 (or higher) or Internet Explorer with installation of SVGviewer. PMID:17112386
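
    The probabilistic integration can be illustrated with a naive Bayes combination of per-dataset likelihood ratios. The sketch below is generic; the prior odds and likelihood-ratio values are invented for illustration rather than taken from IntNetDB.

        import numpy as np

        def posterior_odds(evidence_lrs, prior_odds=1 / 600):
            """Combine per-dataset likelihood ratios for a candidate protein pair
            under a naive Bayes assumption:
            posterior odds = prior odds * product of likelihood ratios.
            prior_odds is an illustrative guess at the prior odds that a random
            human protein pair interacts."""
            return prior_odds * float(np.prod(evidence_lrs))

        # Example: three hypothetical evidence sources (co-expression,
        # phenotype-distance bin, interacting orthologs in another species)
        lrs = [4.0, 2.5, 30.0]
        odds = posterior_odds(lrs)
        print("posterior probability of interaction:", odds / (1.0 + odds))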

  3. Preparation of an Efficient Ratiometric Fluorescent Nanoprobe (m-CDs@[Ru(bpy)3]2+) for Visual and Specific Detection of Hypochlorite on Site and in Living Cells.

    PubMed

    Zhan, Yuanjin; Luo, Fang; Guo, Longhua; Qiu, Bin; Lin, Yuhong; Li, Juan; Chen, Guonan; Lin, Zhenyu

    2017-11-22

    Hypochlorite (ClO-) is one of the most important reactive oxygen species (ROS) and plays an important role in sustaining human innate immunity during microbial invasion. Moreover, ClO- is a powerful oxidizer for water treatment, and the safety of drinking water is closely related to its content. Herein, m-phenylenediamine (mPD) is used as a precursor to prepare carbon dots (named m-CDs) with a high fluorescence quantum yield (31.58% in water), and our investigation shows that the strong fluorescence emission of m-CDs can be effectively quenched by ClO-. Based on these findings, we developed a novel fluorescent nanoprobe (m-CDs) for highly selective detection of ClO-. The linear range was from 0.05 to 7 μM (R2 = 0.998), and the limit of detection (S/N = 3) was as low as 0.012 μM. Moreover, a portable agarose hydrogel solid-matrix-based ratiometric fluorescent nanoprobe (m-CDs@[Ru(bpy)3]2+) sensor was subsequently developed for visual on-site detection of ClO- with the naked eye under a UV lamp, suggesting its potential in practical applications with low cost and excellent performance in water quality monitoring. Additionally, intracellular detection of exogenous ClO- was demonstrated via ratiometric imaging microscopy.
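
    A limit of detection quoted at S/N = 3 is conventionally computed as three times the standard deviation of blank measurements divided by the slope of the calibration curve. The sketch below shows that calculation on hypothetical calibration data; none of the numbers are taken from the paper.

        import numpy as np

        # Hypothetical calibration data: fluorescence quenching vs. ClO- (uM)
        conc   = np.array([0.05, 0.5, 1.0, 2.0, 4.0, 7.0])        # micromolar
        signal = np.array([0.012, 0.11, 0.22, 0.45, 0.88, 1.55])  # quenching ratio

        slope, intercept = np.polyfit(conc, signal, 1)
        r2 = np.corrcoef(conc, signal)[0, 1] ** 2

        sigma_blank = 0.0009             # std. dev. of repeated blanks (assumed)
        lod = 3.0 * sigma_blank / slope  # limit of detection at S/N = 3
        print(f"slope={slope:.3f}, R^2={r2:.3f}, LOD={lod:.4f} uM")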

  4. Label-free visualization of collagen in submucosa as a potential diagnostic marker for early detection of colorectal cancer

    NASA Astrophysics Data System (ADS)

    Qiu, Jingting; Yang, Yinghong; Jiang, Weizhong; Feng, Changyin; Chen, Zhifen; Guan, Guoxian; Zhu, Xiaoqin; Zhuo, Shuangmu; Chen, Jianxin

    2014-09-01

    The collagen signature in colorectal submucosa changes due to remodeling of the extracellular matrix during the malignant process and plays an important role in noninvasive early detection of human colorectal cancer. In this work, multiphoton microscopy (MPM) was used to monitor the changes of collagen in normal colorectal submucosa (NCS) and cancerous colorectal submucosa (CCS). Moreover, the collagen content was quantitatively measured. It was found that in CCS the morphology of collagen becomes much looser and the collagen content is significantly reduced compared with NCS. These results suggest that MPM can provide the collagen signature as a potential diagnostic marker for early detection of colorectal cancer.
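
    Collagen content in a multiphoton (second-harmonic) channel is often approximated by the area fraction of pixels above an automatic threshold. The short sketch below shows one such estimate using Otsu thresholding; it is an illustration, not the authors' quantification protocol.

        import numpy as np
        from skimage.filters import threshold_otsu

        def collagen_area_fraction(shg_image):
            """Fraction of pixels above an Otsu threshold in a second-harmonic
            generation (collagen) channel; a crude proxy for collagen content."""
            t = threshold_otsu(shg_image)
            return float(np.mean(shg_image > t))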

  5. Collision Detection for Underwater ROV Manipulator Systems

    PubMed Central

    Rossi, Matija; Dooly, Gerard; Toal, Daniel

    2018-01-01

    Work-class ROVs equipped with robotic manipulators are extensively used for subsea intervention operations. Manipulators are teleoperated by human pilots relying on visual feedback from the worksite. Operating in a remote environment with limited pilot perception and poor visibility, manipulator collisions that may cause significant damage are likely to happen. This paper presents a real-time collision detection algorithm for marine robotic manipulation. The proposed collision detection mechanism is developed, integrated into a commercial ROV manipulator control system, and successfully evaluated in simulations and in an experimental setup using a real industry-standard underwater manipulator. The presented collision sensing solution has the potential to be a useful pilot-assisting tool that can reduce the task load, operational time, and costs of subsea inspection, repair, and maintenance operations. PMID:29642396

  6. Collision Detection for Underwater ROV Manipulator Systems.

    PubMed

    Sivčev, Satja; Rossi, Matija; Coleman, Joseph; Omerdić, Edin; Dooly, Gerard; Toal, Daniel

    2018-04-06

    Work-class ROVs equipped with robotic manipulators are extensively used for subsea intervention operations. Manipulators are teleoperated by human pilots relying on visual feedback from the worksite. Operating in a remote environment with limited pilot perception and poor visibility, manipulator collisions that may cause significant damage are likely to happen. This paper presents a real-time collision detection algorithm for marine robotic manipulation. The proposed collision detection mechanism is developed, integrated into a commercial ROV manipulator control system, and successfully evaluated in simulations and in an experimental setup using a real industry-standard underwater manipulator. The presented collision sensing solution has the potential to be a useful pilot-assisting tool that can reduce the task load, operational time, and costs of subsea inspection, repair, and maintenance operations.
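
    A real-time collision check between manipulator links can be reduced to minimum-distance tests between capsule-shaped link models. The sketch below uses a sampled segment-to-segment distance purely for illustration; the link geometry, radii, and clearance value are assumptions, and this is not the published algorithm.

        import numpy as np

        def min_link_distance(p0, p1, q0, q1, samples=50):
            """Approximate minimum distance between two links modelled as line
            segments p0-p1 and q0-q1, by dense sampling (a sketch; a production
            system would use an exact segment-segment test)."""
            p0, p1, q0, q1 = map(np.asarray, (p0, p1, q0, q1))
            t = np.linspace(0.0, 1.0, samples)[:, None]
            a = p0 + t * (p1 - p0)                  # points along link A
            b = q0 + t * (q1 - q0)                  # points along link B
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return d.min()

        def check_collision(links, radii, clearance=0.05):
            """links: list of (start, end) 3-D points; radii: capsule radius per
            link. Returns non-adjacent link pairs closer than the sum of their
            radii plus a safety clearance (metres, assumed)."""
            hits = []
            for i in range(len(links)):
                for j in range(i + 2, len(links)):   # skip adjacent links
                    d = min_link_distance(*links[i], *links[j])
                    if d < radii[i] + radii[j] + clearance:
                        hits.append((i, j, d))
            return hits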

  7. Spotting and tracking good biometrics with the human visual system

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Jenkins, Jeffrey; Hsu, Charles

    2011-06-01

    We mathematically model the mammalian visual system's (VS) capability of spotting objects. How can a hawk see a tiny running rabbit from miles above the ground? How could that rabbit see the approaching hawk? This predator-prey interaction draws parallels with spotting a familiar person in a crowd. We assume that mammalian eyes use peripheral vision to perceive unexpected changes relative to memory, and then use central vision (the fovea) to pay attention. The difference between an image and our memory of that image is usually small, mathematically known as a 'sparse representation'. The VS communicates with the brain using a finite reservoir of neurotransmitters, which produces an on-center, off-surround Hubel/Wiesel 'Mexican hat' receptive field. This is the basis of our model. This change-detection mechanism could drive our attention, allowing us, for example, to hit a curveball: if we are about to hit a baseball, what information extracted by our visual system tells us where to swing? Physical human features such as faces, irises, and fingerprints have been successfully used for identification (biometrics) for decades, recently joined by voice and gait for identification from further away. Biologically, humans must use a change-detection strategy to achieve an ordered sparseness, and we use a sigmoid threshold for noisy measurements in our hetero-associative memory (HAM) classifier for fault-tolerant recall. Human biometrics is dynamic and therefore involves more than just the surface, requiring a three-dimensional measurement (e.g., Daugman/Gabor iris features). Such a measurement can be achieved using the partial coherence of a laser's reflection from a 3-D biometric surface, creating more degrees of freedom (d.o.f.) to meet the Army's challenge of distant biometrics and to compensate for the loss of less distinguishable degrees of freedom at standoff distances.
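
    The on-center, off-surround receptive field described above is commonly approximated by a difference of Gaussians. The sketch below applies such a 'Mexican hat' filter to the difference between a current image and a stored memory image; the sigmas and threshold are assumptions, and the code is not the authors' HAM classifier.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def mexican_hat_response(image, sigma_center=1.0, sigma_surround=3.0):
            """On-center / off-surround response approximated by a difference of
            Gaussians, a crude model of the receptive field described above."""
            return gaussian_filter(image, sigma_center) - gaussian_filter(image, sigma_surround)

        def sparse_change(current, memory, threshold=0.1):
            """Sparse representation of what changed relative to a stored 'memory'
            image: weight the Mexican-hat response of the difference with a
            sigmoid ('soft') threshold. Illustrative only."""
            diff = mexican_hat_response(current - memory)
            weight = 1.0 / (1.0 + np.exp(-(np.abs(diff) - threshold) / 0.01))
            return diff * weight    # near-zero where the change is below threshold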

  8. The use of immunochromatographic rapid test for soft tissue remains identification in order to distinguish between human and non-human origin.

    PubMed

    Gascho, Dominic; Morf, Nadja V; Thali, Michael J; Schaerli, Sarah

    2017-05-01

    Clear identification of soft tissue remains as being of non-human origin may be visually difficult in some cases, e.g. due to decomposition, and an additional examination is then required. An immunochromatographic rapid test (IRT) device can be an easy solution, with the additional advantage that it can be used directly at the site of discovery. The use of these test devices for detecting human blood at crime scenes is a common method; however, the IRT is suitable not only for blood detection but also for differentiating between human and non-human soft tissue remains. In the following, this method is discussed and validated by means of two forensic cases and several samples from various animals. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  9. Auditory enhancement of visual perception at threshold depends on visual abilities.

    PubMed

    Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène

    2011-06-17

    Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events, is a long-standing debate. Here we revisit this question by testing the influence of auditory stimuli on the visual detection threshold in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm that limits potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the subgroup of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already aided by flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient at improving perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Detecting Solar-like Oscillations in Red Giants with Deep Learning

    NASA Astrophysics Data System (ADS)

    Hon, Marc; Stello, Dennis; Zinn, Joel C.

    2018-05-01

    Time-resolved photometry of tens of thousands of red giant stars from space missions like Kepler and K2 has created the need for automated asteroseismic analysis methods. The first and most fundamental step in such analysis is to identify which stars show oscillations. It is critical that this step be performed with no, or little, detection bias, particularly when performing subsequent ensemble analyses that aim to compare the properties of observed stellar populations with those from galactic models. However, an efficient, automated solution to this initial detection step has still not been found, meaning that expert visual inspection of the data from each star is required to obtain the highest level of detections. Hence, to mimic how an expert eye analyzes the data, we use supervised deep learning not only to detect oscillations in red giants, but also to predict the location of the frequency at maximum power, νmax, by observing features in 2D images of power spectra. By training on Kepler data, we benchmark our deep-learning classifier against K2 data whose detections were assigned by the expert eye, achieving a detection accuracy of 98% on K2 Campaign 6 stars and 99% on K2 Campaign 3 stars. We further find that the estimated uncertainty of our deep-learning-based νmax predictions is about 5%, comparable to human-level performance using visual inspection. When examining outliers, we find that the deep-learning results are more likely to provide robust νmax estimates than the classical model-fitting method.
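
    A classifier of this kind can be sketched as a small convolutional network that takes a 2D image of a power spectrum and outputs a detection probability. The architecture, input size, and layer sizes below are illustrative assumptions, not the published network.

        import torch
        import torch.nn as nn

        class SpectrumClassifier(nn.Module):
            """Tiny CNN mapping a 1 x 128 x 128 image of a power spectrum to a
            logit for 'oscillations present'; illustrative only."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
                    nn.Linear(64, 1),    # detection logit
                )

            def forward(self, x):
                return self.classifier(self.features(x))

        model = SpectrumClassifier()
        dummy = torch.randn(4, 1, 128, 128)            # a batch of spectrum images
        probs = torch.sigmoid(model(dummy))            # detection probabilities
        loss = nn.BCEWithLogitsLoss()(model(dummy), torch.ones(4, 1))  # toy labels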

  11. Early detection of glaucoma using fully automated disparity analysis of the optic nerve head (ONH) from stereo fundus images

    NASA Astrophysics Data System (ADS)

    Sharma, Archie; Corona, Enrique; Mitra, Sunanda; Nutter, Brian S.

    2006-03-01

    Early detection of structural damage to the optic nerve head (ONH) is critical in the diagnosis of glaucoma, because such glaucomatous damage precedes clinically identifiable visual loss. Early detection of glaucoma can prevent progression of the disease and consequent loss of vision. Traditional early detection techniques involve observing changes in the ONH through an ophthalmoscope, and stereo fundus photography is also routinely used to detect subtle changes in the ONH. However, clinical evaluation of stereo fundus photographs suffers from inter- and intra-subject variability, and even the Heidelberg Retina Tomograph (HRT) has not been found to be sufficiently sensitive for early detection. A semi-automated algorithm for quantitative representation of the optic disc and cup contours, which computes accumulated disparities in the disc and cup regions from stereo fundus image pairs, has already been developed using advanced digital image analysis methodologies. A 3-D visualization of the disc and cup is achieved assuming a known camera geometry. High correlation between computer-generated and manually segmented cup-to-disc ratios has already been demonstrated in a longitudinal study involving 159 stereo fundus image pairs. However, the clinical usefulness of the proposed technique can only be tested by a fully automated algorithm. In this paper, we present a fully automated algorithm for segmentation of optic cup and disc contours from the corresponding stereo disparity information. Because this technique does not involve human intervention, it eliminates the subjective variability encountered in currently used clinical methods and provides ophthalmologists with a cost-effective, quantitative method for detecting ONH structural damage for the early detection of glaucoma.
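
    The disparity computation at the core of such a pipeline can be sketched with a standard block-matching algorithm on a rectified stereo pair. The file names and matcher parameters below are assumptions, and the published method computes accumulated disparities in a more elaborate way.

        import cv2
        import numpy as np

        # Minimal sketch: dense disparity from a rectified stereo fundus pair
        # ("left.png" / "right.png" are hypothetical file names).
        left  = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
        disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

        # Larger disparity variation inside the optic disc region corresponds to
        # deeper cup excavation; a depth map follows from Z ~ f * B / disparity
        # once the focal length f and stereo baseline B are known.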

  12. Detection of emotional faces: salient physical features guide effective visual search.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
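
    Computational saliency of the kind used here to predict orienting can be illustrated with a generic spectral-residual saliency map. The sketch below is a standard, self-contained implementation of that generic model, not the specific saliency model used in the study.

        import numpy as np
        from scipy.ndimage import uniform_filter, gaussian_filter

        def spectral_residual_saliency(gray):
            """Generic spectral-residual saliency map (Hou & Zhang, 2007) for a
            grayscale image with values in [0, 1]."""
            f = np.fft.fft2(gray)
            log_amp = np.log(np.abs(f) + 1e-8)
            phase = np.angle(f)
            residual = log_amp - uniform_filter(log_amp, 3)   # spectral residual
            sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
            sal = gaussian_filter(sal, 3)
            return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)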

  13. Bending it like Beckham: how to visually fool the goalkeeper.

    PubMed

    Dessing, Joost C; Craig, Cathy M

    2010-10-06

    As bending free-kicks become the norm in modern-day soccer, the implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in the direction opposite to that necessary to save the free-kick. These movement errors leave less time to cover a now greater distance to stop the ball from entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer.

  14. Bending It Like Beckham: How to Visually Fool the Goalkeeper

    PubMed Central

    2010-01-01

    Background As bending free-kicks become the norm in modern-day soccer, the implications for goalkeepers have largely been ignored. Although it has been reported that poor sensitivity to visual acceleration makes it harder for expert goalkeepers to perceptually judge where curved free-kicks will cross the goal line, it is unknown how this affects the goalkeeper's actual movements. Methodology/Principal Findings Here, an in-depth analysis of goalkeepers' hand movements in immersive, interactive virtual reality shows that they do not fully account for spin-induced lateral ball acceleration. Hand movements were found to be biased in the direction of initial ball heading, and for curved free-kicks this resulted in biases in the direction opposite to that necessary to save the free-kick. These movement errors leave less time to cover a now greater distance to stop the ball from entering the goal. These and other details of the interceptive behaviour are explained using a simple mathematical model which shows how the goalkeeper controls his movements online with respect to the ball's current heading direction. Furthermore, our results and model suggest how visual landmarks, such as the goalposts in this instance, may constrain the extent of the movement biases. Conclusions While it has previously been shown that humans can internalize the effects of gravitational acceleration, these results show that it is much more difficult for goalkeepers to account for spin-induced visual acceleration, which varies from situation to situation. The limited sensitivity of the human visual system for detecting acceleration suggests that curved free-kicks are an important goal-scoring opportunity in the game of soccer. PMID:20949130
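
    The reported bias can be illustrated with a toy 2D (top-view) simulation in which a constant lateral acceleration stands in for the spin-induced (Magnus) force and the 'goalkeeper' extrapolates only the ball's current heading to the goal line. All parameter values are assumptions; this is not the authors' model, but it reproduces the qualitative effect that heading-based predictions undershoot the true crossing point.

        import numpy as np

        v_forward = 25.0     # ball speed toward the goal, m/s (assumed)
        a_lat = 6.0          # spin-induced lateral acceleration, m/s^2 (assumed)
        distance = 20.0      # free-kick distance to the goal line, m (assumed)

        t_flight = distance / v_forward
        x_true_crossing = 0.5 * a_lat * t_flight ** 2    # true lateral crossing point

        for t in np.linspace(0.0, t_flight, 6):
            x = 0.5 * a_lat * t ** 2        # true lateral position at time t
            y = v_forward * t               # distance travelled toward the goal
            vx = a_lat * t                  # current lateral velocity
            # Heading-based prediction: extrapolate current velocity to the goal line
            t_remaining = (distance - y) / v_forward
            x_predicted = x + vx * t_remaining
            print(f"t={t:4.2f}s  predicted crossing={x_predicted:5.2f} m  "
                  f"true={x_true_crossing:5.2f} m")

        # The prediction only catches up with the true crossing point at the very
        # end of the flight, mirroring the bias toward the initial heading.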

  15. Human papillomavirus-based cervical cancer prevention: long-term results of a randomized screening trial.

    PubMed

    Denny, Lynette; Kuhn, Louise; Hu, Chih-Chi; Tsai, Wei-Yann; Wright, Thomas C

    2010-10-20

    Screen-and-treat approaches to cervical cancer prevention are an attractive option for low-resource settings, but data on their long-term efficacy are lacking. We evaluated the efficacy of two screen-and-treat approaches through 36 months of follow-up in a randomized trial. A total of 6637 unscreened South African women aged 35-65 years were tested for the presence of high-risk human papillomavirus (HPV) DNA in cervical samples and underwent visual inspection of the cervix with acetic acid staining and HIV serotesting. Of these, 6555 were randomly assigned to three study arms: 1) HPV-and-treat, in which all women with a positive HPV DNA test result underwent cryotherapy; 2) visual inspection-and-treat, in which all women with a positive visual inspection test result underwent cryotherapy; or 3) control, in which further evaluation or treatment was delayed for 6 months. All women underwent colposcopy with biopsy at 6 months. All women who were HPV DNA- or visual inspection-positive at enrollment, and a subset of all other women, had extended follow-up to 36 months (n = 3639) with yearly colposcopy. The endpoint, cervical intraepithelial neoplasia grade 2 or worse (CIN2+), was analyzed using actuarial life-table methods. All statistical tests were two-sided. After 36 months, there was a sustained, statistically significant decrease in the cumulative detection of CIN2+ in the HPV-and-treat arm compared with the control arm (1.5% vs 5.6%, difference = 4.1%, 95% confidence interval [CI] = 2.8% to 5.3%, P < .001). The difference in the cumulative detection of CIN2+ between the visual inspection-and-treat arm and the control arm was smaller (3.8% vs 5.6%, difference = 1.8%, 95% CI = 0.4% to 3.2%, P = .002). Incident cases of CIN2+ (identified more than 12 months after enrollment) were less common in the HPV-and-treat arm (0.3%, 95% CI = 0.05% to 1.02%) than in the control (1.0%, 95% CI = 0.5% to 1.7%) or visual inspection-and-treat (1.3%, 95% CI = 0.8% to 2.1%) arms. In this trial, a screen-and-treat approach using HPV DNA testing identified and treated prevalent cases of CIN2+ and appeared to reduce the number of incident cases of CIN2+ that developed more than 12 months after cryotherapy.
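
    For readers who want to check the arithmetic of a difference between two cumulative detection rates, the sketch below computes a generic Wald confidence interval for a risk difference from hypothetical counts. The trial itself used actuarial life-table methods, so this simple calculation will not reproduce the published intervals exactly.

        import math

        def risk_difference_ci(x1, n1, x2, n2, z=1.96):
            """Wald 95% CI for the difference between two proportions
            (a generic sketch, not the trial's life-table analysis)."""
            p1, p2 = x1 / n1, x2 / n2
            diff = p2 - p1
            se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            return diff, diff - z * se, diff + z * se

        # Hypothetical counts chosen only to mirror the reported 1.5% vs 5.6%
        diff, lo, hi = risk_difference_ci(18, 1200, 67, 1200)
        print(f"difference = {diff:.3f} (95% CI {lo:.3f} to {hi:.3f})")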

  16. Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search

    ERIC Educational Resources Information Center

    Calvo, Manuel G.; Nummenmaa, Lauri

    2008-01-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…

  17. High resolution skin-like sensor capable of sensing and visualizing various sensations and three dimensional shape.

    PubMed

    Xu, Tianbai; Wang, Wenbo; Bian, Xiaolei; Wang, Xiaoxue; Wang, Xiaozhi; Luo, J K; Dong, Shurong

    2015-08-13

    Human skin contains multiple receptors and is able to sense various stimuli such as temperature, pressure, force, and corrosion, to feel pain, and to perceive the shape of objects. The development of skin-like sensors capable of sensing these stimuli is of great importance for applications such as robots, touch detection, temperature monitoring, and strain gauges. Great efforts have been made to develop high-performance skin-like sensors, but they are far from perfect and remain much inferior to human skin: most can sense only one stimulus, typically pressure (strain) or temperature, and are unable to visualize sensations or the shape of objects. Here we report a skin-like sensor which imitates real skin with multiple receptors, together with a new concept of pain sensation. The sensor, with very high resolution, not only provides multiple sensations for touch, pressure, and temperature, but is also able to sense various pains and reproduce the three-dimensional shape of an object in contact.

  18. Attention during natural vision warps semantic representation across the human brain.

    PubMed

    Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G; Gallant, Jack L

    2013-06-01

    Little is known about how attention changes the cortical representation of sensory information in humans. On the basis of neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue, we used functional magnetic resonance imaging to measure how semantic representation changed during visual search for different object categories in natural movies. We found that many voxels across occipito-temporal and fronto-parietal cortex shifted their tuning toward the attended category. These tuning shifts expanded the representation of the attended category and of semantically related, but unattended, categories, and compressed the representation of categories that were semantically dissimilar to the target. Attentional warping of semantic representation occurred even when the attended category was not present in the movie; thus, the effect was not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision.

  19. Association and comparison between visual inspection and bitewing radiography for the detection of recurrent dental caries under restorations.

    PubMed

    Lino, José R; Ramos-Jorge, Joana; Coelho, Valéria Silveira; Ramos-Jorge, Maria L; Moysés, Marcos R; Ribeiro, José C R

    2015-08-01

    The aim of the present study was to investigate, in posterior teeth, the association between the characteristics of the margins of a restoration assessed by visual inspection and the presence of recurrent or residual dental caries under the restoration detected by radiographic examination. In addition, the agreement between visual inspection and radiography for detecting dental caries was assessed. Eighty-five permanent molars and premolars with resin restorations on the interproximal and/or occlusal faces, from 18 patients, were submitted to visual inspection and radiographic examination. The visual inspection used the criteria of the International Caries Detection and Assessment System (ICDAS), and bitewing radiographs were used for the radiographic examination. Logistic regression was used to analyse the association between the characteristics of the margins of a restoration assessed by visual inspection (absence of dental caries, or early, established, inactive and active lesions) and the presence of recurrent caries detected by radiographs. Kappa coefficients were calculated to determine agreement between the two methods. The Kappa coefficient for agreement between visual inspection and radiographic examination was 0.19. Established lesions [odds ratio (OR) = 9.89; 95% confidence interval (95% CI): 2.94-33.25; P < 0.05] and lesion activity (OR = 2.57; 95% CI: 0.91-7.27; P < 0.05) detected by visual inspection were associated with recurrent or residual dental caries detected by radiographs. Restorations with established and active lesions at the margins had a greater chance of exhibiting recurrent or residual lesions in the radiographic examination. The present findings demonstrate that restorations with established and active lesions at the margins, when visually inspected, often require removal and retreatment. © 2015 FDI World Dental Federation.
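
    Agreement statistics such as the reported kappa of 0.19 can be reproduced from paired ratings with a standard chance-corrected agreement calculation; the ratings in the sketch below are invented for illustration.

        from sklearn.metrics import cohen_kappa_score

        # Illustrative only: whether each restored tooth shows recurrent/residual
        # caries (1) or not (0) according to each method.
        visual      = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1]
        radiography = [1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0]

        kappa = cohen_kappa_score(visual, radiography)
        print(f"Cohen's kappa = {kappa:.2f}")  # chance-corrected agreement: 0 = chance, 1 = perfect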

  20. Theories of Visual Rhetoric: Looking at the Human Genome.

    ERIC Educational Resources Information Center

    Rosner, Mary

    2001-01-01

    Considers how visuals are constructions that are products of a writer's interpretation with its own "power-laden agenda." Reviews the current approach taken by composition scholars, surveys richer interdisciplinary work on visuals, and (by using visuals connected with the Human Genome Project) models an analysis of visuals as rhetoric.…
