High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.
Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min
2012-01-01
The difficulties and limitations of small target detection methods for high-resolution remote sensing data have become a recent research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit the fly's advantages of fast and accurate small target detection in complex and varied natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing mechanisms for simulating the fly visual system, we propose a fly-imitated visual method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the fly visual system's mechanisms of information acquisition, compression, and fusion, the function of pool cells, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and fly-imitated visual perception method.
Lee, Daniel J; Recabal, Pedro; Sjoberg, Daniel D; Thong, Alan; Lee, Justin K; Eastham, James A; Scardino, Peter T; Vargas, Hebert Alberto; Coleman, Jonathan; Ehdaie, Behfar
2016-09-01
We compared the diagnostic outcomes of magnetic resonance-ultrasound fusion and visually targeted biopsy for targeting regions of interest on prostate multiparametric magnetic resonance imaging. Patients presenting for prostate biopsy with regions of interest on multiparametric magnetic resonance imaging underwent magnetic resonance imaging targeted biopsy. For each region of interest 2 visually targeted cores were obtained, followed by 2 cores using a magnetic resonance-ultrasound fusion device. Our primary end point was the difference in the detection of high grade (Gleason 7 or greater) and any grade cancer between visually targeted and magnetic resonance-ultrasound fusion, investigated using McNemar's method. Secondary end points were the difference in detection rate by biopsy location using a logistic regression model and the difference in median cancer length using the Wilcoxon signed rank test. We identified 396 regions of interest in 286 men. The difference in the detection of high grade cancer between magnetic resonance-ultrasound fusion biopsy and visually targeted biopsy was -1.4% (95% CI -6.4 to 3.6, p=0.6) and for any grade cancer the difference was 3.5% (95% CI -1.9 to 8.9, p=0.2). Median cancer length detected by magnetic resonance-ultrasound fusion and visually targeted biopsy was 5.5 vs 5.8 mm, respectively (p=0.8). Magnetic resonance-ultrasound fusion biopsy detected 15% more cancers in the transition zone (p=0.046) and visually targeted biopsy detected 11% more high grade cancer at the prostate base (p=0.005). Only 52% of all high grade cancers were detected by both techniques. We found no evidence of a significant difference in the detection of high grade or any grade cancer between visually targeted and magnetic resonance-ultrasound fusion biopsy. However, the performance of each technique varied in specific biopsy locations and the outcomes of both techniques were complementary. 
Combining visually targeted biopsy and magnetic resonance-ultrasound fusion biopsy may optimize the detection of prostate cancer. Copyright © 2016 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
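The primary end point above was assessed with McNemar's method, which compares paired detection outcomes using only the discordant pairs. A minimal sketch of the exact form of the test; the discordant counts below are hypothetical, not the study's data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test for paired binary outcomes.
    b = pairs positive only under method A, c = only under method B.
    Under the null, the smaller count follows Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    # two-sided p-value: double the tail probability of the smaller count
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts: 12 lesions found only by fusion biopsy,
# 15 found only by visually targeted biopsy.
print(round(mcnemar_exact(12, 15), 3))  # → 0.701
```

With 12 vs. 15 discordant lesions the test finds no significant difference, mirroring the kind of near-equivalence the study reports for the two techniques.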
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli
Kamke, Marc R.; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945
MutScan: fast detection and visualization of target mutations by scanning FASTQ data.
Chen, Shifu; Huang, Tanxiao; Wen, Tiexiang; Li, Hong; Xu, Mingyan; Gu, Jia
2018-01-22
Some types of clinical genetic tests, such as cancer testing using circulating tumor DNA (ctDNA), require sensitive detection of known target mutations. However, conventional next-generation sequencing (NGS) data analysis pipelines typically involve several filtering steps, which may cause missed detection of key mutations with low frequencies. Variant validation is also indicated for key mutations detected by bioinformatics pipelines. Typically, this process is executed using alignment visualization tools such as IGV or GenomeBrowse. However, these tools are too heavyweight and therefore unsuitable for validating mutations in ultra-deep sequencing data. We developed MutScan to address the problems of sensitive detection and efficient validation of target mutations. MutScan involves highly optimized string-searching algorithms, which scan input FASTQ files to grab all reads that support target mutations. The collected supporting reads for each target mutation are piled up and visualized using web technologies such as HTML and JavaScript. Algorithms such as rolling hash and Bloom filter are applied to accelerate scanning, making MutScan very fast at detecting and visualizing target mutations. MutScan is a tool for the detection and visualization of target mutations that directly scans raw FASTQ data. Compared to conventional pipelines, it offers very high performance, executing about 20 times faster, and maximal sensitivity, since it can grab mutations with even a single supporting read. MutScan visualizes detected mutations by generating interactive pile-ups using web technologies. These can serve to validate target mutations, thus avoiding false positives. Furthermore, MutScan can render all mutation records in a VCF file to HTML pages for cloud-friendly VCF validation. MutScan is an open source tool available at GitHub: https://github.com/OpenGene/MutScan.
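The string-searching core can be illustrated with a Rabin-Karp rolling hash, one of the techniques the abstract names. This is an illustrative sketch, not MutScan's actual implementation; the reads and the mutation-context pattern are hypothetical:

```python
def rolling_hash_scan(reads, pattern, base=4, mod=(1 << 61) - 1):
    """Rabin-Karp scan: return all reads containing `pattern`.
    Mimics grabbing supporting reads without any alignment step."""
    enc = {"A": 0, "C": 1, "G": 2, "T": 3}
    m = len(pattern)
    target = 0
    for ch in pattern:
        target = (target * base + enc[ch]) % mod
    high = pow(base, m - 1, mod)  # weight of the outgoing base
    hits = []
    for read in reads:
        if len(read) < m:
            continue
        h = 0
        for ch in read[:m]:
            h = (h * base + enc[ch]) % mod
        for i in range(len(read) - m + 1):
            # hash match, then exact string check to rule out collisions
            if h == target and read[i:i + m] == pattern:
                hits.append(read)
                break
            if i + m < len(read):
                h = ((h - enc[read[i]] * high) * base + enc[read[i + m]]) % mod
    return hits

# Hypothetical mutation context spanning the variant site.
reads = ["ACGTTGCA", "TTTTGCAA", "GGGGGGGG"]
print(rolling_hash_scan(reads, "TTGC"))  # → ['ACGTTGCA', 'TTTTGCAA']
```

Because each window updates the hash in constant time, the scan stays linear in total read length, which is the property that makes direct FASTQ scanning feasible.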
NASA Astrophysics Data System (ADS)
Duong, Tuan A.; Duong, Nghi; Le, Duong
2017-01-01
In this paper, we present an integration technique using a bio-inspired, control-based visual and olfactory receptor system to search for elusive targets in practical environments where the targets cannot be clearly perceived through either sensory modality alone. The bio-inspired visual system is based on a model of the extended visual pathway, which consists of saccadic eye movements and the visual pathway (vertebrate retina, lateral geniculate nucleus, and visual cortex), to enable powerful target detection on noisy, partial, and incomplete visual data. The olfactory receptor algorithm, namely spatially invariant independent component analysis, which was developed from Caltech's olfactory receptor-electronic nose (enose) data, is adopted to enable odorant target detection in an unknown environment. The integration of the two systems is a vital approach and sets a cornerstone for effective, low-cost miniaturized UAVs or fly robots for future DOD and NASA missions, as well as for security systems in Internet of Things environments.
Remote sensing image ship target detection method based on visual attention model
NASA Astrophysics Data System (ADS)
Sun, Yuejiao; Lei, Wuhu; Ren, Xiaodong
2017-11-01
Traditional methods of detecting ship targets in remote sensing images mostly use a sliding window to search the whole image exhaustively. However, the target usually occupies only a small fraction of the image, so this approach has high computational complexity for large-format visible image data. The bottom-up selective attention mechanism can selectively allocate computing resources according to visual stimuli, thus improving computational efficiency and reducing the difficulty of analysis. In view of this, this paper proposes a method of ship target detection in remote sensing images based on a visual attention model. The experimental results show that the proposed method reduces computational complexity while improving detection accuracy, thereby improving the detection efficiency of ship targets in remote sensing images.
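As a concrete stand-in for the (unspecified) bottom-up attention model, the spectral residual method of Hou and Zhang computes a saliency map from the image's log-amplitude spectrum, so that only salient regions need a detailed detection pass. A sketch, with a toy sea scene replacing real remote sensing data:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spectral_residual_saliency(img):
    """Bottom-up saliency (spectral residual): subtract the locally
    averaged log-amplitude spectrum, keep the phase, and transform back.
    Salient (unusual) structure survives; repetitive background does not."""
    f = np.fft.fft2(img.astype(float))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3, mode="nearest")
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

# Toy "image": flat sea with one bright blob as a stand-in ship.
img = np.zeros((64, 64))
img[30:34, 40:44] = 1.0
sal = spectral_residual_saliency(img)
print(np.unravel_index(np.argmax(sal), sal.shape))  # peak near the blob
```

The saliency peak lands on the blob, so a subsequent (expensive) classifier would only inspect a small candidate region instead of sliding over the whole frame.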
Visual performance on detection tasks with double-targets of the same and different difficulty.
Chan, Alan H S; Courtney, Alan J; Ma, C W
2002-10-20
This paper reports a study measuring horizontal visual sensitivity limits for 16 subjects in single-target and double-target detection tasks. Two phases of tests were conducted in the double-target task: targets of the same difficulty were tested in phase one, while targets of different difficulty were tested in phase two. The range of sensitivity for the double-target test was found to be smaller than that for the single-target test in both the same- and different-difficulty cases. The presence of another target was found to affect performance to a marked degree. The interference effect of the difficult target on detection of the easy one was greater than that of the easy one on detection of the difficult one. Performance decrement was noted when correct percentage detection was plotted against target eccentricity in both the single-target and double-target tests. Nevertheless, the non-significant correlation found between performance on the two tasks demonstrated that it is impossible to predict quantitatively the ability to detect double targets from single-target data. This indicates probable problems in generalizing data for single-target visual lobes to multiple targets. Also, lobe area values obtained from measurements using a single-target task cannot be applied in a mathematical model for situations with multiple occurrences of targets.
Effects of Alzheimer’s Disease on Visual Target Detection: A “Peripheral Bias”
Vallejo, Vanessa; Cazzoli, Dario; Rampa, Luca; Zito, Giuseppe A.; Feuerstein, Flurin; Gruber, Nicole; Müri, René M.; Mosimann, Urs P.; Nef, Tobias
2016-01-01
Visual exploration is an omnipresent activity in everyday life, and might represent an important determinant of visual attention deficits in patients with Alzheimer's Disease (AD). The present study aimed at investigating visual search performance in AD patients, in particular target detection in the far periphery, in daily living scenes. Eighteen AD patients and 20 healthy controls participated in the study. They were asked to freely explore a hemispherical screen, covering ±90°, and to respond to targets presented at 10°, 30°, and 50° eccentricity, while their eye movements were recorded. Compared to healthy controls, AD patients recognized fewer targets appearing in the center. No difference was found in target detection in the periphery. This pattern was confirmed by the fixation distribution analysis. These results show a neglect of the central part of the visual field in AD patients and provide new insights by means of a search task involving a larger field of view. PMID:27582704
Infrared dim target detection based on visual attention
NASA Astrophysics Data System (ADS)
Wang, Xin; Lv, Guofang; Xu, Lizhong
2012-11-01
Accurate and fast detection of infrared (IR) dim targets is of great importance for infrared precise guidance, early warning, video surveillance, etc. Based on human visual attention mechanisms, an automatic detection algorithm for infrared dim targets is presented. After analyzing the characteristics of infrared dim target images, the method first applies Difference of Gaussians (DoG) filters to compute a saliency map. Then the salient regions in which potential targets may exist are extracted by searching the saliency map with a control mechanism of winner-take-all (WTA) competition and inhibition-of-return (IOR). Finally, these regions are screened using the characteristics of dim IR targets, so that true targets are detected and spurious objects are rejected. Experiments performed on real-life IR images show that the proposed method achieves satisfactory detection effectiveness and robustness. Meanwhile, it has high detection efficiency and can be used for real-time detection.
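The pipeline described above, DoG filtering followed by WTA selection with inhibition-of-return, can be sketched as follows; the image, filter scales, and suppression radius are illustrative choices, not the paper's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_saliency(img, sigma_small=1.0, sigma_large=3.0):
    """Difference-of-Gaussians saliency: small bright IR targets respond
    strongly to this center-surround filter; smooth background cancels."""
    img = img.astype(float)
    return gaussian_filter(img, sigma_small) - gaussian_filter(img, sigma_large)

def wta_ior(sal, n_regions=3, radius=5):
    """Winner-take-all with inhibition-of-return: repeatedly pick the
    most salient point, then suppress a disc around it so attention
    moves on to the next candidate region."""
    sal = sal.copy()
    ys, xs = np.mgrid[0:sal.shape[0], 0:sal.shape[1]]
    peaks = []
    for _ in range(n_regions):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        peaks.append((int(y), int(x)))
        sal[(ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2] = -np.inf  # IOR
    return peaks

# Toy IR frame: two dim point targets on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0, 0.05, (64, 64))
img[20, 20] += 1.0
img[45, 50] += 1.0
peaks = wta_ior(dog_saliency(img), n_regions=2)
print(peaks)  # two peaks, near (20, 20) and (45, 50)
```

The candidate regions returned here would then be screened against dim-target characteristics (size, intensity profile) to reject spurious objects, as the abstract describes.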
Aghamohammadi, Amirhossein; Ang, Mei Choo; A Sundararajan, Elankovan; Weng, Ng Kok; Mogharrebi, Marzieh; Banihashem, Seyed Yashar
2018-01-01
Visual tracking in aerial videos is a challenging task in computer vision and remote sensing due to appearance variation difficulties. Appearance variations are caused by camera and target motion, low-resolution noisy images, scale changes, and pose variations. Various approaches have been proposed to deal with appearance variation difficulties in aerial videos, and amongst these, the spatiotemporal saliency detection approach has reported promising results in the context of moving target detection. However, it is not accurate for moving target detection when visual tracking is performed under appearance variations. In this study, a visual tracking method is proposed based on spatiotemporal saliency and discriminative online learning methods to deal with appearance variation difficulties. Temporal saliency is used to represent moving target regions and is extracted from the frame difference using the Sauvola local adaptive thresholding algorithm. Spatial saliency is used to represent target appearance details in candidate moving regions. SLIC superpixel segmentation, color, and moment features are used to compute the feature uniqueness and spatial compactness of saliency measurements to detect spatial saliency. Because this is a time-consuming process, a parallel algorithm was developed to optimize and distribute the saliency detection workload across multiple processors. Spatiotemporal saliency is then obtained by combining the temporal and spatial saliencies to represent moving targets. Finally, a discriminative online learning algorithm is applied to generate a sample model based on spatiotemporal saliency. This sample model is then incrementally updated to detect the target under appearance variation conditions. Experiments conducted on the VIVID dataset demonstrate that the proposed visual tracking method is effective and computationally efficient compared to state-of-the-art methods.
PMID:29438421
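The temporal saliency step, frame differencing binarized with Sauvola's local adaptive threshold, can be sketched as follows; the window size and Sauvola parameters are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(img, window=15, k=0.2, r=0.5):
    """Sauvola local adaptive threshold: t = m * (1 + k * (s / r - 1)),
    with m, s the local mean and standard deviation in a sliding window.
    r is the dynamic range of the standard deviation (0.5 for [0, 1] data)."""
    m = uniform_filter(img, window, mode="nearest")
    var = uniform_filter(img ** 2, window, mode="nearest") - m ** 2
    s = np.sqrt(np.clip(var, 0, None))  # clip tiny negatives from rounding
    return m * (1 + k * (s / r - 1))

def temporal_saliency(prev, curr):
    """Temporal saliency: absolute frame difference, binarized with the
    local threshold to mark candidate moving regions."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    return diff > sauvola_threshold(diff)

# Toy frame pair: a small bright patch moves between frames.
prev = np.zeros((32, 32)); prev[10:13, 10:13] = 1.0
curr = np.zeros((32, 32)); curr[10:13, 14:17] = 1.0
mask = temporal_saliency(prev, curr)
print(mask[11, 15], mask[2, 2])  # → True False
```

Because the threshold adapts to the local mean and spread of the difference image, regions of genuine motion are kept while flat background stays below threshold.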
Camouflage target detection via hyperspectral imaging plus information divergence measurement
NASA Astrophysics Data System (ADS)
Chen, Yuheng; Chen, Xinhua; Zhou, Jiankang; Ji, Yiqun; Shen, Weimin
2016-01-01
Target detection is one of the most important applications in remote sensing. Nowadays, accurate camouflage target discrimination often resorts to spectral imaging techniques, owing to their high-resolution spectral/spatial information acquisition ability and the wealth of available data processing methods. In this paper, hyperspectral imaging together with the spectral information divergence measure is used to solve the camouflage target detection problem. A self-developed visible-band hyperspectral imaging device is used to collect data cubes of an experimental scene, and spectral information divergences are then computed to discriminate camouflaged targets and anomalies. Full-band information divergences are measured to evaluate the target detection effect visually and quantitatively. Information divergence measurement proves to be a low-cost and effective tool for the target detection task and can be further developed for other target detection applications beyond spectral imaging.
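Spectral information divergence (SID) treats each normalized spectrum as a probability distribution and sums the two relative entropies. A sketch with hypothetical reflectance spectra standing in for the paper's measurements:

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """SID(x, y) = D(p || q) + D(q || p), where p and q are the spectra
    normalized to sum to 1. Zero iff the spectra have identical shape,
    so it ignores overall brightness and compares spectral signature."""
    p = np.asarray(x, float); p = p / p.sum()
    q = np.asarray(y, float); q = q / q.sum()
    p = np.clip(p, eps, None); q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Hypothetical spectra: camouflage vs. surrounding vegetation.
camouflage = [0.12, 0.18, 0.35, 0.22, 0.13]
vegetation = [0.10, 0.15, 0.45, 0.20, 0.10]
same_shape = [0.24, 0.36, 0.70, 0.44, 0.26]  # camouflage scaled by 2

print(spectral_information_divergence(camouflage, vegetation) >
      spectral_information_divergence(camouflage, same_shape))  # → True
```

The scaled spectrum yields zero divergence while the vegetation spectrum does not, which is why SID can separate a camouflaged target from its background even under illumination changes.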
Crowding with detection and coarse discrimination of simple visual features.
Põder, Endel
2008-04-24
Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.
Valerio, Massimo; McCartan, Neil; Freeman, Alex; Punwani, Shonit; Emberton, Mark; Ahmed, Hashim U
2015-10-01
Targeted biopsy based on cognitive or software magnetic resonance imaging (MRI) to transrectal ultrasound registration seems to increase the detection rate of clinically significant prostate cancer as compared with standard biopsy. However, these strategies have not yet been directly compared against an accurate reference test. The aim of this study was to obtain pilot data on the diagnostic ability of visually directed targeted biopsy vs. software-based targeted biopsy, considering transperineal template mapping (TPM) biopsy as the reference test. This prospective paired cohort study included 50 consecutive men undergoing TPM with one or more visible targets detected on preoperative multiparametric MRI. Targets were contoured on the Biojet software. Patients initially underwent software-based targeted biopsies, then visually directed targeted biopsies, and finally systematic TPM. The detection rate of clinically significant disease (Gleason score ≥3+4 and/or maximum cancer core length ≥4 mm) of one strategy against another was compared using 3×3 contingency tables. Secondary analyses were performed using a less stringent threshold of significance (Gleason score ≥4+3 and/or maximum cancer core length ≥6 mm). Median age was 68 years (interquartile range: 63-73); median prostate-specific antigen level was 7.9 ng/mL (6.4-10.2). A total of 79 targets were detected, with a mean of 1.6 targets per patient. Of these, 27 (34%), 28 (35%), and 24 (31%) were scored 3, 4, and 5, respectively. At a patient level, the detection rate was 32 (64%), 34 (68%), and 38 (76%) for visually directed targeted biopsy, software-based targeted biopsy, and TPM, respectively. Combining the 2 targeted strategies would have led to a detection rate of 39 (78%). At both a patient level and a target level, software-based targeted biopsy found more clinically significant disease than visually directed targeted biopsy, although the difference was not statistically significant (22% vs. 14%, P = 0.48; 51.9% vs. 44.3%, P = 0.24).
Secondary analysis showed similar results. Based on these findings, a paired cohort study enrolling at least 257 men would verify whether this difference is statistically significant. The diagnostic ability of software-based targeted biopsy and visually directed targeted biopsy seems almost comparable, although utility and efficiency both seem to be slightly in favor of the software-based strategy. Ongoing trials are sufficiently powered to prove or disprove these findings. Copyright © 2015 Elsevier Inc. All rights reserved.
Virtual reality method to analyze visual recognition in mice.
Young, Brent Kevin; Brennan, Jayden Nicole; Wang, Ping; Tian, Ning
2018-01-01
Behavioral tests have been extensively used to measure visual function in mice. To determine how precisely mice perceive certain visual cues, it is necessary to have a quantifiable measurement of their behavioral responses. Recently, virtual reality tests have been utilized for a variety of purposes, from analyzing hippocampal cell functionality to identifying visual acuity. Despite the widespread use of these tests, the training required for mice to recognize a variety of different visual targets, and their performance on the behavioral tests, have not been thoroughly characterized. We have developed a virtual reality behavior testing approach that can assay a variety of different aspects of visual perception, including color/luminance and motion detection. When tested for the ability to detect a color/luminance target or a moving target, mice were able to discern the designated target after 9 days of continuous training. However, the quality of their performance was significantly affected by the complexity of the visual target and by their ability to navigate on a spherical treadmill. Importantly, mice retained memory of their visual recognition for at least three weeks after the end of their behavioral training.
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Spatial Probability Dynamically Modulates Visual Target Detection in Chickens
Sridharan, Devarajan; Ramamurthy, Deepa L.; Knudsen, Eric I.
2013-01-01
The natural world contains a rich and ever-changing landscape of sensory information. To survive, an organism must be able to flexibly and rapidly locate the most relevant sources of information at any time. Humans and non-human primates exploit regularities in the spatial distribution of relevant stimuli (targets) to improve detection at locations of high target probability. Is the ability to flexibly modify behavior based on visual experience unique to primates? Chickens (Gallus domesticus) were trained on a multiple alternative Go/NoGo task to detect a small, briefly-flashed dot (target) in each of the quadrants of the visual field. When targets were presented with equal probability (25%) in each quadrant, chickens exhibited a distinct advantage for detecting targets at lower, relative to upper, hemifield locations. Increasing the probability of presentation in the upper hemifield locations (to 80%) dramatically improved detection performance at these locations to be on par with lower hemifield performance. Finally, detection performance in the upper hemifield changed on a rapid timescale, improving with successive target detections, and declining with successive detections at the diagonally opposite location in the lower hemifield. These data indicate the action of a process that in chickens, as in primates, flexibly and dynamically modulates detection performance based on the spatial probabilities of sensory stimuli as well as on recent performance history. PMID:23734188
Decreased visual detection during subliminal stimulation.
Bareither, Isabelle; Villringer, Arno; Busch, Niko A
2014-10-17
What is the perceptual fate of invisible stimuli: are they processed at all, and does their processing have consequences for the perception of other stimuli? As has been shown previously in the somatosensory system, even stimuli that are too weak to be consciously detected can influence our perception: subliminal stimulation impairs perception of near-threshold stimuli and causes a functional deactivation in the somatosensory cortex. In a recent study, we showed that subliminal visual stimuli lead to similar responses, indicated by an increase in alpha-band power as measured with electroencephalography (EEG). In the current study, we investigated whether a behavioral inhibitory mechanism also exists within the visual system. We tested the detection of peripheral visual target stimuli under three different conditions: target stimuli were presented alone or embedded in a concurrent train of subliminal stimuli either at the same location as the target or in the opposite hemifield. Subliminal stimuli were invisible due to their low contrast, not due to a masking procedure. We demonstrate that target detection was impaired by the subliminal stimuli, but only when they were presented at the same location as the target. This finding indicates that subliminal, low-intensity stimuli induce a similar inhibitory effect in the visual system as has been observed in the somatosensory system. In line with previous reports, we propose that the function underlying this effect is the inhibition of spurious noise by the visual system. © 2014 ARVO.
Neural Dynamics Underlying Target Detection in the Human Brain
Bansal, Arjun K.; Madhavan, Radhika; Agam, Yigal; Golby, Alexandra; Madsen, Joseph R.
2014-01-01
Sensory signals must be interpreted in the context of goals and tasks. To detect a target in an image, the brain compares input signals and goals to elicit the correct behavior. We examined how target detection modulates visual recognition signals by recording intracranial field potential responses from 776 electrodes in 10 epileptic human subjects. We observed reliable differences in the physiological responses to stimuli when a cued target was present versus absent. Goal-related modulation was particularly strong in the inferior temporal and fusiform gyri, two areas important for object recognition. Target modulation started after 250 ms post stimulus, considerably after the onset of visual recognition signals. While broadband signals exhibited increased or decreased power, gamma frequency power showed predominantly increases during target presence. These observations support models where task goals interact with sensory inputs via top-down signals that influence the highest echelons of visual processing after the onset of selective responses. PMID:24553944
A framework for small infrared target real-time visual enhancement
NASA Astrophysics Data System (ADS)
Sun, Xiaoliang; Long, Gucan; Shang, Yang; Liu, Xiaolin
2015-03-01
This paper proposes a framework for real-time visual enhancement of small infrared targets. The framework consists of three parts: energy accumulation for small infrared target enhancement, noise suppression, and weighted fusion. A dynamic-programming-based track-before-detection algorithm is adopted in the energy accumulation to detect the target accurately and enhance its intensity notably. In the noise suppression, the target region is weighted by a Gaussian mask according to the target's Gaussian shape. To fuse the processed target region smoothly with the unprocessed background, the intensity in the target region is treated as the weight in the fusion. Experiments on real small infrared target images indicate that the proposed framework enhances the small infrared target markedly and improves the image's visual quality notably. It outperforms traditional algorithms in enhancing the small infrared target, especially for images in which the target is hardly visible.
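The noise-suppression and fusion steps can be sketched as follows. This is a minimal illustration under assumed parameters (mask size, sigma, and gain are placeholders, not values from the paper), and it presumes the energy-accumulation stage has already localized the target:

```python
import numpy as np

def enhance_target_region(image, center, sigma=2.0, gain=2.0, size=7):
    """Illustrative sketch: weight the target region with a Gaussian mask
    (matching the target's roughly Gaussian shape), then blend the processed
    patch back using its intensity as the fusion weight."""
    img = image.astype(float)
    half = size // 2
    r0, c0 = center
    # Gaussian mask centred on the target suppresses off-target noise
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    mask = np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
    patch = img[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1]
    processed = np.clip(patch * gain * mask, 0, 255)  # enhanced target patch
    # Intensity-weighted fusion: bright (target-like) pixels take the
    # processed value; dim pixels keep the original background.
    w = processed / 255.0
    fused = img.copy()
    fused[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1] = (
        w * processed + (1 - w) * patch
    )
    return fused
```

Because the fusion weight falls off with the Gaussian mask, the enhanced patch blends smoothly into the untouched background, which is the stated goal of the weighted-fusion stage.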
Improving Target Detection in Visual Search Through Augmenting Multi-Sensory Cues
2013-01-01
Merlo, James; Mercado, Joseph E.; Van Erp, Jan B. F.; Hancock, Peter A.
...were controlled by a purpose-created, LabVIEW-based software program that synchronised the respective displays and recorded response times and
Direction of Auditory Pitch-Change Influences Visual Search for Slope From Graphs.
Parrott, Stacey; Guzman-Martinez, Emmanuel; Orte, Laura; Grabowecky, Marcia; Huntington, Mark D; Suzuki, Satoru
2015-01-01
Linear trend (slope) is important information conveyed by graphs. We investigated how sounds influenced slope detection in a visual search paradigm. Four bar graphs or scatter plots were presented on each trial. Participants looked for a positive-slope or a negative-slope target (in blocked trials), and responded to targets in a go or no-go fashion. For example, in a positive-slope-target block, the target graph displayed a positive slope while other graphs displayed negative slopes (a go trial), or all graphs displayed negative slopes (a no-go trial). When an ascending or descending sound was presented concurrently, ascending sounds slowed detection of negative-slope targets whereas descending sounds slowed detection of positive-slope targets. The sounds had no effect when they immediately preceded the visual search displays, suggesting that the results were due to crossmodal interaction rather than priming. The sounds also had no effect when targets were words describing slopes, such as "positive," "negative," "increasing," or "decreasing," suggesting that the results were unlikely due to semantic-level interactions. Manipulations of spatiotemporal similarity between sounds and graphs had little effect. These results suggest that ascending and descending sounds influence visual search for slope based on a general association between the direction of auditory pitch-change and visual linear trend.
Sawada, Reiko; Sato, Wataru; Toichi, Motomi; Fushiki, Tohru
2017-01-01
Rapid detection of food is crucial for the survival of organisms. However, previous visual search studies have reported discrepant results regarding the detection speeds for food vs. non-food items; some experiments showed faster detection of food than non-food, whereas others reported null findings concerning any speed advantage for the detection of food vs. non-food. Moreover, although some previous studies showed that fat content can affect visual attention for food, the effect of fat content on the detection of food remains unclear. To investigate these issues, we measured reaction times (RTs) during a visual search task in which participants with normal weight detected high-fat food (i.e., fast food), low-fat food (i.e., Japanese diet), and non-food (i.e., kitchen utensils) targets within crowds of non-food distractors (i.e., cars). Results showed that RTs for food targets were shorter than those for non-food targets. Moreover, the RTs for high-fat food were shorter than those for low-fat food. These results suggest that food is more rapidly detected than non-food within the environment and that a higher fat content in food facilitates rapid detection. PMID:28690568
NASA Astrophysics Data System (ADS)
Lee, Ai Cheng; Ye, Jian-Shan; Ngin Tan, Swee; Poenar, Daniel P.; Sheu, Fwu-Shan; Kiat Heng, Chew; Meng Lim, Tit
2007-11-01
A novel carbon nanotube (CNT) derived label capable of dramatic signal amplification of nucleic acid detection and direct visual detection of target hybridization has been developed. Highly sensitive colorimetric detection of human acute lymphocytic leukemia (ALL) related oncogene sequences amplified by the novel CNT-based label was demonstrated. Atomic force microscope (AFM) images confirmed that a monolayer of horseradish peroxidase and detection probe molecules was immobilized along the carboxylated CNT carrier. The resulting CNT labels significantly enhanced the nucleic acid assay sensitivity by at least 1000 times compared to that of conventional labels used in enzyme-linked oligosorbent assay (ELOSA). An excellent detection limit of 1 × 10⁻¹² M (60 × 10⁻¹⁸ mol in 60 µl) and a four-order wide dynamic range of target concentration were achieved. Hybridizations using these labels were coupled to a concentration-dependent formation of visible dark aggregates. Targets can thus be detected simply with visual inspection, eliminating the need for expensive and sophisticated detection systems. The approach holds promise for ultrasensitive and low cost visual inspection and colorimetric nucleic acid detection in point-of-care and early disease diagnostic application.
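The stated detection limit follows from concentration times sample volume; a one-line check of the arithmetic reported in the abstract:

```python
def moles_detected(concentration_molar, volume_litres):
    """Amount of target (in moles) present at a given molar concentration
    in a given sample volume: n = C * V."""
    return concentration_molar * volume_litres

# 1 × 10⁻¹² M in a 60 µl (60 × 10⁻⁶ L) sample
amount = moles_detected(1e-12, 60e-6)  # 6 × 10⁻¹⁷ mol, i.e. 60 × 10⁻¹⁸ mol
```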
Measured Visual Motion Sensitivity at Fixed Contrast in the Periphery and Far Periphery
2017-08-01
group Soldier performance. Soldier performance depends on visual detection of enemy personnel and materiel. Vision modeling in IWARS is neither...a highly time-critical and order-dependent activity, these unrealistic characterizations of target detection time and order severely limit the...recognize that MVTs should depend on target contrast, so we selected a target design different from that used in the Monaco et al. (2007) study. Based
NASA Astrophysics Data System (ADS)
Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao
2018-01-01
Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid of histogram of oriented gradient descriptors is used to extract features. After encoding the image with a learned dictionary, the 2D Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, it indicates the plausibility of utilizing visual track data to identify targets.
TargetVue: Visual Analysis of Anomalous User Behaviors in Online Communication Systems.
Cao, Nan; Shi, Conglei; Lin, Sabrina; Lu, Jie; Lin, Yu-Ru; Lin, Ching-Yung
2016-01-01
Users with anomalous behaviors in online communication systems (e.g. email and social media platforms) are potential threats to society. Automated anomaly detection based on advanced machine learning techniques has been developed to combat this issue; challenges remain, though, due to the difficulty of obtaining proper ground truth for model training and evaluation. Therefore, substantial human judgment on the automated analysis results is often required to better adjust the performance of anomaly detection. Unfortunately, techniques that allow users to understand the analysis results more efficiently, to make a confident judgment about anomalies, and to explore data in their context, are still lacking. In this paper, we propose a novel visual analysis system, TargetVue, which detects anomalous users via an unsupervised learning model and visualizes the behaviors of suspicious users in behavior-rich context through novel visualization designs and multiple coordinated contextual views. Particularly, TargetVue incorporates three new ego-centric glyphs to visually summarize a user's behaviors which effectively present the user's communication activities, features, and social interactions. An efficient layout method is proposed to place these glyphs on a triangle grid, which captures similarities among users and facilitates comparisons of behaviors of different users. We demonstrate the power of TargetVue through its application in a social bot detection challenge using Twitter data, a case study based on email records, and an interview with expert users. Our evaluation shows that TargetVue is beneficial to the detection of users with anomalous communication behaviors.
Effects of age and eccentricity on visual target detection.
Gruber, Nicole; Müri, René M; Mosimann, Urs P; Bieri, Rahel; Aeschimann, Andrea; Zito, Giuseppe A; Urwyler, Prabitha; Nyffeler, Thomas; Nef, Tobias
2013-01-01
The aim of this study was to examine the effects of aging and target eccentricity on a visual search task comprising 30 images of everyday life projected into a hemisphere, realizing a ±90° visual field. The task, performed binocularly, allowed participants to freely move their eyes to scan images for an appearing target or distractor stimulus (presented at 10°, 30°, and 50° eccentricity). The distractor stimulus required no response, while the target stimulus required acknowledgment by pressing the response button. One hundred and seventeen healthy subjects (mean age = 49.63 years, SD = 17.40 years, age range 20-78 years) were studied. The results show that target detection performance decreases with age as well as with increasing eccentricity, especially for older subjects. Reaction time also increases with age and eccentricity, but in contrast to target detection, there is no interaction between age and eccentricity. Eye movement analysis showed that younger subjects exhibited a passive search strategy while older subjects exhibited an active search strategy, probably as compensation for their reduced peripheral detection performance.
Perceptual learning improves contrast sensitivity, visual acuity, and foveal crowding in amblyopia.
Barollo, Michele; Contemori, Giulio; Battaglini, Luca; Pavan, Andrea; Casco, Clara
2017-01-01
Amblyopic observers present abnormal spatial interactions between a low-contrast sinusoidal target and high-contrast collinear flankers. It has been demonstrated that perceptual learning (PL) can modulate these low-level lateral interactions, resulting in improved visual acuity and contrast sensitivity. We measured the extent and duration of generalization effects to various spatial tasks (i.e., visual acuity, Vernier acuity, and foveal crowding) through PL on the target's contrast detection. Amblyopic observers were trained on a contrast-detection task for a central target (i.e., a Gabor patch) flanked above and below by two high-contrast Gabor patches. The pre- and post-learning tasks included lateral interactions at different target-to-flankers separations (i.e., 2, 3, 4, 8λ) and included a range of spatial frequencies and stimulus durations as well as visual acuity, Vernier acuity, contrast-sensitivity function, and foveal crowding. The results showed that perceptual training reduced the target's contrast-detection thresholds more for the longest target-to-flanker separation (i.e., 8λ). We also found generalization of PL to different stimuli and tasks: contrast sensitivity for both trained and untrained spatial frequencies, visual acuity for Sloan letters, and foveal crowding, and partially for Vernier acuity. Follow-ups after 5-7 months showed not only complete maintenance of PL effects on visual acuity and contrast sensitivity function but also further improvement in these tasks. These results suggest that PL improves facilitatory lateral interactions in amblyopic observers, which usually extend over larger separations than in typical foveal vision. The improvement in these basic visual spatial operations leads to a more efficient capability of performing spatial tasks involving high levels of visual processing, possibly due to the refinement of bottom-up and top-down networks of visual areas.
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements of following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
NASA Astrophysics Data System (ADS)
Khosla, Deepak; Huber, David J.; Martin, Kevin
2017-05-01
This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing rare event detection via the P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP sequence based on the desired sequence length and expected number of targets and insert the distracters into the sequence; then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
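The first step above, sizing the distracter insertion, reduces to simple arithmetic. This is our own formulation of that step, not the paper's exact equation, and the 10% target ratio is an illustrative placeholder:

```python
def distracters_needed(n_targets, n_nontargets, target_ratio=0.1):
    """How many distracter images must be inserted so that targets make up
    at most `target_ratio` of the final RSVP sequence, keeping rare events
    rare enough to evoke a strong P300."""
    total_needed = n_targets / target_ratio      # sequence length at the ideal ratio
    current = n_targets + n_nontargets           # images already in the sequence
    return max(0, int(round(total_needed - current)))
```

For example, 5 expected targets among 20 non-targets would need 25 inserted distracters to bring the target rate down to 10%; a sequence already below that rate needs none.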
Bullets versus burgers: is it threat or relevance that captures attention?
de Oca, Beatrice M; Black, Alison A
2013-01-01
Previous studies have found that potentially dangerous stimuli are better at capturing attention than neutral stimuli, a finding sometimes called the threat superiority effect. However, non-threatening stimuli also capture attention in many studies of visual attention. In Experiment 1, the relevance superiority effect was tested with a visual search task comparing detection times for threatening stimuli (guns), pleasant but motivationally relevant stimuli (food), and neutral stimuli (flowers and chairs). Gun targets were detected more rapidly than both types of neutral targets, whereas food targets were detected more quickly than the neutral chair targets only. Guns were detected more rapidly than food. In Experiment 2, threatening targets (guns and snakes), pleasant but motivationally relevant targets (money and food), and neutral targets (trees and couches) were all presented with the same neutral distractors (cactus and pots) in order to control for the valence of the distractor stimulus across the three categories of target stimuli. Threatening and pleasant target categories facilitated attention relative to neutral targets. The results support the view that both threatening and pleasant pictures can be detected more rapidly than neutral targets.
Target detection in insects: optical, neural and behavioral optimizations.
Gonzalez-Bellido, Paloma T; Fabian, Samuel T; Nordström, Karin
2016-12-01
Motion vision provides important cues for many tasks. Flying insects, for example, may pursue small, fast moving targets for mating or feeding purposes, even when these are detected against self-generated optic flow. Since insects are small, with size-constrained eyes and brains, they have evolved to optimize their optical, neural and behavioral target visualization solutions. Indeed, even if evolutionarily distant insects display different pursuit strategies, target neuron physiology is strikingly similar. Furthermore, the coarse spatial resolution of the insect compound eye might actually be beneficial when it comes to detection of moving targets. In conclusion, tiny insects show higher than expected performance in target visualization tasks. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Detection of Emotional Faces: Salient Physical Features Guide Effective Visual Search
ERIC Educational Resources Information Center
Calvo, Manuel G.; Nummenmaa, Lauri
2008-01-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent,…
High or Low Target Prevalence Increases the Dual-Target Cost in Visual Search
ERIC Educational Resources Information Center
Menneer, Tamaryn; Donnelly, Nick; Godwin, Hayward J.; Cave, Kyle R.
2010-01-01
Previous studies have demonstrated a dual-target cost in visual search. In the current study, the relationship between search for one and search for two targets was investigated to examine the effects of target prevalence and practice. Color-shape conjunction stimuli were used with response time, accuracy and signal detection measures. Performance…
Proposed New Vision Standards for the 1980’s and Beyond: Contrast Sensitivity
1981-09-01
spatial frequency, visual acuity, target acquisition, visual filters, spatial filtering, target detection, recognition, identification, eye charts, workload...visual standards, as well as other performance criteria, are required to be shown relevant to "real-world" performance before acceptance. On the surface
Infrared dim and small target detecting and tracking method inspired by Human Visual System
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Shen, Lurong; Bai, Shengjian
2014-01-01
Detecting and tracking dim and small targets in infrared images and videos is one of the most important techniques in many computer vision applications, such as video surveillance and infrared imaging precise guidance. Recently, more and more algorithms based on the Human Visual System (HVS) have been proposed to detect and track infrared dim and small targets. In general, HVS concerns at least three mechanisms: the contrast mechanism, visual attention, and eye movement. However, most existing algorithms simulate only a single one of the HVS mechanisms, resulting in many drawbacks. A novel method which combines the three mechanisms of HVS is proposed in this paper. First, a group of Difference of Gaussians (DOG) filters which simulate the contrast mechanism are used to filter the input image. Second, visual attention, simulated by a Gaussian window, is added at a point near the target (named the attention point) in order to further enhance the dim small target. Finally, the Proportional-Integral-Derivative (PID) algorithm is introduced to predict the attention point of the next frame, simulating human eye movement. Experimental results on infrared images with different types of backgrounds demonstrate the high efficiency and accuracy of the proposed method in detecting and tracking dim and small targets.
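Two of the three mechanisms above translate directly into code. The sketch below shows a DOG kernel (contrast mechanism) and a per-axis PID tracker for the attention point (eye movement); kernel sizes, sigmas, and PID gains are illustrative assumptions, not the paper's values:

```python
import numpy as np

def dog_filter(size=15, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians kernel: a narrow Gaussian minus a wide one,
    which responds strongly to small bright blobs (the contrast mechanism)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    def g(s):
        return np.exp(-(xs**2 + ys**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma1) - g(sigma2)

class PIDPredictor:
    """One-axis PID controller that tracks the attention point from frame
    to frame; run one instance per image axis."""
    def __init__(self, kp=0.5, ki=0.1, kd=0.2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0
        self.estimate = None
    def update(self, measured):
        if self.estimate is None:      # first frame: no history yet
            self.estimate = measured
            return self.estimate
        error = measured - self.estimate
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        self.estimate += self.kp * error + self.ki * self.integral + self.kd * derivative
        return self.estimate
```

Convolving each frame with the DOG kernel enhances blob-like targets, and the PID estimate of the attention point seeds the Gaussian attention window in the next frame.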
Detection of visual events along the apparent motion trace in patients with paranoid schizophrenia.
Sanders, Lia Lira Olivier; Muckli, Lars; de Millas, Walter; Lautenschlager, Marion; Heinz, Andreas; Kathmann, Norbert; Sterzer, Philipp
2012-07-30
Dysfunctional prediction in sensory processing has been suggested as a possible causal mechanism in the development of delusions in patients with schizophrenia. Previous studies in healthy subjects have shown that while the perception of apparent motion can mask visual events along the illusory motion trace, such motion masking is reduced when events are spatio-temporally compatible with the illusion, and, therefore, predictable. Here we tested the hypothesis that this specific detection advantage for predictable target stimuli on the apparent motion trace is reduced in patients with paranoid schizophrenia. Our data show that, although target detection along the illusory motion trace is generally impaired, both patients and healthy control participants detect predictable targets more often than unpredictable targets. Patients had a stronger motion masking effect when compared to controls. However, patients showed the same advantage in the detection of predictable targets as healthy control subjects. Our findings reveal stronger motion masking but intact prediction of visual events along the apparent motion trace in patients with paranoid schizophrenia and suggest that the sensory prediction mechanism underlying apparent motion is not impaired in paranoid schizophrenia. Copyright © 2012. Published by Elsevier Ireland Ltd.
Mannion, Damien J; Donkin, Chris; Whitford, Thomas J
2017-01-01
We investigated the relationship between psychometrically-defined schizotypy and the ability to detect a visual target pattern. Target detection is typically impaired by a surrounding pattern (context) with an orientation that is parallel to the target, relative to a surrounding pattern with an orientation that is orthogonal to the target (orientation-dependent contextual modulation). Based on reports that this effect is reduced in those with schizophrenia, we hypothesised that there would be a negative relationship between the relative score on psychometrically-defined schizotypy and the relative effect of orientation-dependent contextual modulation. We measured visual contrast detection thresholds and scores on the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE) from a non-clinical sample (N = 100). Contrary to our hypothesis, we find an absence of a monotonic relationship between the relative magnitude of orientation-dependent contextual modulation of visual contrast detection and the relative score on any of the subscales of the O-LIFE. The apparent difference of this result with previous reports on those with schizophrenia suggests that orientation-dependent contextual modulation may be an informative condition in which schizophrenia and psychometrically-defined schizotypy are dissociated. However, further research is also required to clarify the strength of orientation-dependent contextual modulation in those with schizophrenia.
A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.
Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent
2007-07-20
Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
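The "surprise" measure used above is Bayesian surprise: the KL divergence between an observer's prior and posterior beliefs after seeing a new image. A minimal sketch for a univariate Gaussian belief over a local feature value (the full model is multiscale and multi-feature; this is only the core update, with an assumed known observation variance):

```python
import math

def gaussian_surprise(prior_mu, prior_var, obs, obs_var=1.0):
    """Bayesian surprise for a Gaussian belief: update the prior with one
    observation and return KL(posterior || prior) in nats."""
    # Conjugate Gaussian update (observation variance assumed known)
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mu = post_var * (prior_mu / prior_var + obs / obs_var)
    # KL divergence between two univariate Gaussians
    return (math.log(math.sqrt(prior_var / post_var))
            + (post_var + (post_mu - prior_mu) ** 2) / (2 * prior_var)
            - 0.5)
```

An observation far from the prior mean moves the belief a long way and yields high surprise; reordering a sequence so that high-surprise frames flank the target is exactly the manipulation that impaired detection in the second experiment.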
Insect Detection of Small Targets Moving in Visual Clutter
Barnett, Paul D; O'Carroll, David C
2006-01-01
Detection of targets that move within visual clutter is a common task for animals searching for prey or conspecifics, a task made even more difficult when a moving pursuer needs to analyze targets against the motion of background texture (clutter). Despite the limited optical acuity of the compound eye of insects, this challenging task seems to have been solved by their tiny visual system. Here we describe neurons found in the male hoverfly, Eristalis tenax, that respond selectively to small moving targets. Although many of these target neurons are inhibited by the motion of a background pattern, others respond to target motion within the receptive field under a surprisingly large range of background motion stimuli. Some neurons respond whether or not there is a speed differential between target and background. Analysis of responses to very small targets (smaller than the size of the visual field of single photoreceptors) or those targets with reduced contrast shows that these neurons have extraordinarily high contrast sensitivity. Our data suggest that rejection of background motion may result from extreme selectivity for small targets contrasting against local patches of the background, combined with this high sensitivity, such that background patterns rarely contain features that satisfactorily drive the neuron. PMID:16448249
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
Yokoi, Isao; Komatsu, Hidehiko
2010-09-01
Visual grouping of discrete elements is an important function for object recognition. We recently conducted an experiment to study neural correlates of visual grouping. We recorded neuronal activities while monkeys performed a grouping detection task in which they discriminated visual patterns composed of discrete dots arranged in a cross and detected targets in which dots with the same contrast were aligned horizontally or vertically. We found that some neurons in the lateral bank of the intraparietal sulcus exhibit activity related to visual grouping. In the present study, we analyzed how different types of neurons contribute to visual grouping. We classified the recorded neurons as putative pyramidal neurons or putative interneurons, depending on the duration of their action potentials. We found that putative pyramidal neurons exhibited selectivity for the orientation of the target, and this selectivity was enhanced by attention to a particular target orientation. By contrast, putative interneurons responded more strongly to the target stimuli than to the nontargets, regardless of the orientation of the target. These results suggest that different classes of parietal neurons contribute differently to the grouping of discrete elements.
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2013-02-01
A common search paradigm requires observers to search for a target among undivided spatial arrays of many items. Yet our visual environment is populated with items that are typically arranged within smaller (subdivided) spatial areas outlined by dividers (e.g., frames). It remains unclear how dividers impact visual search performance. In this study, we manipulated the presence and absence of frames and the number of frames subdividing search displays. Observers searched for a target O among Cs, a typically inefficient search task, and for a target C among Os, a typically efficient search. The results indicated that the presence of divider frames in a search display initially interferes with visual search tasks when targets are quickly detected (i.e., efficient search), leading to early interference; conversely, frames later facilitate visual search in tasks in which targets take longer to detect (i.e., inefficient search), leading to late facilitation. Such interference and facilitation appear only for conditions with a specific number of frames. Relative to previous studies of grouping (due to item proximity or similarity), these findings suggest that frame enclosures of multiple items may induce a grouping effect that influences search performance.
Nummenmaa, Lauri; Hietanen, Jari K.; Calvo, Manuel G.; Hyönä, Jukka
2011-01-01
An organism's survival depends crucially on its ability to detect and acquire nutriment. Attention circuits interact with cognitive and motivational systems to facilitate detection of salient sensory events in the environment. Here we show that the human attentional system is tuned to detect food targets among nonfood items. In two visual search experiments participants searched for discrepant food targets embedded in an array of nonfood distracters or vice versa. Detection times were faster when targets were food rather than nonfood items, and the detection advantage for food items showed a significant negative correlation with Body Mass Index (BMI). Also, eye tracking during searching within arrays of visually homogenous food and nonfood targets demonstrated that the BMI-contingent attentional bias was due to rapid capturing of the eyes by food items in individuals with low BMI. However, BMI was not associated with decision times after the discrepant food item was fixated. The results suggest that visual attention is biased towards foods, and that individual differences in energy consumption - as indexed by BMI - are associated with differential attentional effects related to foods. We speculate that such differences may constitute an important risk factor for gaining weight. PMID:21603657
[Eccentricity-dependent influence of amodal completion on visual search].
Shirama, Aya; Ishiguchi, Akira
2009-06-01
Does amodal completion occur homogeneously across the visual field? Rensink and Enns (1998) found that visual search for efficiently-detected fragments became inefficient when observers perceived the fragments as a partially-occluded version of a distractor due to a rapid completion process. We examined the effect of target eccentricity in Rensink and Enns's tasks and a few additional tasks by magnifying the stimuli in the peripheral visual field to compensate for the loss of spatial resolution (M-scaling; Rovamo & Virsu, 1979). We found that amodal completion disrupted the efficient search for the salient fragments (i.e., target) even when the target was presented at high eccentricity (within 17 deg). In addition, the configuration effect of the fragments, which produced amodal completion, increased with eccentricity while the same target was detected efficiently at the lowest eccentricity. This eccentricity effect is different from a previously-reported eccentricity effect where M-scaling was effective (Carrasco & Frieder, 1997). These findings indicate that the visual system has a basis for rapid completion across the visual field, but the stimulus representations constructed through amodal completion have eccentricity-dependent properties.
Detecting and Remembering Simultaneous Pictures in a Rapid Serial Visual Presentation
ERIC Educational Resources Information Center
Potter, Mary C.; Fox, Laura F.
2009-01-01
Viewers can easily spot a target picture in a rapid serial visual presentation (RSVP), but can they do so if more than 1 picture is presented simultaneously? Up to 4 pictures were presented on each RSVP frame, for 240 to 720 ms/frame. In a detection task, the target was verbally specified before each trial (e.g., "man with violin"); in a…
Visual-Vestibular Conflict Detection Depends on Fixation.
Garzorz, Isabelle T; MacNeilage, Paul R
2017-09-25
Visual and vestibular signals are the primary sources of sensory information for self-motion. Conflict among these signals can be seriously debilitating, resulting in vertigo [1], inappropriate postural responses [2], and motion, simulator, or cyber sickness [3-8]. Despite this significance, the mechanisms mediating conflict detection are poorly understood. Here we model conflict detection simply as crossmodal discrimination with benchmark performance limited by variabilities of the signals being compared. In a series of psychophysical experiments conducted in a virtual reality motion simulator, we measure these variabilities and assess conflict detection relative to this benchmark. We also examine the impact of eye movements on visual-vestibular conflict detection. In one condition, observers fixate a point that is stationary in the simulated visual environment by rotating the eyes opposite head rotation, thereby nulling retinal image motion. In another condition, eye movement is artificially minimized via fixation of a head-fixed fixation point, thereby maximizing retinal image motion. Visual-vestibular integration performance is also measured, similar to previous studies [9-12]. We observe that there is a tradeoff between integration and conflict detection that is mediated by eye movements. Minimizing eye movements by fixating a head-fixed target leads to optimal integration but highly impaired conflict detection. Minimizing retinal motion by fixating a scene-fixed target improves conflict detection at the cost of impaired integration performance. The common tendency to fixate scene-fixed targets during self-motion [13] may indicate that conflict detection is typically a higher priority than the increase in precision of self-motion estimation that is obtained through integration. Copyright © 2017 Elsevier Ltd. All rights reserved.
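The benchmark described above, which models conflict detection as crossmodal discrimination limited by the variabilities of the signals being compared, can be illustrated with the standard cue-combination formulas used in this literature. This is an illustrative sketch; the function names and noise values are hypothetical and not taken from the study.

```python
import math

def integration_sd(sigma_vis, sigma_vest):
    # Optimal (maximum-likelihood) integration: the fused self-motion
    # estimate has lower variance than either cue alone.
    var = (sigma_vis**2 * sigma_vest**2) / (sigma_vis**2 + sigma_vest**2)
    return math.sqrt(var)

def discrimination_sd(sigma_vis, sigma_vest):
    # Benchmark for conflict detection: discriminating the two signals
    # is limited by the sum of their variances, so it is necessarily
    # coarser than either unimodal estimate.
    return math.sqrt(sigma_vis**2 + sigma_vest**2)

# Hypothetical unimodal noise levels (e.g., deg/s of simulated rotation)
sv, su = 2.0, 2.0
print(integration_sd(sv, su))     # fused estimate: more precise than either cue
print(discrimination_sd(sv, su))  # conflict detection: less precise than either cue
```

The opposing direction of the two formulas captures the tradeoff the study reports: conditions that sharpen integration necessarily blunt conflict detection, and vice versa.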
Image visualization of hyperspectral spectrum for LWIR
NASA Astrophysics Data System (ADS)
Chong, Eugene; Jeong, Young-Su; Lee, Jai-Hoon; Park, Dong Jo; Kim, Ju Hyun
2015-07-01
The image visualization of a real-time hyperspectral spectrum in the long-wave infrared (LWIR) range of 900-1450 cm-1 by a color-matching function is addressed. It is well known that the absorption spectra of the main toxic industrial chemical (TIC) and chemical warfare agent (CWA) clouds are detected in this spectral region. Furthermore, significant spectral peaks due to various background species and unknown targets are also present. However, these are often dismissed as noise, which limits the utility of the data. Herein, we applied a color-matching function that uses the information from hyperspectral data emitted by the materials and surfaces of artificial or natural backgrounds in the LWIR region. This information was used to classify and differentiate the background signals from the targeted substances, and the results were visualized as image data without additional visual equipment. The tristimulus-value-based visualization can quickly identify background species and targets in real-time detection in the LWIR.
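The tristimulus mapping described above, in which each pixel's LWIR spectrum is projected onto three color-matching functions to obtain a displayable color, can be sketched as follows. This is an illustrative sketch only: the triangular matching functions, band count, and function names are assumptions, not the paper's actual data.

```python
def tristimulus(spectrum, matching_fns, d_nu=1.0):
    # Project a per-pixel emission spectrum onto three color-matching
    # functions (numerical integration by Riemann sum over the bands).
    return [sum(s * m for s, m in zip(spectrum, mf)) * d_nu
            for mf in matching_fns]

def normalize(xyz):
    # Convert tristimulus values to chromaticity coordinates summing to 1.
    total = sum(xyz)
    return [v / total for v in xyz] if total else [0.0, 0.0, 0.0]

# Toy example: 5 spectral bands and three hypothetical matching functions
# peaking at short, middle, and long wavenumbers.
x_bar = [1.0, 0.5, 0.0, 0.0, 0.0]
y_bar = [0.0, 0.5, 1.0, 0.5, 0.0]
z_bar = [0.0, 0.0, 0.0, 0.5, 1.0]

spectrum = [0.2, 0.4, 0.9, 0.4, 0.1]  # a hypothetical target emission peak
xyz = tristimulus(spectrum, [x_bar, y_bar, z_bar])
print(normalize(xyz))
```

Because the mid-band matching function dominates for this spectrum, the pixel would render in the "middle" primary, visually separating it from background species with flatter spectra.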
Traffic Signs in Complex Visual Environments
DOT National Transportation Integrated Search
1982-11-01
The effects of sign luminance on detection and recognition of traffic control devices is mediated through contrast with the immediate surround. Additionally, complex visual scenes are known to degrade visual performance with targets well above visual...
Gutierrez, Eduardo de A; Pessoa, Valdir F; Aguiar, Ludmilla M S; Pessoa, Daniel M A
2014-11-01
Bats are known for their well-developed echolocation. However, several experiments focused on the bat visual system have shown evidence of the importance of visual cues under specific luminosity for different aspects of bat biology, including foraging behavior. This study examined the foraging abilities of five female great fruit-eating bats, Artibeus lituratus, under different light intensities. Animals were given a series of tasks to test for discrimination between a food target against an inedible background, under light levels similar to twilight illumination (18 lx), the full moon (2 lx) and complete darkness (0 lx). We found that the bats required a longer time frame to detect targets under a light intensity similar to twilight, possibly due to inhibitory effects present under a more intense light level. Additionally, bats were more efficient at detecting and capturing targets under light conditions similar to the luminosity of a full moon, suggesting that visual cues were important for target discrimination. These results demonstrate that light intensity affects foraging behavior and enables the use of visual cues for food detection in frugivorous bats. This article is part of a Special Issue entitled: Neotropical Behaviour. Copyright © 2014 Elsevier B.V. All rights reserved.
Target-responsive DNAzyme cross-linked hydrogel for visual quantitative detection of lead.
Huang, Yishun; Ma, Yanli; Chen, Yahong; Wu, Xuemeng; Fang, Luting; Zhu, Zhi; Yang, Chaoyong James
2014-11-18
Because of the severe health risks associated with lead pollution, rapid, sensitive, and portable detection of low levels of Pb(2+) in biological and environmental samples is of great importance. In this work, a Pb(2+)-responsive hydrogel was prepared using a DNAzyme and its substrate as cross-linker for rapid, sensitive, portable, and quantitative detection of Pb(2+). Gold nanoparticles (AuNPs) were first encapsulated in the hydrogel as an indicator for colorimetric analysis. In the absence of lead, the DNAzyme is inactive, and the substrate cross-linker maintains the hydrogel in the gel form. In contrast, the presence of lead activates the DNAzyme to cleave the substrate, decreasing the cross-linking density of the hydrogel and resulting in dissolution of the hydrogel and release of AuNPs for visual detection. As low as 10 nM Pb(2+) can be detected by the naked eye. Furthermore, to realize quantitative visual detection, a volumetric bar-chart chip (V-chip) was used for quantitative readout of the hydrogel system by replacing AuNPs with gold-platinum core-shell nanoparticles (Au@PtNPs). The Au@PtNPs released from the hydrogel upon target activation can efficiently catalyze the decomposition of H2O2 to generate a large volume of O2. The gas pressure moves an ink bar in the V-chip for portable visual quantitative detection of lead with a detection limit less than 5 nM. The device was able to detect lead in digested blood with excellent accuracy. The method developed can be used for portable lead quantitation in many applications. Furthermore, the method can be further extended to portable visual quantitative detection of a variety of targets by replacing the lead-responsive DNAzyme with other DNAzymes.
Air-To-Air Visual Target Acquisition Pilot Interview Survey.
1979-01-01
'top' 5 pilots in air-to-air visual target acquisition in your squadron," would/could you do it? yes no Comment: 2. Is the term "acquisition" as meaningful as "spotting" and "seeing" in the context of visually detecting a "bogey" or another aircraft? yes no Comment: 3. Would/could you rank all squadron pilots on the basis of their visual target acquisition capability? yes no Comment: 4. Is there a minimum number of observations required for
Attending to unrelated targets boosts short-term memory for color arrays.
Makovski, Tal; Swallow, Khena M; Jiang, Yuhong V
2011-05-01
Detecting a target typically impairs performance in a second, unrelated task. It has recently been reported, however, that detecting a target in a stream of distractors can enhance long-term memory of faces and scenes that were presented concurrently with the target (the attentional boost effect). In this study we ask whether target detection also enhances performance in a visual short-term memory task, where capacity limits are severe. Participants performed two tasks at once: a one-shot color change detection task and a letter-detection task. In Experiment 1, a central letter appeared at the same time as 3 or 5 color patches (memory display). Participants encoded the colors and pressed the spacebar if the letter was a T (target). After a short retention interval, a probe display of color patches appeared. Performance on the change detection task was enhanced when a target, rather than a distractor, appeared with the memory display. This effect was not modulated by memory load or the frequency of trials in which a target appeared. However, there was no enhancement when the target appeared at the same time as the probe display (Experiment 2a) or during the memory retention interval (Experiment 2b). Together these results suggest that detecting a target facilitates the encoding of unrelated information into visual short-term memory. Copyright © 2010 Elsevier Ltd. All rights reserved.
Nakashima, Ryoichi; Watanabe, Chisaki; Maeda, Eriko; Yoshikawa, Takeharu; Matsuda, Izuru; Miki, Soichiro; Yokosawa, Kazuhiko
2015-09-01
How does domain-specific knowledge influence the experts' performance in their domain of expertise? Specifically, can visual search experts find, with uniform efficiency, any type of target in their domain of expertise? We examined whether acquired knowledge of target importance influences an expert's visual search performance. In some professional searches (e.g., medical screenings), certain targets are rare; one aim of this study was to examine the extent to which experts miss such targets in their searches. In one experiment, radiologists (medical experts) engaged in a medical lesion search task in which both the importance (i.e., seriousness/gravity) and the prevalence of targets varied. Results showed decreased target detection rates in the low prevalence conditions (i.e., the prevalence effect). Also, experts were better at detecting important (versus unimportant) lesions. Results of an experiment using novices ruled out the possibility that decreased performance with unimportant targets was due to low target noticeability/visibility. Overall, the findings suggest that radiologists do not have a generalized ability to detect any type of lesion; instead, they have acquired a specialized ability to detect only those important lesions relevant for effective medical practices.
Picture Detection in Rapid Serial Visual Presentation: Features or Identity?
ERIC Educational Resources Information Center
Potter, Mary C.; Wyble, Brad; Pandav, Rijuta; Olejarczyk, Jennifer
2010-01-01
A pictured object can be readily detected in a rapid serial visual presentation sequence when the target is specified by a superordinate category name such as "animal" or "vehicle". Are category features the initial basis for detection, with identification of the specific object occurring in a second stage (Evans &…
Cholinergic Modulation of Frontoparietal Cortical Network Dynamics Supporting Supramodal Attention.
Ljubojevic, Vladimir; Luu, Paul; Gill, Patrick Robert; Beckett, Lee-Anne; Takehara-Nishiuchi, Kaori; De Rosa, Eve
2018-04-18
A critical function of attention is to support a state of readiness to enhance stimulus detection, independent of stimulus modality. The nucleus basalis magnocellularis (NBM) is the major source of the neurochemical acetylcholine (ACh) for frontoparietal cortical networks thought to support attention. We examined a potential supramodal role of ACh in a frontoparietal cortical attentional network supporting target detection. We recorded local field potentials (LFPs) in the prelimbic frontal cortex (PFC) and the posterior parietal cortex (PPC) to assess whether ACh contributed to a state of readiness to alert rats to an impending presentation of visual or olfactory targets in one of five locations. Twenty male Long-Evans rats underwent training and then lesions of the NBM using the selective cholinergic immunotoxin 192 IgG-saporin (0.3 μg/μl; ACh-NBM-lesion) to reduce cholinergic afferentation of the cortical mantle. Postsurgery, ACh-NBM-lesioned rats had fewer correct responses and more omissions than sham-lesioned rats, which changed parametrically as we increased the attentional demands of the task with decreased target duration. This parametric deficit was found equally for both sensory targets. Accurate detection of visual and olfactory targets was associated specifically with increased LFP coherence, in the beta range, between the PFC and PPC, and with increased beta power in the PPC before the target's appearance in sham-lesioned rats. Readiness-associated changes in brain activity and visual and olfactory target detection were attenuated in the ACh-NBM-lesioned group. Accordingly, ACh may support supramodal attention via modulating activity in a frontoparietal cortical network, orchestrating a state of readiness to enhance target detection. SIGNIFICANCE STATEMENT We examined whether the neurochemical acetylcholine (ACh) contributes to a state of readiness for target detection, by engaging frontoparietal cortical attentional networks independent of modality.
We show that ACh supported alerting attention to an impending presentation of either visual or olfactory targets. Using local field potentials, enhanced stimulus detection was associated with an anticipatory increase in power in the beta oscillation range before the target's appearance within the posterior parietal cortex (PPC) as well as increased synchrony, also in beta, between the prefrontal cortex and PPC. These readiness-associated changes in brain activity and behavior were attenuated in rats with reduced cortical ACh. Thus, ACh may act, in a supramodal manner, to prepare frontoparietal cortical attentional networks for target detection. Copyright © 2018 the authors.
Chanes, Lorena; Chica, Ana B.; Quentin, Romain; Valero-Cabré, Antoni
2012-01-01
The right Frontal Eye Field (FEF) is a region of the human brain, which has been consistently involved in visuo-spatial attention and access to consciousness. Nonetheless, the extent of this cortical site’s ability to influence specific aspects of visual performance remains debated. We hereby manipulated pre-target activity on the right FEF and explored its influence on the detection and categorization of low-contrast near-threshold visual stimuli. Our data show that pre-target frontal neurostimulation has the potential when used alone to induce enhancements of conscious visual detection. More interestingly, when FEF stimulation was combined with visuo-spatial cues, improvements remained present only for trials in which the cue correctly predicted the location of the subsequent target. Our data provide evidence for the causal role of the right FEF pre-target activity in the modulation of human conscious vision and reveal the dependence of such neurostimulatory effects on the state of activity set up by cue validity in the dorsal attentional orienting network. PMID:22615759
Asymmetries in visual search for conjunctive targets.
Cohen, A
1993-08-01
Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that search rate of individual features cannot predict search rate for conjunctive targets. These results demonstrate the need for 2 levels of representations: one of features and one of conjunction of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.
Colour and spatial cueing in low-prevalence visual search.
Russell, Nicholas C C; Kunar, Melina A
2012-01-01
In visual search, 30-40% of targets with a prevalence rate of 2% are missed, compared to 7% of targets with a prevalence rate of 50% (Wolfe, Horowitz, & Kenner, 2005). This "low-prevalence" (LP) effect is thought to occur as participants are making motor errors, changing their response criteria, and/or quitting their search too soon. We investigate whether colour and spatial cues, known to improve visual search when the target has a high prevalence (HP), benefit search when the target is rare. Experiments 1 and 2 showed that although knowledge of the target's colour reduces miss errors overall, it does not eliminate the LP effect as more targets were missed at LP than at HP. Furthermore, detection of a rare target is significantly impaired if it appears in an unexpected colour-more so than if the prevalence of the target is high (Experiment 2). Experiment 3 showed that, if a rare target is exogenously cued, target detection is improved but still impaired relative to high-prevalence conditions. Furthermore, if the cue is absent or invalid, the percentage of missed targets increases. Participants were given the option to correct motor errors in all three experiments, which reduced but did not eliminate the LP effect. The results suggest that although valid colour and spatial cues improve target detection, participants still miss more targets at LP than at HP. Furthermore, invalid cues at LP are very costly in terms of miss errors. We discuss our findings in relation to current theories and applications of LP search.
Location cue validity affects inhibition of return of visual processing.
Wright, R D; Richard, C M
2000-01-01
Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.
Nagy, Helga; Bencsik, Krisztina; Rajda, Cecília; Benedek, Krisztina; Janáky, Márta; Beniczky, Sándor; Kéri, Szabolcs; Vécsei, László
2007-06-01
Visual impairment is a common feature of multiple sclerosis. The aim of this study was to investigate lateral interactions in the visual cortex of highly functioning patients with multiple sclerosis and to compare them with basic visual and neuropsychologic functions. Twenty-two young, visually unimpaired multiple sclerosis patients with minimal symptoms (Expanded Disability Status Scale <2) and 30 healthy control subjects participated in the study. Lateral interactions were investigated with the flanker task, during which participants were asked to detect the orientation of a low-contrast Gabor patch (vertical or horizontal), flanked with 2 collinear or orthogonal Gabor patches. Stimulus exposure time was 40, 60, 80, and 100 ms. Digit span forward/backward, digit symbol, verbal fluency, and California Verbal Learning Test procedures were used for background neuropsychologic assessment. Results revealed that patients with multiple sclerosis showed intact visual contrast sensitivity and neuropsychologic functions, whereas orientation detection in the orthogonal condition was significantly impaired. At 40-ms exposure time, collinear flankers facilitated the orientation detection performance of the patients resulting in normal performance. In conclusion, the detection of briefly presented, low-contrast visual stimuli was selectively impaired in multiple sclerosis. Lateral interactions between target and flankers robustly facilitated target detection in the patient group.
The Role of Motor Learning in Spatial Adaptation near a Tool
Brown, Liana E.; Doole, Robert; Malfait, Nicole
2011-01-01
Some visual-tactile (bimodal) cells have visual receptive fields (vRFs) that overlap and extend moderately beyond the skin of the hand. Neurophysiological evidence suggests, however, that a vRF will grow to encompass a hand-held tool following active tool use but not after passive holding. Why does active tool use, and not passive holding, lead to spatial adaptation near a tool? We asked whether spatial adaptation could be the result of motor or visual experience with the tool, and we distinguished between these alternatives by isolating motor from visual experience with the tool. Participants learned to use a novel, weighted tool. The active training group received both motor and visual experience with the tool, the passive training group received visual experience with the tool, but no motor experience, and finally, a no-training control group received neither visual nor motor experience using the tool. After training, we used a cueing paradigm to measure how quickly participants detected targets, varying whether the tool was placed near or far from the target display. Only the active training group detected targets more quickly when the tool was placed near, rather than far, from the target display. This effect of tool location was not present for either the passive-training or control groups. These results suggest that motor learning influences how visual space around the tool is represented. PMID:22174944
ERIC Educational Resources Information Center
Berger, Carole; Valdois, Sylviane; Lallier, Marie; Donnadieu, Sophie
2015-01-01
The present study explored the temporal allocation of attention in groups of 8-year-old children, 10-year-old children, and adults performing a rapid serial visual presentation task. In a dual-condition task, participants had to detect a briefly presented target (T2) after identifying an initial target (T1) embedded in a random series of…
Wallace, Deanna L.
2017-01-01
The neuromodulator acetylcholine modulates spatial integration in visual cortex by altering the balance of inputs that generate neuronal receptive fields. These cholinergic effects may provide a neurobiological mechanism underlying the modulation of visual representations by visual spatial attention. However, the consequences of cholinergic enhancement on visuospatial perception in humans are unknown. We conducted two experiments to test whether enhancing cholinergic signaling selectively alters perceptual measures of visuospatial interactions in human subjects. In Experiment 1, a double-blind placebo-controlled pharmacology study, we measured how flanking distractors influenced detection of a small contrast decrement of a peripheral target, as a function of target-flanker distance. We found that cholinergic enhancement with the cholinesterase inhibitor donepezil improved target detection, and modeling suggested that this was mainly due to a narrowing of the extent of facilitatory perceptual spatial interactions. In Experiment 2, we tested whether these effects were selective to the cholinergic system or would also be observed following enhancements of related neuromodulators dopamine or norepinephrine. Unlike cholinergic enhancement, dopamine (bromocriptine) and norepinephrine (guanfacine) manipulations did not improve performance or systematically alter the spatial profile of perceptual interactions between targets and distractors. These findings reveal mechanisms by which cholinergic signaling influences visual spatial interactions in perception and improves processing of a visual target among distractors, effects that are notably similar to those of spatial selective attention. SIGNIFICANCE STATEMENT Acetylcholine influences how visual cortical neurons integrate signals across space, perhaps providing a neurobiological mechanism for the effects of visual selective attention. 
However, the influence of cholinergic enhancement on visuospatial perception remains unknown. Here we demonstrate that cholinergic enhancement improves detection of a target flanked by distractors, consistent with sharpened visuospatial perceptual representations. Furthermore, whereas most pharmacological studies focus on a single neurotransmitter, many neuromodulators can have related effects on cognition and perception. Thus, we also demonstrate that enhancing noradrenergic and dopaminergic systems does not systematically improve visuospatial perception or alter its tuning. Our results link visuospatial tuning effects of acetylcholine at the neuronal and perceptual levels and provide insights into the connection between cholinergic signaling and visual attention. PMID:28336568
Integrating visual learning within a model-based ATR system
NASA Astrophysics Data System (ADS)
Carlotto, Mark; Nebrich, Mark
2017-05-01
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Liao, Min-Ju; Granada, Stacie
2003-01-01
This study investigated visual search performance for target aircraft symbols on a Cockpit Display of Traffic Information (CDTI). Of primary interest was the influence of target brightness (intensity) and highlighting validity (search directions) on the ability to detect a target aircraft among distractor aircraft. Target aircraft were distinguished by an airspace course that conflicted with Ownship (that is, the participant's aircraft). The display could present all (homogeneous) bright aircraft, all (homogeneous) dim aircraft, or mixed bright and dim aircraft, with the target aircraft being either bright or dim. In the mixed intensity condition, participants may or may not have been instructed whether the target was bright or dim. Results indicated that valid highlighting led to faster detection times. However, rather than bright targets being detected faster, dim targets were detected more slowly in the mixed intensity display than in the homogeneous display. This relative slowness may be due to a delay in confirming that a dim aircraft was a target when it was among brighter distractor aircraft. This hypothesis will be tested in future research. Funding for this work was provided by the Advanced Air Transportation Technologies Project of NASA's Airspace Operation Systems Program.
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
Quétard, Boris; Quinton, Jean-Charles; Colomb, Michèle; Pezzulo, Giovanni; Barca, Laura; Izaute, Marie; Appadoo, Owen Kevin; Mermillod, Martial
2015-09-01
Detecting a pedestrian while driving in the fog is one situation where the prior expectation about the target presence is integrated with the noisy visual input. We focus on how these sources of information influence the oculomotor behavior and are integrated within an underlying decision-making process. The participants had to judge whether high-/low-density fog scenes displayed on a computer screen contained a pedestrian or a deer by executing a mouse movement toward the response button (mouse-tracking). A variable road sign was added on the scene to manipulate expectations about target identity. We then analyzed the timing and amplitude of the deviation of mouse trajectories toward the incorrect response and, using an eye tracker, the detection time (before fixating the target) and the identification time (fixations on the target). Results revealed that expectation of the correct target results in earlier decisions with less deviation toward the alternative response, this effect being partially explained by the facilitation of target identification.
The Effects of Spatial Endogenous Pre-cueing across Eccentricities
Feng, Jing; Spence, Ian
2017-01-01
Frequently, we use expectations about likely locations of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, examination of pre-cueing effects on attention, particularly endogenous pre-cueing effects, has been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, it remains unclear how endogenous pre-cues that carry spatial information about targets influence the allocation of attention across a large visual field (especially in the more peripheral areas). We present two experiments examining how the expectation of the target's location shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors in the visual field after the presentation of a highly valid cue indicating either the size of the area in which the target was likely to occur or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing when trials with the same size cue were blocked rather than mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction; this pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on endogenous pre-cueing effects across a large visual area are summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field. PMID:28638353
Encoding of Target Detection during Visual Search by Single Neurons in the Human Brain.
Wang, Shuo; Mamelak, Adam N; Adolphs, Ralph; Rutishauser, Ueli
2018-06-08
Neurons in the primate medial temporal lobe (MTL) respond selectively to visual categories such as faces, contributing to how the brain represents stimulus meaning. However, it remains unknown whether MTL neurons continue to encode stimulus meaning when it changes flexibly as a function of variable task demands imposed by goal-directed behavior. While classically associated with long-term memory, recent lesion and neuroimaging studies show that the MTL also contributes critically to the online guidance of goal-directed behaviors such as visual search. Do such tasks modulate responses of neurons in the MTL, and if so, do their responses mirror bottom-up input from visual cortices or do they reflect more abstract goal-directed properties? To answer these questions, we performed concurrent recordings of eye movements and single neurons in the MTL and medial frontal cortex (MFC) in human neurosurgical patients performing a memory-guided visual search task. We identified a distinct population of target-selective neurons in both the MTL and MFC whose response signaled whether the currently fixated stimulus was a target or distractor. This target-selective response was invariant to visual category and predicted whether a target was detected or missed behaviorally during a given fixation. The response latencies, relative to fixation onset, of MFC target-selective neurons preceded those in the MTL by ∼200 ms, suggesting a frontal origin for the target signal. The human MTL thus represents not only fixed stimulus identity, but also task-specified stimulus relevance due to top-down goal relevance.
Evaluation of camouflage effectiveness using hyperspectral images
NASA Astrophysics Data System (ADS)
Zavvartorbati, Ahmad; Dehghani, Hamid; Rashidi, Ali Jabar
2017-10-01
Recent advances in camouflage engineering have made it more difficult to detect targets. Assessing the effectiveness of camouflage against different target detection methods identifies the strengths and weaknesses of camouflage designs. One such detection method is to analyze the content of the scene using remote sensing hyperspectral images. Evaluating camouflage designs requires comprehensive and efficient evaluation criteria. Three parameters were considered as the main factors affecting target detection, and based on these factors, camouflage effectiveness assessment criteria were proposed. To combine the criteria into a single equation, the equation used in visual target search models was employed, and a model based on the structure of computational visual attention systems was presented for determining the criteria. In software implementations on a HyMap hyperspectral image, a variety of camouflage levels were created for real targets in the image. Assessing these camouflage levels with the proposed criteria, then comparing and analyzing the results, shows that the provided criteria and model are effective for evaluating camouflage designs using hyperspectral images.
2004-11-01
affords exciting opportunities in target detection. The input signal may be a sum of sine waves, an auditory signal, or possibly a visual rendering of a scene. Image processing is an area in which the original data are stationary in some sense (auditory signals suffer from …). Recoverable section headings from this fragment: "Example 1 of SR - Identification of a Subliminal Signal below a Threshold"; "Example 2 of SR".
Vos, Leia; Whitman, Douglas
2014-01-01
A considerable literature suggests that the right hemisphere is dominant in vigilance for novel and survival-related stimuli, such as predators, across a wide range of species. In contrast to vigilance for change, change blindness is a failure to detect obvious changes in a visual scene when they are obscured by a disruption in scene presentation. We studied lateralised change detection using a series of scenes with salient changes in either the left or right visual fields. In Study 1 left visual field changes were detected more rapidly than right visual field changes, confirming a right hemisphere advantage for change detection. Increasing stimulus difficulty resulted in greater right visual field detections and left hemisphere detection was more likely when change occurred in the right visual field on a prior trial. In Study 2 an intervening distractor task disrupted the influence of prior trials. Again, faster detection speeds were observed for the left visual field changes with a shift to a right visual field advantage with increasing time-to-detection. This suggests that a right hemisphere role for vigilance, or catching attention, and a left hemisphere role for target evaluation, or maintaining attention, is present at the earliest stage of change detection.
Control system of hexacopter using color histogram footprint and convolutional neural network
NASA Astrophysics Data System (ADS)
Ruliputra, R. N.; Darma, S.
2017-07-01
The development of unmanned aerial vehicles (UAVs) has been growing rapidly in recent years. Logic thinking implemented in the program algorithms is needed to make a smart system. Using visual input from a camera, a UAV is able to fly autonomously by detecting a target. However, outdoor use poses a challenge, as the environment can change the target's color intensity. A color histogram footprint overcomes this problem because it divides color intensity into separate bins, making detection tolerant to slight changes in color intensity. Template matching compares the detection result with a template of the reference image to determine the target position, which is used to position the vehicle in the middle of the target with visual feedback control based on a Proportional-Integral-Derivative (PID) controller. The color histogram footprint method localizes the target by calculating the back projection of its histogram. It has an average success rate of 77% from a distance of 1 meter, and can position the vehicle in the middle of the target using visual feedback control with an average positioning time of 73 seconds. Once the hexacopter is centered on the target, a Convolutional Neural Network (CNN) classifies a number contained in the target image to select a task depending on the classified number: landing, yawing, or return to launch. The recognition result shows an optimum success rate of 99.2%.
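The localization step described above can be sketched with plain NumPy. This is a minimal, single-channel illustration of histogram back projection; the real system works on color histograms from a camera feed, and the bin count and toy image values here are assumptions for illustration only:

```python
import numpy as np

def histogram_backprojection(image, target_hist, bins=16, value_range=(0, 256)):
    """Score each pixel by how likely its value is under the target histogram."""
    # Normalise the target histogram to a probability distribution.
    probs = target_hist / target_hist.sum()
    # Map every pixel value to its histogram bin.
    edges = np.linspace(*value_range, bins + 1)
    bin_idx = np.clip(np.digitize(image, edges) - 1, 0, bins - 1)
    # Back projection: each pixel takes the probability of its bin.
    return probs[bin_idx]

# Build a reference histogram from a template patch, then localise the target
# as the peak of the back projection (toy grayscale data).
template = np.full((8, 8), 200, dtype=np.uint8)
hist, _ = np.histogram(template, bins=16, range=(0, 256))
scene = np.zeros((32, 32), dtype=np.uint8)
scene[10:18, 20:28] = 200  # bright patch plays the role of the target
bp = histogram_backprojection(scene, hist)
peak = np.unravel_index(np.argmax(bp), bp.shape)
print(peak)  # a pixel inside the bright patch
```

In the full pipeline, the back-projection peak would supply the target's pixel position as the error input to the PID visual feedback controller.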
Visual gravitational motion and the vestibular system in humans
Lacquaniti, Francesco; Bosco, Gianfranco; Indovina, Iole; La Scaleia, Barbara; Maffei, Vincenzo; Moscatelli, Alessandro; Zago, Myrka
2013-01-01
The visual system is poorly sensitive to arbitrary accelerations, but accurately detects the effects of gravity on a target motion. Here we review behavioral and neuroimaging data about the neural mechanisms for dealing with object motion and egomotion under gravity. The results from several experiments show that the visual estimates of a target motion under gravity depend on the combination of a prior of gravity effects with on-line visual signals on target position and velocity. These estimates are affected by vestibular inputs, and are encoded in a visual-vestibular network whose core regions lie within or around the Sylvian fissure, and are represented by the posterior insula/retroinsula/temporo-parietal junction. This network responds both to target motions coherent with gravity and to vestibular caloric stimulation in human fMRI studies. Transient inactivation of the temporo-parietal junction selectively disrupts the interception of targets accelerated by gravity. PMID:24421761
Distance-based microfluidic quantitative detection methods for point-of-care testing.
Tian, Tian; Li, Jiuxing; Song, Yanling; Zhou, Leiji; Zhu, Zhi; Yang, Chaoyong James
2016-04-07
Equipment-free devices with quantitative readout are of great significance to point-of-care testing (POCT), which provides real-time readout to users and is especially important in low-resource settings. Among various equipment-free approaches, distance-based visual quantitative detection methods rely on reading the visual signal length for corresponding target concentrations, thus eliminating the need for sophisticated instruments. The distance-based methods are low-cost, user-friendly and can be integrated into portable analytical devices. Moreover, such methods enable quantitative detection of various targets by the naked eye. In this review, we first introduce the concept and history of distance-based visual quantitative detection methods. Then, we summarize the main methods for translation of molecular signals to distance-based readout and discuss different microfluidic platforms (glass, PDMS, paper and thread) in terms of applications in biomedical diagnostics, food safety monitoring, and environmental analysis. Finally, the potential and future perspectives are discussed.
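The essence of a distance-based readout is a calibration curve mapping the visible signal length to target concentration, which the user reads with a ruler or printed scale. A minimal sketch with made-up calibration points (units and values are illustrative assumptions, not from the review):

```python
import numpy as np

# Hypothetical calibration: visual bar length (mm) measured for known
# target concentrations (nM) on a distance-based microfluidic device.
calib_conc = np.array([0.0, 10.0, 50.0, 100.0, 500.0])   # nM
calib_len = np.array([0.0, 2.0, 8.0, 14.0, 40.0])        # mm

def length_to_concentration(length_mm):
    """Interpolate an unknown sample's concentration from its signal length."""
    return float(np.interp(length_mm, calib_len, calib_conc))

# An 11 mm bar falls between the 50 and 100 nM calibration points.
print(length_to_concentration(11.0))
```

Piecewise-linear interpolation mirrors how a printed ruler scale is read; a real device would fit the calibration curve to replicate measurements of known standards.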
Ultrafast scene detection and recognition with limited visual information
Hagmann, Carl Erick; Potter, Mary C.
2016-01-01
Humans can detect target color pictures of scenes depicting concepts like picnic or harbor in sequences of six or twelve pictures presented as briefly as 13 ms, even when the target is named after the sequence (Potter, Wyble, Hagmann, & McCourt, 2014). Such rapid detection suggests that feedforward processing alone enabled detection without recurrent cortical feedback. There is debate about whether coarse, global, low spatial frequencies (LSFs) provide predictive information to high cortical levels through the rapid magnocellular (M) projection of the visual path, enabling top-down prediction of possible object identities. To test the “Fast M” hypothesis, we compared detection of a named target across five stimulus conditions: unaltered color, blurred color, grayscale, thresholded monochrome, and LSF pictures. The pictures were presented for 13–80 ms in six-picture rapid serial visual presentation (RSVP) sequences. Blurred, monochrome, and LSF pictures were detected less accurately than normal color or grayscale pictures. When the target was named before the sequence, all picture types except LSF resulted in above-chance detection at all durations. Crucially, when the name was given only after the sequence, performance dropped and the monochrome and LSF pictures (but not the blurred pictures) were at or near chance. Thus, without advance information, monochrome and LSF pictures were rarely understood. The results offer only limited support for the Fast M hypothesis, suggesting instead that feedforward processing is able to activate conceptual representations without complementary reentrant processing. PMID:28255263
Attentional and Perceptual Factors Affecting the Attentional Blink for Faces and Objects
ERIC Educational Resources Information Center
Landau, Ayelet N.; Bentin, Shlomo
2008-01-01
When 2 different visual targets presented among different distracters in a rapid serial visual presentation (RSVP) are separated by 400 ms or less, detection and identification of the 2nd target are reduced relative to longer time intervals. This phenomenon, termed the "attentional blink" (AB), is attributed to the temporary engagement…
A new measure for the assessment of visual awareness in individuals with tunnel vision.
AlSaqr, Ali M; Dickinson, Chris M
2017-01-01
Individuals with a restricted peripheral visual field, or tunnel vision (TV), have problems moving about and avoiding obstacles. Some individuals adapt better than others and some use assistive optical aids, so measurement of the visual field alone is not sufficient to describe their performance. In the present study, we developed a new clinical test, the 'Assessment of Visual Awareness' (AVA), which can be used to measure detection of peripheral targets. The participants were 20 patients with TV due to retinitis pigmentosa (PTV) and 50 normally sighted participants with simulated tunnel vision (STV) using goggles. In the AVA test, detection times were measured while subjects searched for 24 individually presented, one-degree targets randomly positioned in a 60-degree noise background. Head and eye movements were allowed and the presentation time was unlimited. Test validity was investigated by correlating the detection times with the 'percentage of preferred walking speed' (PPWS) and the number of collisions on an indoor mobility course. In both PTV and STV, detection times correlated significantly and negatively with the field of view, and significantly and positively with target location. In the STV group, detection time was significantly negatively correlated with the PPWS and significantly positively correlated with the collision score on the indoor mobility course; in the PTV group, the relationship was not statistically significant. No significant difference in STV performance was found when the test was repeated one to two weeks later. The proposed AVA test was sensitive to the field of view and target location. The test is unique in design, quick, simple to deliver, and both repeatable and valid. It could be a valuable tool for testing different rehabilitation strategies in patients with TV.
Stimulus-dependent modulation of visual neglect in a touch-screen cancellation task.
Keller, Ingo; Volkening, Katharina; Garbacenkaite, Ruta
2015-05-01
Patients with left-sided neglect frequently show omissions and repetitive behavior on cancellation tests. Using a touch-screen-based cancellation task, we tested how visual feedback and distracters influence the number of omissions and perseverations. Eighteen patients with left-sided visual neglect and 18 healthy controls performed four different cancellation tasks on an iPad touch screen: no feedback (the display did not change during the task), visual feedback (touched targets changed their color from black to green), visual feedback with distracters (20 distracters were evenly embedded in the display; detected targets changed their color from black to green), and vanishing targets (touched targets disappeared from the screen). In all conditions except vanishing targets, neglect patients had significantly more omissions and perseverations than healthy controls. The two conditions providing feedback by changing the target color showed the highest numbers of omissions, whereas erasure of targets nearly eliminated omissions. The highest rate of perseverations was observed in the no-feedback condition; the implementation of distracters led to a moderate number of perseverations, and visual feedback without distracters and vanishing targets abolished perseverations almost completely. Visual feedback and the presence of distracters thus aggravated hemispatial neglect. This finding is compatible with impaired disengagement from the ipsilesional side as an important factor in visual neglect. The improvement of cancellation behavior with vanishing targets could have therapeutic implications.
Inhibitory control differentiates rare target search performance in children.
Li, Hongting; Chan, John S Y; Cheung, Sui-Yin; Yan, Jin H
2012-02-01
Age-related differences in rare-target search are primarily explained by the speed-accuracy trade-off, primed responses, or decision making. The goal was to examine how motor inhibition influences visual search. Children pressed a key when a rare target was detected. On no-target trials, children withheld reactions. Response time (RT), hits, misses, correct rejection, and false alarms were measured. Tapping tests assessed motor control. Older children tapped faster, were more sensitive to rare targets (higher d'), and reacted more slowly than younger ones. Girls outperformed boys in search sensitivity but not in RT. Motor speed was closely associated with hit rate and RT. Results suggest that development of inhibitory control plays a key role in visual detection. The potential implications for cognitive-motor development and individual differences are discussed.
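The sensitivity index d' reported above is computed from hit and false-alarm rates. A minimal sketch using the standard log-linear correction for extreme rates (the correction choice and the example trial counts are illustrative assumptions, not the study's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    # Log-linear correction keeps rates away from 0 and 1 so z() stays finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# A child who detects 18 of 20 rare targets with 2 false alarms on 80
# no-target trials (counts invented for illustration):
print(round(d_prime(18, 2, 2, 78), 2))
```

Higher d' reflects better discrimination of rare targets from no-target trials independently of response bias, which is why it separates older from younger children here even when raw reaction times move in the opposite direction.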
Impaired search for orientation but not color in hemi-spatial neglect.
Wilkinson, David; Ko, Philip; Milberg, William; McGlinchey, Regina
2008-01-01
Patients with hemi-spatial neglect have trouble finding targets defined by a conjunction of visual features. The problem is widely believed to stem from a high-level deficit in attentional deployment, which in turn has led to disagreement over whether the detection of basic features is also disrupted. If one assumes that the detection of salient visual features can be based on the output of spared 'preattentive' processes (Treisman and Gelade, 1980), then feature detection should remain intact. However, if one assumes that all forms of detection require at least a modicum of focused attention (Duncan and Humphreys, 1992), then all forms of search will be disrupted to some degree. Here we measured the detection of feature targets that were defined by either a unique color or orientation. Comparable detection rates were observed in non-neglected space, which indicated that both forms of search placed similar demands on attention. For either of the above accounts to be true, the two targets should therefore be detected with equal efficiency in the neglected field. We found that while the detection rate for color was normal in four of our five patients, all showed an increased reaction time and/or error rate for orientation. This result points to a selective deficit in orientation discrimination, and implies that neglect disrupts specific feature representations. That is, the effects of neglect on visual search are not only attentional but also perceptual.
The effect of saccade metrics on the corollary discharge contribution to perceived eye location
Bansal, Sonia; Jayet Bray, Laurence C.; Peterson, Matthew S.
2015-01-01
Corollary discharge (CD) is hypothesized to provide the movement information (direction and amplitude) required to compensate for the saccade-induced disruptions to visual input. Here, we investigated to what extent these conveyed metrics influence perceptual stability in human subjects with a target-displacement detection task. Subjects made saccades to targets located at different amplitudes (4°, 6°, or 8°) and directions (horizontal or vertical). During the saccade, the target disappeared and then reappeared at a shifted location either in the same direction or opposite to the movement vector. Subjects reported the target displacement direction, and from these reports we determined the perceptual threshold for shift detection and estimate of target location. Our results indicate that the thresholds for all amplitudes and directions generally scaled with saccade amplitude. Additionally, subjects on average produced hypometric saccades with an estimated CD gain <1. Finally, we examined the contribution of different error signals to perceptual performance, the saccade error (movement-to-movement variability in saccade amplitude) and visual error (distance between the fovea and the shifted target location). Perceptual judgment was not influenced by the fluctuations in movement amplitude, and performance was largely the same across movement directions for different magnitudes of visual error. Importantly, subjects reported the correct direction of target displacement above chance level for very small visual errors (<0.75°), even when these errors were opposite the target-shift direction. Collectively, these results suggest that the CD-based compensatory mechanisms for visual disruptions are highly accurate and comparable for saccades with different metrics. PMID:25761955
Experimental system for measurement of radiologists' performance by visual search task.
Maeda, Eriko; Yoshikawa, Takeharu; Nakashima, Ryoichi; Kobayashi, Kazufumi; Yokosawa, Kazuhiko; Hayashi, Naoto; Masutani, Yoshitaka; Yoshioka, Naoki; Akahane, Masaaki; Ohtomo, Kuni
2013-01-01
Detection performance of radiologists for "obvious" targets should be evaluated with a visual search task rather than ROC analysis, but visual search tasks have not previously been applied in radiology studies. The aim of this study was to set up an environment that allows visual search tasks in radiology, to evaluate its feasibility, and to preliminarily investigate the effect of career stage on performance. In a darkroom, ten radiologists were asked to indicate the type of lesion by pressing buttons while images containing no lesion, a bulla, a ground-glass nodule, or a solid nodule were randomly presented on a display. Differences in accuracy and reaction times depending on board certification were investigated. The visual search task was performed successfully and proved feasible. Radiologists showed high sensitivity, specificity, and positive and negative predictive values in both the non-board and board groups. Reaction time was under 1 second for all target types in both groups. Board-certified radiologists answered significantly faster for bulla, but there were no significant differences for the other targets or measures. We developed an experimental system that allows visual search experiments in radiology. Reaction time for detection of bulla was shortened with experience.
Visual detection of nucleic acids based on Mie scattering and the magnetophoretic effect.
Zhao, Zichen; Chen, Shan; Ho, John Kin Lim; Chieng, Ching-Chang; Chen, Ting-Hsuan
2015-12-07
Visual detection of nucleic acid biomarkers is a simple and convenient approach to point-of-care applications. However, issues of sensitivity and the handling of complex bio-fluids have posed challenges. Here we report a visual method for detecting nucleic acids using Mie scattering of polystyrene microparticles and the magnetophoretic effect. Magnetic microparticles (MMPs) and polystyrene microparticles (PMPs) were surface-functionalised with oligonucleotide probes, which can hybridise with target oligonucleotides in juxtaposition and lead to the formation of MMPs-targets-PMPs sandwich structures. Using an externally applied magnetic field, the magnetophoretic effect attracts the sandwich structure to the sidewall, which depletes the suspended PMPs and changes the light transmission via Mie scattering. Based on the high extinction coefficient of Mie scattering (∼3 orders of magnitude greater than that of commonly used gold nanoparticles), our results showed a limit of detection of 4 pM using a UV-Vis spectrometer or 10 pM by direct visual inspection. Meanwhile, we also demonstrated that this method is compatible with multiplex assays and detection in complex bio-fluids, such as whole blood or a pool of nucleic acids, without prior purification. With a simplified operation procedure, low instrumentation requirement, high sensitivity and compatibility with complex bio-fluids, this method provides an ideal solution for visual detection of nucleic acids in resource-limited settings.
Visual pop-out in barn owls: Human-like behavior in the avian brain.
Orlowski, Julius; Beissel, Christian; Rohn, Friederike; Adato, Yair; Wagner, Hermann; Ben-Shahar, Ohad
2015-01-01
Visual pop-out is a phenomenon by which the latency to detect a target in a scene is independent of the number of other elements, the distractors. Pop-out is an effective form of visual-search guidance that occurs typically when the target is distinct in one feature from the distractors, thus facilitating fast detection of predators or prey. However, apart from studies on primates, pop-out has been examined in few species and demonstrated thus far only in rats, archer fish, and pigeons. To fill this gap, here we study pop-out in barn owls. These birds are a unique model system for such exploration because their lack of eye movements dictates visual behavior dominated by head movements. Head saccades and interspersed fixation periods can therefore be tracked and analyzed with a head-mounted wireless microcamera--the OwlCam. Using this methodology we confronted two owls with scenes containing search arrays of one target among varying numbers (15-63) of similar-looking distractors. We tested targets distinct either by orientation (Experiment 1) or luminance contrast (Experiment 2). Search time and the number of saccades until the target was fixated remained largely independent of the number of distractors in both experiments. This suggests that barn owls can exhibit pop-out during visual search, thus expanding the group of species and brain structures that can cope with this fundamental visual behavior. The utility of our automatic analysis method for other species and scientific questions is further discussed.
Dangerous animals capture and maintain attention in humans.
Yorzinski, Jessica L; Penkunas, Michael J; Platt, Michael L; Coss, Richard G
2014-05-28
Predation is a major source of natural selection on primates and may have shaped attentional processes that allow primates to rapidly detect dangerous animals. Because ancestral humans were subjected to predation, a process that continues at very low frequencies, we examined the visual processes by which men and women detect dangerous animals (snakes and lions). We recorded the eye movements of participants as they detected images of a dangerous animal (target) among arrays of nondangerous animals (distractors) as well as detected images of a nondangerous animal (target) among arrays of dangerous animals (distractors). We found that participants were quicker to locate targets when the targets were dangerous animals compared with nondangerous animals, even when spatial frequency and luminance were controlled. The participants were slower to locate nondangerous targets because they spent more time looking at dangerous distractors, a process known as delayed disengagement, and looked at a larger number of dangerous distractors. These results indicate that dangerous animals capture and maintain attention in humans, suggesting that historical predation has shaped some facets of visual orienting and its underlying neural architecture in modern humans.
MacKay, Donald G; James, Lori E
2009-10-01
Two experiments compared the visual cognition performance of amnesic H.M. and memory-normal controls matched for age, background, intelligence, and education. In Experiment 1 H.M. exhibited deficits relative to the controls in detecting "erroneous objects" in complex visual scenes--for example, a bird flying inside a fishbowl. In Experiment 2 H.M. exhibited deficits relative to the controls in standard Hidden-Figure tasks when detecting unfamiliar targets but not when detecting familiar targets--for example, circles, squares, and right-angle triangles. H.M.'s visual cognition deficits were not due to his well-known problems in explicit learning and recall, inability to comprehend or remember the instructions, general slowness, motoric difficulties, low motivation, low IQ relative to the controls, or working-memory limitations. Parallels between H.M.'s selective deficits in visual cognition, language, and memory are discussed. These parallels contradict the standard "systems theory" account of H.M.'s condition but comport with the hypothesis that H.M. has difficulty representing unfamiliar but not familiar information in visual cognition, language, and memory. Implications of our results are discussed for binding theory and the ongoing debate over what counts as "memory" versus "not-memory."
Xu, Deshun; Wu, Xiaofang; Han, Jiankang; Chen, Liping; Ji, Lei; Yan, Wei; Shen, Yuehua
2015-12-01
Vibrio parahaemolyticus is a marine seafood-borne pathogen that causes gastrointestinal disorders in humans. In this study, we developed a cross-priming amplification (CPA) assay coupled with vertical flow (VF) visualization for rapid and sensitive detection of V. parahaemolyticus. This assay correctly detected all target strains (n = 13) and none of the non-target strains (n = 27). Concentrations of V. parahaemolyticus as low as 1.8 CFU/mL for pure cultures and 18 CFU/g for reconstituted samples were detected within 1 h. CPA-VF can be applied at a large scale and can be used to detect V. parahaemolyticus strains rapidly in seafood and environmental samples, being especially useful in the field. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Johnson, Walter W.; Liao, Min-Ju; Tse, Stephen
2003-01-01
The present experiment employed target detection tasks to investigate attentional deployment during visual search for target aircraft symbols on a cockpit display of traffic information (CDTI). Targets were defined by either a geometric property (aircraft on a collision course with Ownship) or a textual property (aircraft with associated altitude tags indicating an even altitude level). Effects of target location and target brightness (highlighting) were examined. Target location was systematically related to target detection time, and this interacted with the target's defining property (collision geometry or associated text). Highlighting (which was not linked to whether an aircraft symbol was the target) did not influence target detection time.
Visual detection following retinal damage: predictions of an inhomogeneous retino-cortical model
NASA Astrophysics Data System (ADS)
Arnow, Thomas L.; Geisler, Wilson S.
1996-04-01
A model of human visual detection performance has been developed, based on available anatomical and physiological data for the primate visual system. The inhomogeneous retino-cortical (IRC) model computes detection thresholds by comparing simulated neural responses to target patterns with responses to a uniform background of the same luminance. The model incorporates human ganglion cell sampling distributions; macaque monkey ganglion cell receptive field properties; macaque cortical cell contrast nonlinearities; and an optimal decision rule based on ideal observer theory. Spatial receptive field properties of cortical neurons were not included. Two parameters were allowed to vary while minimizing the squared error between predicted and observed thresholds. One parameter was decision efficiency; the other was the relative strength of the ganglion-cell center and surround. The latter was only allowed to vary within a small range consistent with known physiology. Contrast sensitivity was measured for sinewave gratings as a function of spatial frequency, target size, and eccentricity. Contrast sensitivity was also measured for an airplane target as a function of target size, with and without artificial scotomas. The results of these experiments, as well as contrast sensitivity data from the literature, were compared to predictions of the IRC model. Predictions were reasonably good for grating and airplane targets.
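The fitting procedure described here, varying two free parameters to minimize the squared error between predicted and observed thresholds, can be sketched as a simple grid search. The toy prediction function below is purely illustrative and stands in for the full IRC model, which is not reproduced here:

```python
import itertools

def fit_two_params(observed, predict, eff_grid, surround_grid):
    """Grid-search two free parameters (IRC-style: decision efficiency and
    center/surround strength), minimizing the sum of squared errors between
    predicted and observed thresholds."""
    best = None
    for eff, surround in itertools.product(eff_grid, surround_grid):
        sse = sum((predict(x, eff, surround) - y) ** 2 for x, y in observed)
        if best is None or sse < best[0]:
            best = (sse, eff, surround)
    return best

# Hypothetical stand-in for the model: threshold rises with spatial frequency,
# scaled by efficiency and offset by surround strength.
def toy_predict(freq, eff, surround):
    return freq / eff + surround

observed = [(2.0, 2.5), (4.0, 4.5), (8.0, 8.5)]  # (spatial frequency, observed threshold)
sse, eff, surround = fit_two_params(observed, toy_predict,
                                    eff_grid=[0.5, 1.0, 2.0],
                                    surround_grid=[0.0, 0.5, 1.0])
```

A real fit would replace the grid with a continuous optimizer and constrain the surround parameter to the physiologically plausible range the authors mention.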
Ryan, Denise S; Sia, Rose K; Stutzman, Richard D; Pasternak, Joseph F; Howard, Robin S; Howell, Christopher L; Maurer, Tana; Torres, Mark F; Bower, Kraig S
2017-01-01
To compare visual performance, marksmanship performance, and threshold target identification following wavefront-guided (WFG) versus wavefront-optimized (WFO) photorefractive keratectomy (PRK). In this prospective, randomized clinical trial, active duty U.S. military Soldiers, age 21 or over, electing to undergo PRK were randomized to undergo WFG (n = 27) or WFO (n = 27) PRK for myopia or myopic astigmatism. Binocular visual performance was assessed preoperatively and 1, 3, and 6 months postoperatively: Super Vision Test high contrast, Super Vision Test contrast sensitivity (CS), and 25% contrast acuity with night vision goggle filter. CS function was generated testing at five spatial frequencies. Marksmanship performance in low light conditions was evaluated in a firing tunnel. Target detection and identification performance was tested for probability of identification of varying target sets and probability of detection of humans in cluttered environments. Visual performance, CS function, marksmanship, and threshold target identification demonstrated no statistically significant differences over time between the two treatments. Exploratory regression analysis of firing range tasks at 6 months showed no significant differences or correlations between procedures. Regression analysis of vehicle and handheld probability of identification showed a significant association with pretreatment performance. Both WFG and WFO PRK results translate to excellent and comparable visual and military performance. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
Driver landmark and traffic sign identification in early Alzheimer's disease.
Uc, E Y; Rizzo, M; Anderson, S W; Shi, Q; Dawson, J D
2005-06-01
To assess visual search and recognition of roadside targets and safety errors during a landmark and traffic sign identification task in drivers with Alzheimer's disease. 33 drivers with probable Alzheimer's disease of mild severity and 137 neurologically normal older adults underwent a battery of visual and cognitive tests and were asked to report detection of specific landmarks and traffic signs along a segment of an experimental drive. The drivers with mild Alzheimer's disease identified significantly fewer landmarks and traffic signs and made more at-fault safety errors during the task than control subjects. Roadside target identification performance and safety errors were predicted by scores on standardised tests of visual and cognitive function. Drivers with Alzheimer's disease are impaired in a task of visual search and recognition of roadside targets; the demands of these targets on visual perception, attention, executive functions, and memory probably increase the cognitive load, worsening driving safety.
Odours reduce the magnitude of object substitution masking for matching visual targets in females.
Robinson, Amanda K; Laning, Julia; Reinhard, Judith; Mattingley, Jason B
2016-08-01
Recent evidence suggests that olfactory stimuli can influence early stages of visual processing, but there has been little focus on whether such olfactory-visual interactions convey an advantage in visual object identification. Moreover, despite evidence that some aspects of olfactory perception are superior in females than males, no study to date has examined whether olfactory influences on vision are gender-dependent. We asked whether inhalation of familiar odorants can modulate participants' ability to identify briefly flashed images of matching visual objects under conditions of object substitution masking (OSM). Across two experiments, we had male and female participants (N = 36 in each group) identify masked visual images of odour-related objects (e.g., orange, rose, mint) amongst nonodour-related distracters (e.g., box, watch). In each trial, participants inhaled a single odour that either matched or mismatched the masked, odour-related target. Target detection performance was analysed using a signal detection (d') approach. In females, but not males, matching odours significantly reduced OSM relative to mismatching odours, suggesting that familiar odours can enhance the salience of briefly presented visual objects. We conclude that olfactory cues exert a subtle influence on visual processes by transiently enhancing the salience of matching object representations. The results add to a growing body of literature that points towards consistent gender differences in olfactory perception.
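The signal-detection (d') analysis used in the study above separates sensitivity from response bias. A minimal sketch of the standard computation (hit/false-alarm counts below are hypothetical, not the paper's data):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts from a masked-target identification block.
dp = d_prime(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

Comparing d' between matching-odour and mismatching-odour conditions, as done per gender group above, isolates a genuine change in target salience from a mere shift in willingness to respond "present".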
Liu, Rudi; Huang, Yishun; Ma, Yanli; Jia, Shasha; Gao, Mingxuan; Li, Jiuxing; Zhang, Huimin; Xu, Dunming; Wu, Min; Chen, Yan; Zhu, Zhi; Yang, Chaoyong
2015-04-01
A target-responsive aptamer-cross-linked hydrogel was designed and synthesized for portable and visual quantitative detection of the toxin Ochratoxin A (OTA), which occurs in food and beverages. The hydrogel network forms by hybridization between one designed DNA strand containing the OTA aptamer and two complementary DNA strands grafting on linear polyacrylamide chains. Upon the introduction of OTA, the aptamer binds with OTA, leading to the dissociation of the hydrogel, followed by release of the preloaded gold nanoparticles (AuNPs), which can be observed by the naked eye. To enable sensitive visual and quantitative detection, we encapsulated Au@Pt core-shell nanoparticles (Au@PtNPs) in the hydrogel to generate quantitative readout in a volumetric bar-chart chip (V-Chip). In the V-Chip, Au@PtNPs catalyzes the oxidation of H2O2 to generate O2, which induces movement of an ink bar to a concentration-dependent distance for visual quantitative readout. Furthermore, to improve the detection limit in complex real samples, we introduced an immunoaffinity column (IAC) of OTA to enrich OTA from beer. After the enrichment, as low as 1.27 nM (0.51 ppb) OTA can be detected by the V-Chip, which satisfies the test requirement (2.0 ppb) by the European Commission. The integration of a target-responsive hydrogel with portable enrichment by IAC, as well as signal amplification and quantitative readout by a simple microfluidic device, offers a new method for portable detection of food safety hazard toxin OTA.
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas some features are based on simple components (i.e., local features, such as the orientation of line segments), others are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts from extracting local features in the lower visual pathways, followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast set of perception experiments, but it fails to account for a set of experiments showing human visual systems' superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target in the black background, which was different from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detection of the hole suffered interference from the local features of the hole (e.g., white ring with a squared hole).
These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
NASA Astrophysics Data System (ADS)
Kaysheva, A. L.; Pleshakova, T. O.; Kopylov, A. T.; Shumov, I. D.; Iourov, I. Y.; Vorsanova, S. G.; Yurov, Y. B.; Ziborov, V. S.; Archakov, A. I.; Ivanov, Y. D.
2017-10-01
We demonstrate the possibility of detecting target proteins associated with the development of autistic disorders in children using a combined atomic force microscopy and mass spectrometry (AFM/MS) method. The proposed method is based on the combination of affinity enrichment of proteins from biological samples with visualization of these proteins by AFM and MS analysis for quantitative detection of target proteins.
Surprise-Induced Blindness: A Stimulus-Driven Attentional Limit to Conscious Perception
ERIC Educational Resources Information Center
Asplund, Christopher L.; Todd, J. Jay; Snyder, A. P.; Gilbert, Christopher M.; Marois, Rene
2010-01-01
The cost of attending to a visual event can be the failure to consciously detect other events. This processing limitation is well illustrated by the attentional blink paradigm, in which searching for and attending to a target presented in a rapid serial visual presentation stream of distractors can impair one's ability to detect a second target…
Running the figure to the ground: figure-ground segmentation during visual search.
Ralph, Brandon C W; Seli, Paul; Cheng, Vivian O Y; Solman, Grayden J F; Smilek, Daniel
2014-04-01
We examined how figure-ground segmentation occurs across multiple regions of a visual array during a visual search task. Stimuli consisted of arrays of black-and-white figure-ground images in which roughly half of each image depicted a meaningful object, whereas the other half constituted a less meaningful shape. The colours of the meaningful regions of the targets and distractors were either the same (congruent) or different (incongruent). We found that incongruent targets took longer to locate than congruent targets (Experiments 1, 2, and 3) and that this segmentation-congruency effect decreased when the number of search items was reduced (Experiment 2). Furthermore, an analysis of eye movements revealed that participants spent more time scrutinising the target before confirming its identity on incongruent trials than on congruent trials (Experiment 3). These findings suggest that the distractor context influences target segmentation and detection during visual search. Copyright © 2014 Elsevier B.V. All rights reserved.
Image Discrimination Models for Object Detection in Natural Backgrounds
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.
2000-01-01
This paper reviews work accomplished and in progress at NASA Ames relating to visual target detection. The focus is on image discrimination models, starting with Watson's pioneering development of a simple spatial model and progressing through this model's descendents and extensions. The application of image discrimination models to target detection will be described and results reviewed for Rohaly's vehicle target data and the Search 2 data. The paper concludes with a description of work we have done to model the process by which observers learn target templates and methods for elucidating those templates.
Evolution and Optimality of Similar Neural Mechanisms for Perception and Action during Search
Zhang, Sheng; Eckstein, Miguel P.
2010-01-01
A prevailing theory proposes that the brain's two visual pathways, the ventral and dorsal, lead to differing visual processing and world representations for conscious perception than those for action. Others have claimed that perception and action share much of their visual processing. But which of these two neural architectures is favored by evolution? Successful visual search is life-critical and here we investigate the evolution and optimality of neural mechanisms mediating perception and eye movement actions for visual search in natural images. We implement an approximation to the ideal Bayesian searcher with two separate processing streams, one controlling the eye movements and the other stream determining the perceptual search decisions. We virtually evolved the neural mechanisms of the searchers' two separate pathways built from linear combinations of primary visual cortex receptive fields (V1) by making the simulated individuals' probability of survival depend on the perceptual accuracy finding targets in cluttered backgrounds. We find that for a variety of targets, backgrounds, and dependence of target detectability on retinal eccentricity, the mechanisms of the searchers' two processing streams converge to similar representations showing that mismatches in the mechanisms for perception and eye movements lead to suboptimal search. Three exceptions which resulted in partial or no convergence were a case of an organism for which the targets are equally detectable across the retina, an organism with sufficient time to foveate all possible target locations, and a strict two-pathway model with no interconnections and differential pre-filtering based on parvocellular and magnocellular lateral geniculate cell properties. 
Thus, similar neural mechanisms for perception and eye movement actions during search are optimal and should be expected from the effects of natural selection on an organism with limited time to search for food that is not equi-detectable across its retina and interconnected perception and action neural pathways. PMID:20838589
Visualization of hyperspectral imagery
NASA Astrophysics Data System (ADS)
Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander
2007-04-01
We developed four new techniques to visualize hyperspectral image data for man-in-the-loop target detection. The methods, respectively, (1) display the subsequent bands as a movie ("movie"); (2) map the data onto three channels and display these as a colour image ("colour"); (3) display the correlation between the pixel signatures and a known target signature ("match"); and (4) display the output of a standard anomaly detector ("anomaly"). The movie technique requires no assumptions about the target signature and involves no information loss. The colour technique produces a single image that can be displayed in real time; a disadvantage of this technique is loss of information. A display of the match between the target signature and pixel signatures can be interpreted easily and quickly, but this technique relies on precise knowledge of the target signature. The anomaly detector flags pixels with signatures that deviate from the (local) background. We performed a target detection experiment with human observers to determine their relative performance with the four techniques. The results show that the "match" presentation yields the best performance, followed by "movie" and "anomaly", while performance with the "colour" presentation was the poorest. Each scheme has its advantages and disadvantages and is more or less suited to real-time or post-hoc processing. The rationale is that the final interpretation is best done by a human observer. In contrast to automatic target recognition systems, the interpretation of hyperspectral imagery by the human visual system is robust to noise and image transformations and requires a minimal number of assumptions (about the signatures of target and background, target shape, etc.). When more knowledge about target and background is available, it may be used to help the observer interpret the data (aided target detection).
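The "match" technique, correlating each pixel's spectral signature with a known target signature, can be sketched as a cosine-similarity map. This is a minimal illustration of the general idea, not the authors' implementation; the tiny cube and signature below are invented:

```python
import math

def match_map(cube, target):
    """'Match' image for a hyperspectral cube: cosine similarity between each
    pixel's spectrum (innermost list) and a known target signature."""
    t_norm = math.sqrt(sum(t * t for t in target))
    out = []
    for row in cube:
        out_row = []
        for pixel in row:
            dot = sum(p * t for p, t in zip(pixel, target))
            p_norm = math.sqrt(sum(p * p for p in pixel))
            out_row.append(dot / (p_norm * t_norm))
        out.append(out_row)
    return out

# Hypothetical 1x2-pixel 'cube' with 3 spectral bands; the second pixel's
# spectrum matches the target signature exactly.
cube = [[[0.0, 1.0, 0.0], [0.2, 0.4, 0.9]]]
target = [0.2, 0.4, 0.9]
m = match_map(cube, target)
```

Displaying such a map as a grayscale image gives the observer a single easily scanned picture, at the cost of requiring the target signature in advance, exactly the trade-off the abstract describes.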
Prestimulus EEG Power Predicts Conscious Awareness But Not Objective Visual Performance
Veniero, Domenica
2017-01-01
Prestimulus oscillatory neural activity has been linked to perceptual outcomes during performance of psychophysical detection and discrimination tasks. Specifically, the power and phase of low frequency oscillations have been found to predict whether an upcoming weak visual target will be detected or not. However, the mechanisms by which baseline oscillatory activity influences perception remain unclear. Recent studies suggest that the frequently reported negative relationship between α power and stimulus detection may be explained by changes in detection criterion (i.e., increased target-present responses regardless of whether the target was present/absent) driven by the state of neural excitability, rather than changes in visual sensitivity (i.e., more veridical percepts). Here, we recorded EEG while human participants performed a luminance discrimination task on perithreshold stimuli in combination with single-trial ratings of perceptual awareness. Our aim was to investigate whether the power and/or phase of prestimulus oscillatory activity predict discrimination accuracy and/or perceptual awareness on a trial-by-trial basis. Prestimulus power (3–28 Hz) was inversely related to perceptual awareness ratings (i.e., higher ratings in states of low prestimulus power/high excitability) but did not predict discrimination accuracy. In contrast, prestimulus oscillatory phase did not predict awareness ratings or accuracy in any frequency band. These results provide evidence that prestimulus α power influences the level of subjective awareness of threshold visual stimuli but does not influence visual sensitivity when a decision has to be made regarding stimulus features. Hence, we find a clear dissociation between the influence of ongoing neural activity on conscious awareness and objective performance. PMID:29255794
Neurotechnology for intelligence analysts
NASA Astrophysics Data System (ADS)
Kruse, Amy A.; Boyd, Karen C.; Schulman, Joshua J.
2006-05-01
Geospatial Intelligence Analysts are currently faced with an enormous volume of imagery, only a fraction of which can be processed or reviewed in a timely operational manner. Computer-based target detection efforts have failed to yield the speed, flexibility and accuracy of the human visual system. Rather than focus solely on artificial systems, we hypothesize that the human visual system is still the best target detection apparatus currently in use, and with the addition of neuroscience-based measurement capabilities it can surpass the throughput of the unaided human severalfold. Using electroencephalography (EEG), Thorpe et al. described a fast signal in the brain associated with the early detection of targets in static imagery using a Rapid Serial Visual Presentation (RSVP) paradigm. This finding suggests that it may be possible to extract target detection signals from complex imagery in real time utilizing non-invasive neurophysiological assessment tools. To transform this phenomenon into a capability for defense applications, the Defense Advanced Research Projects Agency (DARPA) currently is sponsoring an effort titled Neurotechnology for Intelligence Analysts (NIA). The vision of the NIA program is to revolutionize the way that analysts handle intelligence imagery, increasing both the throughput of imagery to the analyst and overall accuracy of the assessments. Successful development of a neurobiologically-based image triage system will enable image analysts to train more effectively and process imagery with greater speed and precision.
Searching for emotion or race: task-irrelevant facial cues have asymmetrical effects.
Lipp, Ottmar V; Craig, Belinda M; Frost, Mareka J; Terry, Deborah J; Smith, Joanne R
2014-01-01
Facial cues of threat such as anger and other race membership are detected preferentially in visual search tasks. However, it remains unclear whether these facial cues interact in visual search. If both cues equally facilitate search, a symmetrical interaction would be predicted; anger cues should facilitate detection of other race faces and cues of other race membership should facilitate detection of anger. Past research investigating this race by emotional expression interaction in categorisation tasks revealed an asymmetrical interaction. This suggests that cues of other race membership may facilitate the detection of angry faces but not vice versa. Utilising the same stimuli and procedures across two search tasks, participants were asked to search for targets defined by either race or emotional expression. Contrary to the results revealed in the categorisation paradigm, cues of anger facilitated detection of other race faces whereas differences in race did not differentially influence detection of emotion targets.
Qu, Xiaojun; Jin, Haojun; Liu, Yuqian; Sun, Qingjiang
2018-03-06
The combination of microbead array, isothermal amplification, and molecular signaling enables the continuous development of next-generation molecular diagnostic techniques. Herein we reported the implementation of a nicking endonuclease-assisted strand displacement amplification reaction on a quantum dots-encoded microbead (Qbead), and demonstrated its feasibility for multiplexed miRNA assay in real samples. The Qbead featured a well-defined core-shell superstructure, with dual-colored quantum dots loaded in the silica core and shell, respectively, exhibiting remarkably high optical encoding stability. Specially designed stem-loop-structured probes were immobilized onto the Qbead for specific target recognition and amplification. In the presence of a low abundance of miRNA target, the target triggered exponential amplification, producing a large quantity of stem-G-quadruplexes, which could be selectively signaled by a fluorescent G-quadruplex intercalator. In a one-step operation, the Qbead-based isothermal amplification and signaling generated an emissive "core-shell-satellite" superstructure, changing the Qbead emission-color. The target abundance-dependent emission-color changes of the Qbead allowed direct, visual detection of specific miRNA targets. This visualization method achieved a limit of detection at the subfemtomolar level with a linear dynamic range of 4.5 logs, and point-mutation discrimination capability for precise miRNA analyses. The array of three encoded Qbeads could simultaneously quantify three miRNA biomarkers in ∼500 human hepatoma carcinoma cells. With the advancements in ease of operation, multiplexing, and visualization capabilities, the isothermal amplification-on-Qbead assay could potentially enable the development of point-of-care diagnostics.
The wide window of face detection.
Hershler, Orit; Golan, Tal; Bentin, Shlomo; Hochstein, Shaul
2010-08-20
Faces are detected more rapidly than other objects in visual scenes and search arrays, but the cause for this face advantage has been contested. In the present study, we found that under conditions of spatial uncertainty, faces were easier to detect than control targets (dog faces, clocks and cars) even in the absence of surrounding stimuli, making an explanation based only on low-level differences unlikely. This advantage improved with eccentricity in the visual field, enabling face detection in wider visual windows, and pointing to selective sparing of face detection at greater eccentricities. This face advantage might be due to perceptual factors favoring face detection. In addition, the relative face advantage is greater under flanked than non-flanked conditions, suggesting an additional, possibly attention-related benefit enabling face detection in groups of distracters.
Salience from the decision perspective: You know where it is before you know it is there.
Zehetleitner, Michael; Müller, Hermann J
2010-12-31
In visual search for feature contrast ("odd-one-out") singletons, identical manipulations of salience, whether by varying target-distractor similarity or dimensional redundancy of target definition, had smaller effects on reaction times (RTs) for binary localization decisions than for yes/no detection decisions. According to formal models of binary decisions, identical differences in drift rates would yield larger RT differences for slow than for fast decisions. From this principle and the present findings, it follows that decisions on the presence of feature contrast singletons are slower than decisions on their location. This is at variance with two classes of standard models of visual search and object recognition that assume a serial cascade of first detection, then localization and identification of a target object, and is also inconsistent with models assuming that as soon as a target is detected all its properties, spatial as well as non-spatial (e.g., its category), are available immediately. As an alternative, we propose a model of detection and localization tasks based on random walk processes, which can account for the present findings.
Scrambling for anonymous visual communications
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ebrahimi, Touradj
2005-08-01
In this paper, we present a system for anonymous visual communications. The target application is an anonymous video chat. The system identifies faces in the video sequence by means of face detection or skin detection, and the corresponding regions are subsequently scrambled. We investigate several approaches to scrambling, either in the image domain or in the transform domain. Experimental results show the effectiveness of the proposed system.
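The abstract above mentions scrambling in either the image domain or the transform domain. As an illustrative sketch only (the function name and the choice of pseudo-random sign flipping are assumptions, not necessarily the authors' exact scheme), a key-driven transform-domain scrambler can flip the signs of a block's AC coefficients; applying the same key again restores the block:

```python
import numpy as np

def scramble_block(coeffs, key):
    # Hypothetical sketch: pseudo-randomly flip the signs of the AC
    # coefficients of a transformed (e.g., DCT) block. Applying the
    # same key twice restores the original block, so scrambling and
    # descrambling use the identical operation.
    rng = np.random.default_rng(key)
    signs = rng.choice([-1.0, 1.0], size=coeffs.shape)
    signs.flat[0] = 1.0  # leave the DC coefficient untouched
    return coeffs * signs
```

Because only signs change, each coefficient keeps its magnitude, which preserves compressibility while rendering the scrambled face region unrecognizable without the key.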
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
Supèr, Hans; Lamme, Victor A F
2007-06-01
When and where are decisions made? In the visual system a saccade, which is a fast shift of gaze toward a target in the visual scene, is the behavioral outcome of a decision. Current neurophysiological data and reaction time models show that saccadic reaction times are determined by a build-up of activity in motor-related structures, such as the frontal eye fields. These structures depend on the sensory evidence of the stimulus. Here we use a delayed figure-ground detection task to show that late modulated activity in the visual cortex (V1) predicts saccadic reaction time. This predictive activity is part of the process of figure-ground segregation and is specific for the saccade target location. These observations indicate that sensory signals are directly involved in the decision of when and where to look.
Memory for found targets interferes with subsequent performance in multiple-target visual search.
Cain, Matthew S; Mitroff, Stephen R
2013-10-01
Multiple-target visual searches--when more than 1 target can appear in a given search display--are commonplace in radiology, airport security screening, and the military. Whereas 1 target is often found accurately, additional targets are more likely to be missed in multiple-target searches. To better understand this decrement in 2nd-target detection, here we examined 2 potential forms of interference that can arise from finding a 1st target: interference from the perceptual salience of the 1st target (a now highly relevant distractor in a known location) and interference from a newly created memory representation for the 1st target. Here, we found that removing found targets from the display or making them salient and easily segregated color singletons improved subsequent search accuracy. However, replacing found targets with random distractor items did not improve subsequent search accuracy. Removing and highlighting found targets likely reduced both a target's visual salience and its memory load, whereas replacing a target removed its visual salience but not its representation in memory. Collectively, the current experiments suggest that the working memory load of a found target has a larger effect on subsequent search accuracy than does its perceptual salience.
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, separated from it respectively by the vertical visual meridian (VM), the vertical head-centered meridian (HCM) or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cued, and when the target locations were on the opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of HCM were not found. Therefore it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.
Contextual cueing: implicit learning and memory of visual context guides spatial attention.
Chun, M M; Jiang, Y
1998-06-01
Global context plays an important, but poorly understood, role in visual tasks. This study demonstrates that a robust memory for visual context exists to guide spatial attention. Global context was operationalized as the spatial layout of objects in visual search displays. Half of the configurations were repeated across blocks throughout the entire session, and targets appeared within consistent locations in these arrays. Targets appearing in learned configurations were detected more quickly. This newly discovered form of search facilitation is termed contextual cueing. Contextual cueing is driven by incidentally learned associations between spatial configurations (context) and target locations. This benefit was obtained despite chance performance for recognizing the configurations, suggesting that the memory for context was implicit. The results show how implicit learning and memory of visual context can guide spatial attention towards task-relevant aspects of a scene.
Forder, Lewis; Taylor, Olivia; Mankin, Helen; Scott, Ryan B.; Franklin, Anna
2016-01-01
The idea that language can affect how we see the world continues to create controversy. A potentially important study in this field has shown that when an object is suppressed from visual awareness using continuous flash suppression (a form of binocular rivalry), detection of the object is differently affected by a preceding word prime depending on whether the prime matches or does not match the object. This may suggest that language can affect early stages of vision. We replicated this paradigm and further investigated whether colour terms likewise influence the detection of colours or colour-associated object images suppressed from visual awareness by continuous flash suppression. This method presents rapidly changing visual noise to one eye while the target stimulus is presented to the other. It has been shown to delay conscious perception of a target for up to several minutes. In Experiment 1 we presented greyscale photos of objects. They were either preceded by a congruent object label, an incongruent label, or white noise. Detection sensitivity (d’) and hit rates were significantly poorer for suppressed objects preceded by an incongruent label compared to a congruent label or noise. In Experiment 2, targets were coloured discs preceded by a colour term. Detection sensitivity was significantly worse for suppressed colour patches preceded by an incongruent colour term as compared to a congruent term or white noise. In Experiment 3 targets were suppressed greyscale object images preceded by an auditory presentation of a colour term. On congruent trials the colour term matched the object’s stereotypical colour and on incongruent trials the colour term mismatched. Detection sensitivity was significantly poorer on incongruent trials than congruent trials. Overall, these findings suggest that colour terms affect awareness of coloured stimuli and colour-associated objects, and provide new evidence for language-perception interaction in the brain. PMID:27023274
Seeing Objects as Faces Enhances Object Detection.
Takahashi, Kohske; Watanabe, Katsumi
2015-10-01
The face is a special visual stimulus. Both bottom-up processes for low-level facial features and top-down modulation by face expectations contribute to the advantages of face perception. However, it is hard to dissociate the top-down factors from the bottom-up processes, since facial stimuli mandatorily lead to face awareness. In the present study, using the face pareidolia phenomenon, we demonstrated that face awareness, namely seeing an object as a face, enhances object detection performance. In face pareidolia, some people see a visual stimulus, for example, three dots arranged in a V shape, as a face, while others do not. This phenomenon allows us to investigate the effect of face awareness while leaving the stimulus per se unchanged. Participants were asked to detect a face target or a triangle target. While the target per se was identical between the two tasks, detection sensitivity was higher when the participants recognized the target as a face. This was the case irrespective of the stimulus eccentricity or the vertical orientation of the stimulus. These results demonstrate that seeing an object as a face facilitates object detection via top-down modulation. The advantages of face perception are, therefore, at least partly, due to face awareness.
Swallow, Khena M; Jiang, Yuhong V
2010-04-01
Recent work on event perception suggests that perceptual processing increases when events change. An important question is how such changes influence the way other information is processed, particularly during dual-task performance. In this study, participants monitored a long series of distractor items for an occasional target as they simultaneously encoded unrelated background scenes. The appearance of an occasional target could have two opposite effects on the secondary task: It could draw attention away from the second task, or, as a change in the ongoing event, it could improve secondary task performance. Results were consistent with the second possibility. Memory for scenes presented simultaneously with the targets was better than memory for scenes that preceded or followed the targets. This effect was observed when the primary detection task involved visual feature oddball detection, auditory oddball detection, and visual color-shape conjunction detection. It was eliminated when the detection task was omitted, and when it required an arbitrary response mapping. The appearance of occasional, task-relevant events appears to trigger a temporal orienting response that facilitates processing of concurrently attended information (Attentional Boost Effect).
Do People Take Stimulus Correlations into Account in Visual Search (Open Source)
2016-03-10
RESEARCH ARTICLE: Do People Take Stimulus Correlations into Account in Visual Search? Manisha Bhardwaj, Ronald van den Berg, Wei Ji Ma. [Text fragments:] In visual search experiments, distractors are often statistically independent of each other. However, stimuli in more naturalistic settings are often […] contribute to bridging the gap between artificial and natural visual search tasks. Introduction: Visual target detection in displays consisting of multiple […]
Simple summation rule for optimal fixation selection in visual search.
Najemnik, Jiri; Geisler, Wilson S
2009-06-01
When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for the Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher, and produces fixation statistics similar to humans.
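The ELM rule described above admits a compact numerical sketch: the next fixation is the argmax of the current posterior over target location filtered by the squared detectability map. The discretization into a 2-D grid and the function below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def elm_next_fixation(posterior, detect_map):
    # Illustrative sketch of the ELM rule: filter the posterior over
    # target location with the squared retinotopic detectability map,
    # then fixate the maximum of the filtered map.
    h, w = detect_map.shape
    pad_y, pad_x = h // 2, w // 2
    padded = np.pad(posterior, ((pad_y, pad_y), (pad_x, pad_x)))
    kernel = detect_map ** 2
    score = np.zeros_like(posterior, dtype=float)
    for i in range(posterior.shape[0]):
        for j in range(posterior.shape[1]):
            score[i, j] = np.sum(padded[i:i + h, j:j + w] * kernel)
    return np.unravel_index(np.argmax(score), score.shape)
```

The filtering step captures the paper's key intuition: a fixation is valuable in proportion to how much posterior mass falls where the eye can detect well, so a single convolution plus a maximum replaces the full Bayesian ideal computation.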
Perceptual integration of motion and form information: evidence of parallel-continuous processing.
von Mühlenen, A; Müller, H J
2000-04-01
In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).
Optical Molecular Imaging for Diagnosing Intestinal Diseases
Kim, Sang-Yeob
2013-01-01
Real-time visualization of the molecular signature of cells can be achieved with advanced targeted imaging techniques using molecular probes and fluorescence endoscopy. This molecular optical imaging in gastrointestinal endoscopy is promising for improving the detection of neoplastic lesions, their characterization for patient stratification, and the assessment of their response to molecular targeted therapy and radiotherapy. In inflammatory bowel disease, this method can be used to detect dysplasia in the presence of background inflammation and to visualize inflammatory molecular targets for assessing disease severity and prognosis. Several preclinical and clinical trials have applied this method in endoscopy; however, this field has just started to evolve. Hence, many problems have yet to be solved to enable the clinical application of this novel method. PMID:24340254
Maruyama, Fumito; Kenzaka, Takehiko; Yamaguchi, Nobuyasu; Tani, Katsuji; Nasu, Masao
2005-01-01
Rolling circle amplification (RCA) generates large single-stranded and tandem repeats of target DNA as amplicons. This technique was applied to in situ nucleic acid amplification (in situ RCA) to visualize and count single Escherichia coli cells carrying a specific gene sequence. The method features (i) one short target sequence (35 to 39 bp) that allows specific detection; (ii) maintaining constant fluorescent intensity of positive cells permeabilized extensively after amplicon detection by fluorescence in situ hybridization, which facilitates the detection of target bacteria in various physiological states; and (iii) reliable enumeration of target bacteria by concentration on a gelatin-coated membrane filter. To test our approach, the presence of the following genes was visualized by in situ RCA: the green fluorescent protein gene, the ampicillin resistance gene and the replication origin region on multicopy pUC19 plasmid, as well as the single-copy Shiga-like toxin gene on chromosomes inside E. coli cells. Fluorescent antibody staining after in situ RCA also simultaneously identified cells harboring target genes and determined the specificity of in situ RCA. E. coli cells in a nonculturable state from a prolonged incubation were periodically sampled and used for plasmid uptake study. The number of cells taking up plasmids determined by in situ RCA was up to 10⁶-fold higher than that measured by selective plating. In addition, in situ RCA allowed the detection of cells taking up plasmids even when colony-forming cells were not detected during the incubation period. By optimizing the cell permeabilization condition for in situ RCA, this method can become a valuable tool for studying free DNA uptake, especially in nonculturable bacteria. PMID:16332770
Transient Distraction and Attentional Control during a Sustained Selective Attention Task.
Demeter, Elise; Woldorff, Marty G
2016-07-01
Distracting stimuli in the environment can pull our attention away from our goal-directed tasks. fMRI studies have implicated regions in right frontal cortex as being particularly important for processing distractors [e.g., de Fockert, J. W., & Theeuwes, J. Role of frontal cortex in attentional capture by singleton distractors. Brain and Cognition, 80, 367-373, 2012; Demeter, E., Hernandez-Garcia, L., Sarter, M., & Lustig, C. Challenges to attention: A continuous arterial spin labeling (ASL) study of the effects of distraction on sustained attention. Neuroimage, 54, 1518-1529, 2011]. Less is known, however, about the timing and sequence of how right frontal or other brain regions respond selectively to distractors and how distractors impinge upon the cascade of processes related to detecting and processing behaviorally relevant target stimuli. Here we used EEG and ERPs to investigate the neural consequences of a perceptually salient but task-irrelevant distractor on the detection of rare target stimuli embedded in a rapid, serial visual presentation (RSVP) stream. We found that distractors that occur during the presentation of a target interfere behaviorally with detection of those targets, reflected by reduced detection rates, and that these missed targets show a reduced amplitude of the long-latency, detection-related P3 component. We also found that distractors elicited a right-lateralized frontal negativity beginning at 100 msec, whose amplitude negatively correlated across participants with their distraction-related behavioral impairment. Finally, we also quantified the instantaneous amplitude of the steady-state visual evoked potentials elicited by the RSVP stream and found that the occurrence of a distractor resulted in a transient amplitude decrement of the steady-state visual evoked potential, presumably reflecting the pull of attention away from the RSVP stream when distracting stimuli occur in the environment.
A novel visual saliency detection method for infrared video sequences
NASA Astrophysics Data System (ADS)
Wang, Xin; Zhang, Yuzhen; Ning, Chen
2017-12-01
Infrared video applications such as target detection and recognition, moving target tracking, and so forth can benefit a lot from visual saliency detection, which is essentially a method to automatically localize the "important" content in videos. In this paper, a novel visual saliency detection method for infrared video sequences is proposed. Specifically, for infrared video saliency detection, both the spatial saliency and temporal saliency are considered. For spatial saliency, we adopt a mutual consistency-guided spatial cues combination-based method to capture the regions with obvious luminance contrast and contour features. For temporal saliency, a multi-frame symmetric difference approach is proposed to discriminate salient moving regions of interest from background motions. Then, the spatial saliency and temporal saliency are combined to compute the spatiotemporal saliency using an adaptive fusion strategy. Besides, to highlight the spatiotemporal salient regions uniformly, a multi-scale fusion approach is embedded into the spatiotemporal saliency model. Finally, a Gestalt theory-inspired optimization algorithm is designed to further improve the reliability of the final saliency map. Experimental results demonstrate that our method outperforms many state-of-the-art saliency detection approaches for infrared videos under various backgrounds.
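The temporal-saliency component above relies on a multi-frame symmetric difference. A common three-frame form (an assumption here; the paper may use more frames or extra post-processing) marks a pixel as salient only if it differs from both its past and future frames, which suppresses static background and regions merely uncovered by motion:

```python
import numpy as np

def symmetric_difference_saliency(prev_frame, cur_frame, next_frame):
    # Three-frame symmetric differencing: take the pixel-wise minimum
    # of |current - previous| and |current - next|. A pixel scores high
    # only when it differs from BOTH neighbors in time, so background
    # uncovered by a moving object (which matches one neighbor) is
    # suppressed along with static background.
    d_past = np.abs(cur_frame.astype(float) - prev_frame.astype(float))
    d_future = np.abs(cur_frame.astype(float) - next_frame.astype(float))
    return np.minimum(d_past, d_future)
```

Taking the minimum rather than the sum of the two differences is what distinguishes symmetric differencing from plain frame differencing, and is the usual reason it produces cleaner motion masks.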
Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.
Põder, Endel
2014-11-06
Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data.
Detecting and visualizing weak signatures in hyperspectral data
NASA Astrophysics Data System (ADS)
MacPherson, Duncan James
This thesis evaluates existing techniques for detecting weak spectral signatures from remotely sensed hyperspectral data. Algorithms are presented that successfully detect hard-to-find 'mystery' signatures in unknown cluttered backgrounds. The term 'mystery' is used to describe a scenario where the spectral target and background endmembers are unknown. Sub-Pixel analysis and background suppression are used to find deeply embedded signatures which can be less than 10% of the total signal strength. Existing 'mystery target' detection algorithms are derived and compared. Several techniques are shown to be superior both visually and quantitatively. Detection performance is evaluated using confidence metrics that are developed. A multiple algorithm approach is shown to improve detection confidence significantly. Although the research focuses on remote sensing applications, the algorithms presented can be applied to a wide variety of diverse fields such as medicine, law enforcement, manufacturing, earth science, food production, and astrophysics. The algorithms are shown to be general and can be applied to both the reflective and emissive parts of the electromagnetic spectrum. The application scope is a broad one and the final results open new opportunities for many specific applications including: land mine detection, pollution and hazardous waste detection, crop abundance calculations, volcanic activity monitoring, detecting diseases in food, automobile or airplane target recognition, cancer detection, mining operations, extracting galactic gas emissions, etc.
Shifting attention in viewer- and object-based reference frames after unilateral brain injury.
List, Alexandra; Landau, Ayelet N; Brooks, Joseph L; Flevaris, Anastasia V; Fortenbaugh, Francesca C; Esterman, Michael; Van Vleet, Thomas M; Albrecht, Alice R; Alvarez, Bryan D; Robertson, Lynn C; Schendel, Krista
2011-06-01
The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display comprised of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection. Published by Elsevier Ltd.
Mahr, Angela; Wentura, Dirk
2014-02-01
Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
Interaction between numbers and size during visual search.
Krause, Florian; Bekkering, Harold; Pratt, Jay; Lindemann, Oliver
2017-05-01
The current study investigates an interaction between numbers and physical size (i.e. size congruity) in visual search. In three experiments, participants had to detect a physically large (or small) target item among physically small (or large) distractors in a search task comprising single-digit numbers. The relative numerical size of the digits was varied, such that the target item was either among the numerically large or small numbers in the search display and the relation between numerical and physical size was either congruent or incongruent. Perceptual differences of the stimuli were controlled by a condition in which participants had to search for a differently coloured target item with the same physical size and by the usage of LCD-style numbers that were matched in visual similarity by shape transformations. The results of all three experiments consistently revealed that detecting a physically large target item is significantly faster when the numerical size of the target item is large as well (congruent), compared to when it is small (incongruent). This novel finding of a size congruity effect in visual search demonstrates an interaction between numerical and physical size in an experimental setting beyond typically used binary comparison tasks, and provides important new evidence for the notion of shared cognitive codes for numbers and sensorimotor magnitudes. Theoretical consequences for recent models on attention, magnitude representation and their interactions are discussed.
NASA Astrophysics Data System (ADS)
Haigang, Sui; Zhina, Song
2016-06-01
Reliable ship detection in optical satellite images has wide application in both military and civil fields. However, the problem is difficult in complex backgrounds such as waves, clouds, and small islands. To address these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. The model first selects salient candidate regions across large-scale images using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Unlike traditional approaches, the proposed algorithm is fast and focuses directly on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detail signatures using LBP features, consistent with the sparseness of ship distribution in images. These features are then used to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, some false alarms remain, such as waves and small ribbon clouds, so simple shape and texture analysis is adopted to distinguish ships from non-ships in suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
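The texture-signature half of such a pipeline can be sketched with a plain local binary pattern (LBP) histogram, the per-chip feature vector that would then feed a classifier such as an SVM. This is a minimal illustration with assumed parameters (8 neighbours, 256-bin histogram), not the paper's CVLBP model.

```python
import numpy as np

def lbp_histogram(chip):
    """Normalized histogram of 8-neighbour local binary pattern codes
    for one grayscale image chip (e.g. a candidate ship region)."""
    center = chip[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        h, w = chip.shape
        neighbour = chip[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # set this bit wherever the neighbour is at least the center value
        code |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

flat = lbp_histogram(np.full((8, 8), 0.5))              # uniform "sea" chip
edge = lbp_histogram(np.tile(np.arange(8.0), (8, 1)))   # gradient chip
```

A textureless chip collapses onto a single LBP code, while structured chips spread mass across codes, which is the property the SVM stage would exploit to separate ship chips from open water.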
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called the target silhouette, to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean map visual theory is used to combine a pair of SAR and IR images into a target-enhanced map, and basic belief assignment is then used to transform this map into a belief map. The detection results of the two sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map at the decision level to exclude false alarms. The proposed algorithm is evaluated on a SAR and IR synthetic database generated by the SE-WORKBENCH simulator and compared with conventional algorithms. The proposed fusion scheme achieves a higher detection rate and a lower false-alarm rate than the conventional algorithms.
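Basic belief assignments from two sensors are conventionally fused with Dempster's rule of combination. The sketch below uses a two-hypothesis frame {target, clutter}, with the full set expressing ignorance, and hypothetical SAR/IR mass values; it illustrates only the generic combination step, not the paper's target-silhouette pipeline.

```python
def combine_bba(m1, m2):
    """Dempster's rule of combination for two basic belief assignments.

    m1, m2: dicts mapping frozenset hypotheses -> mass; masses over
    each frame must sum to 1. Conflicting mass is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # contradictory evidence
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

T, C = frozenset({'target'}), frozenset({'clutter'})
theta = T | C                        # total ignorance
sar = {T: 0.6, C: 0.1, theta: 0.3}   # hypothetical SAR detector masses
ir = {T: 0.5, C: 0.2, theta: 0.3}    # hypothetical IR detector masses
fused = combine_bba(sar, ir)
print(round(fused[T], 3))
```

When both sensors lean toward "target", the fused mass on the target hypothesis exceeds either sensor's alone, which is the effect the decision-level fusion exploits to reject single-sensor false alarms.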
Detection of emotional faces: salient physical features guide effective visual search.
Calvo, Manuel G; Nummenmaa, Lauri
2008-08-01
In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
The role of the right posterior parietal cortex in temporal order judgment.
Woo, Sung-Ho; Kim, Ki-Hyun; Lee, Kyoung-Min
2009-03-01
Perceived order of two consecutive stimuli may not correspond to the order of their physical onsets. Such a disagreement presumably results from a difference in the speed of stimulus processing toward central decision mechanisms. Since previous evidence suggests that the right posterior parietal cortex (PPC) plays a role in modulating the processing speed of a visual target, we applied single-pulse TMS over the region in 14 normal subjects, while they judged the temporal order of two consecutive visual stimuli. Stimulus-onset-asynchrony (SOA) randomly varied between -100 and 100 ms in 20-ms steps (with a positive SOA when a target appeared on the right hemi-field before the other on the left), and a point of subjective simultaneity was measured for individual subjects. TMS stimulation was time-locked at 50, 100, 150, and 200 ms after the onset of the first stimulus, and results in trials with TMS on right PPC were compared with those in trials without TMS. TMS over the right PPC delayed the detection of a visual target in the contralateral, i.e., left hemi-field by 24 (+/-7 SE) ms and 16 (+/-4 SE) ms, when the stimulation was given at 50 and 100 ms after the first target onset. In contrast, TMS on the left PPC was not effective. These results show that the right PPC is important in a timely detection of a target appearing on the left visual field, especially in competition with another target simultaneously appearing in the opposite field.
ERIC Educational Resources Information Center
Buchholz, J.; Davies, A.A.
2005-01-01
Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age and IQ matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was…
Pop-out in visual search of moving targets in the archer fish.
Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen
2015-03-10
Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.
Sequential sensory and decision processing in posterior parietal cortex
Ibos, Guilhem; Freedman, David J
2017-01-01
Decisions about the behavioral significance of sensory stimuli often require comparing sensory inference of what we are looking at to internal models of what we are looking for. Here, we test how neuronal selectivity for visual features is transformed into decision-related signals in posterior parietal cortex (area LIP). Monkeys performed a visual matching task that required them to detect target stimuli composed of conjunctions of color and motion-direction. Neuronal recordings from area LIP revealed two main findings. First, the sequential processing of visual features and the selection of target-stimuli suggest that LIP is involved in transforming sensory information into decision-related signals. Second, the patterns of color and motion selectivity and their impact on decision-related encoding suggest that LIP plays a role in detecting target stimuli by comparing bottom-up sensory inputs (what the monkeys were looking at) and top-down cognitive encoding inputs (what the monkeys were looking for). DOI: http://dx.doi.org/10.7554/eLife.23743.001 PMID:28418332
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known whether the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
A Model for the Detection of Moving Targets in Visual Clutter Inspired by Insect Physiology
2008-07-01
An Automated Directed Spectral Search Methodology for Small Target Detection
NASA Astrophysics Data System (ADS)
Grossman, Stanley I.
Much of the current effort in remote sensing tackles macro-level problems such as determining the extent of wheat in a field, the general health of vegetation, or the extent of mineral deposits in an area. However, many remaining remote sensing challenges, such as border protection, drug smuggling, treaty verification, and the war on terror, involve targets that are very small in nature: a vehicle or even a person. While in typical macro-level problems the material of interest is known to be in the scene, in small target detection problems it is usually not known whether the desired small target even exists in the scene, never mind finding it in abundance. The ability to find specific small targets, such as vehicles, typifies this problem. Complicating matters, the growing number of available sensors generates more imagery than analysts can visually review. This work presents the important factors influencing spectral exploitation using multispectral data and suggests a different approach to small target detection. The methodology of directed search is presented, including the use of scene-modeled spectral libraries, various search algorithms, and traditional statistical and ROC curve analysis. The work proposes a new metric to calibrate analysis, labeled the analytic sweet spot, as well as an estimation method for identifying the sweet-spot threshold for an image. It also introduces a new visualization aid for highlighting the target in its entirety, called nearest neighbor inflation (NNI). Together, these additions to the target detection arena allow for the construction of a fully automated target detection scheme. This dissertation next details experiments to support the hypothesis that the optimum detection threshold is the analytic sweet spot and that the estimation method adequately predicts it.
Experimental results and analysis are presented for the proposed directed-search techniques of spectral-image-based small target detection. The results offer evidence of the functionality of the NNI visualization and also show that the increased spectral dimensionality of 8-band Worldview-2 datasets provides noteworthy improvement over traditional 4-band multispectral datasets. The final experiment presents results from a prototype fully automated target detection scheme in support of the overarching premise. This work establishes the analytic sweet spot as the optimum threshold, defined as the point where the error-rate curves (false detections vs. missed detections) cross; at this point errors are minimized while the detection rate is maximized. It then demonstrates that taking the first moment of the histogram of calculated target detection values, from a detection search with the test threshold set arbitrarily high, estimates the analytic sweet spot for that image. It also demonstrates that directed-search techniques, when utilized with appropriate scene-specific modeled signatures and atmospheric compensations, perform at least as well as in-scene search techniques 88% of the time and grossly under-perform only 11% of the time, whereas in-scene search performs as well or better only 50% of the time. It further demonstrates the clear advantage that increased multispectral dimensionality brings to detection searches, improving performance in 50% of cases while performing at least as well 72% of the time. Lastly, it presents evidence that a fully automated prototype performs as anticipated, laying the groundwork for further research into fully automated processes for small target detection.
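The crossing-point definition of the analytic sweet spot can be made concrete: sweep candidate thresholds and locate where the missed-detection rate (targets scoring below threshold) equals the false-detection rate (background scoring above it). The sketch below uses simulated Gaussian score distributions as assumed stand-in data, not the dissertation's imagery or its histogram-moment estimator.

```python
import numpy as np

def sweet_spot_threshold(target_scores, background_scores):
    """Threshold where the miss-rate and false-detection-rate curves
    cross, i.e. where the two error rates are (nearly) equal."""
    lo = min(target_scores.min(), background_scores.min())
    hi = max(target_scores.max(), background_scores.max())
    candidates = np.linspace(lo, hi, 1001)
    miss = np.array([(target_scores < t).mean() for t in candidates])
    fa = np.array([(background_scores >= t).mean() for t in candidates])
    return candidates[np.argmin(np.abs(fa - miss))]

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, 5000)  # clutter detector scores
targets = rng.normal(2.0, 1.0, 5000)     # target detector scores
threshold = sweet_spot_threshold(targets, background)
```

For two equal-variance Gaussians centered at 0 and 2, the curves cross near 1.0, the midpoint; with real detection-value histograms, the same sweep yields the image-specific sweet spot.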
Examining perceptual and conceptual set biases in multiple-target visual search.
Biggs, Adam T; Adamo, Stephen H; Dowd, Emma Wu; Mitroff, Stephen R
2015-04-01
Visual search is a common practice conducted countless times every day, and one important aspect of visual search is that multiple targets can appear in a single search array. For example, an X-ray image of airport luggage could contain both a water bottle and a gun. Searchers are more likely to miss additional targets after locating a first target in multiple-target searches, which presents a potential problem: If airport security officers were to find a water bottle, would they then be more likely to miss a gun? One hypothetical cause of multiple-target search errors is that searchers become biased to detect additional targets that are similar to a found target, and therefore become less likely to find additional targets that are dissimilar to the first target. This particular hypothesis has received theoretical, but little empirical, support. In the present study, we tested the bounds of this idea by utilizing "big data" obtained from the mobile application Airport Scanner. Multiple-target search errors were substantially reduced when the two targets were identical, suggesting that the first-found target did indeed create biases during subsequent search. Further analyses delineated the nature of the biases, revealing both a perceptual set bias (i.e., a bias to find additional targets with features similar to those of the first-found target) and a conceptual set bias (i.e., a bias to find additional targets with a conceptual relationship to the first-found target). These biases are discussed in terms of the implications for visual-search theories and applications for professional visual searchers.
Contextual cueing of pop-out visual search: when context guides the deployment of attention.
Geyer, Thomas; Zehetleitner, Michael; Müller, Hermann J
2010-05-01
Visual context information can guide attention in demanding (i.e., inefficient) search tasks. When participants are repeatedly presented with identically arranged ('repeated') displays, reaction times are faster relative to newly composed ('non-repeated') displays. The present article examines whether this 'contextual cueing' effect also operates in simple (i.e., efficient) search tasks and, if so, whether it influences target, rather than response, selection. Singleton-feature targets were detected faster when the search items were presented in repeated, rather than non-repeated, arrangements. Importantly, repeated, relative to novel, displays also led to an increase in signal detection accuracy. Thus, contextual cueing can expedite the selection of pop-out targets, most likely by enhancing feature-contrast signals at the overall-salience computation stage.
McAnally, Ken I.; Morris, Adam P.; Best, Christopher
2017-01-01
Metacognitive monitoring and control of situation awareness (SA) are important for a range of safety-critical roles (e.g., air traffic control, military command and control). We examined the factors affecting these processes using a visual change detection task that included representative tactical displays. SA was assessed by asking novice observers to detect changes to a tactical display. Metacognitive monitoring was assessed by asking observers to estimate the probability that they would correctly detect a change, either after study of the display and before the change (judgement of learning; JOL) or after the change and detection response (judgement of performance; JOP). In Experiment 1, observers failed to detect some changes to the display, indicating imperfect SA, but JOPs were reasonably well calibrated to objective performance. Experiment 2 examined JOLs and JOPs in two task contexts: with study-time limits imposed by the task or with self-pacing to meet specified performance targets. JOPs were well calibrated in both conditions as were JOLs for high performance targets. In summary, observers had limited SA, but good insight about their performance and learning for high performance targets and allocated study time appropriately. PMID:28915244
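Calibration of probability judgements such as JOPs and JOLs against objective performance is commonly summarized by the Brier score and by the mean judgement-minus-accuracy bias. A minimal sketch with illustrative numbers, not the study's data:

```python
def calibration_stats(judgements, outcomes):
    """Simple calibration summary for probability judgements.

    judgements: predicted detection probabilities in [0, 1]
    outcomes:   1 if the change was detected, else 0
    Returns (brier, bias); lower Brier is better, and bias > 0
    indicates overconfidence (judgements exceed actual accuracy).
    """
    n = len(judgements)
    brier = sum((j - o) ** 2 for j, o in zip(judgements, outcomes)) / n
    bias = sum(judgements) / n - sum(outcomes) / n
    return brier, bias

# Hypothetical observer: four trials, three changes detected
brier, bias = calibration_stats([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1])
```

Here the mean judgement (0.75) matches the hit rate (0.75), so the bias is zero; the nonzero Brier score still reflects trial-level prediction error, which is why both numbers are worth reporting.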
ERIC Educational Resources Information Center
Coffman, B. A.; Trumbo, M. C.; Flores, R. A.; Garcia, C. M.; van der Merwe, A. J.; Wassermann, E. M.; Weisend, M. P.; Clark, V. P.
2012-01-01
We have previously found that transcranial direct current stimulation (tDCS) over right inferior frontal cortex (RIFC) enhances performance during learning of a difficult visual target detection task (Clark et al., 2012). In order to examine the cognitive mechanisms of tDCS that lead to enhanced performance, here we analyzed its differential…
Visual encoding and fixation target selection in free viewing: presaccadic brain potentials
Nikolaev, Andrey R.; Jurica, Peter; Nakatani, Chie; Plomp, Gijs; van Leeuwen, Cees
2013-01-01
In scrutinizing a scene, the eyes alternate between fixations and saccades. During a fixation, two component processes can be distinguished: visual encoding and selection of the next fixation target. We aimed to distinguish the neural correlates of these processes in the electrical brain activity prior to a saccade onset. Participants viewed color photographs of natural scenes, in preparation for a change detection task. Then, for each participant and each scene we computed an image heat map, with temperature representing the duration and density of fixations. The temperature difference between the start and end points of saccades was taken as a measure of the expected task-relevance of the information concentrated in specific regions of a scene. Visual encoding was evaluated according to whether subsequent change was correctly detected. Saccades with larger temperature difference were more likely to be followed by correct detection than ones with smaller temperature differences. The amplitude of presaccadic activity over anterior brain areas was larger for correct detection than for detection failure. This difference was observed for short “scrutinizing” but not for long “explorative” saccades, suggesting that presaccadic activity reflects top-down saccade guidance. Thus, successful encoding requires local scanning of scene regions which are expected to be task-relevant. Next, we evaluated fixation target selection. Saccades “moving up” in temperature were preceded by presaccadic activity of higher amplitude than those “moving down”. This finding suggests that presaccadic activity reflects attention deployed to the following fixation location. Our findings illustrate how presaccadic activity can elucidate concurrent brain processes related to the immediate goal of planning the next saccade and the larger-scale goal of constructing a robust representation of the visual scene. PMID:23818877
Figure ground discrimination in age-related macular degeneration.
Tran, Thi Ha Chau; Guyader, Nathalie; Guerin, Anne; Despretz, Pascal; Boucart, Muriel
2011-03-01
To investigate impairment in discriminating a figure from its background and to study its relation to visual acuity and lesion size in patients with neovascular age-related macular degeneration (AMD). Seventeen patients with neovascular AMD and visual acuity <20/50 were included; seventeen age-matched healthy subjects participated as controls. Complete ophthalmologic examination was performed on all participants. The stimuli were photographs of scenes containing animals (targets) or other objects (distractors), displayed on a computer monitor. Performance was compared in four background conditions: the target in the natural scene; the target isolated on a white background; the target separated by a white space from a structured scene; and the target separated by a white space from a nonstructured, shapeless background. Target discriminability (d') was recorded. Performance was lower for patients than for controls. For the patients, it was easier to detect the target when it was separated from its background (isolated, structured, and nonstructured conditions) than when it was located in a scene. Patients' performance improved with increasing exposure time but remained lower than that of controls. For patients, correlations were found between visual acuity, lesion size, and sensitivity. Figure/ground segregation is impaired in patients with AMD; a white space surrounding an object is sufficient to improve the object's detection and to facilitate figure/ground segregation. These results may have practical applications to the rehabilitation of the environment of patients with AMD.
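The discriminability measure d' used here comes from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse normal CDF. A stdlib-only sketch follows; the log-linear correction that keeps rates off 0 and 1 is one common convention, assumed here rather than taken from the paper.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from raw trial counts.

    Applies a log-linear correction (add 0.5 to counts, 1 to totals)
    so extreme rates of 0 or 1 stay finite under the z-transform.
    """
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical session: 45/50 targets detected, 10/50 false alarms
print(round(d_prime(45, 5, 10, 40), 2))
```

Because d' separates sensitivity from response bias, it lets performance be compared across patients and controls even if the two groups adopt different response criteria.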
Incidental Auditory Category Learning
Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.
2015-01-01
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
Stochastic resonance in attention control
NASA Astrophysics Data System (ADS)
Kitajo, K.; Yamanaka, K.; Ward, L. M.; Yamamoto, Y.
2006-12-01
We investigated the beneficial role of noise in a human higher brain function, namely visual attention control. We asked subjects to detect a weak gray-level target inside a marker box either in the left or the right visual field. Signal detection performance was optimized by presenting a low level of randomly flickering gray-level noise between and outside the two possible target locations. Further, we found that an increase in eye movement (saccade) rate helped to compensate for the usual deterioration in detection performance at higher noise levels. To our knowledge, this is the first experimental evidence that noise can optimize a higher brain function which involves distinct brain regions above the level of primary sensory systems -- switching behavior between multi-stable attention states -- via the mechanism of stochastic resonance.
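The stochastic-resonance effect described here can be reproduced in miniature: for a signal that never crosses a fixed detection threshold on its own, discriminability (hit rate minus false-alarm rate) peaks at an intermediate noise level and falls off when noise is either too weak or too strong. The parameters below are assumptions for a toy simulation, not the study's stimulus values.

```python
import numpy as np

def discriminability(signal, noise_sd, threshold=1.0,
                     trials=50000, seed=2):
    """Hit rate minus false-alarm rate for a fixed-threshold detector
    observing a subthreshold signal embedded in Gaussian noise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_sd, trials)
    hits = np.mean(signal + noise > threshold)        # signal trials
    false_alarms = np.mean(noise > threshold)         # noise-only trials
    return hits - false_alarms

# Signal of 0.7 against a threshold of 1.0: undetectable without noise
scores = {sd: discriminability(0.7, sd) for sd in (0.05, 0.4, 3.0)}
```

With near-zero noise the signal never reaches threshold; with heavy noise hits and false alarms converge; a moderate noise level lifts the signal over threshold while rarely triggering alone, producing the characteristic inverted-U of stochastic resonance.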
Recent advances in targeted endoscopic imaging: Early detection of gastrointestinal neoplasms
Kwon, Yong-Soo; Cho, Young-Seok; Yoon, Tae-Jong; Kim, Ho-Shik; Choi, Myung-Gyu
2012-01-01
Molecular imaging has emerged as a new discipline in gastrointestinal endoscopy. This technology encompasses modalities that can visualize disease-specific morphological or functional tissue changes based on the molecular signature of individual cells. Molecular imaging has several advantages, including minimal damage to tissues, repetitive visualization, and utility for conducting quantitative analyses. Advancements in basic science coupled with endoscopy have made early detection of gastrointestinal cancer possible. Molecular imaging during gastrointestinal endoscopy requires the development of safe biomarkers and exogenous probes to detect molecular changes in cells with high specificity and a high signal-to-background ratio. Additionally, a high-resolution endoscope with an accurate wide-field viewing capability must be developed. Targeted endoscopic imaging is expected to improve early diagnosis and individual therapy of gastrointestinal cancer. PMID:22442742
Krummenacher, Joseph; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas
2009-03-01
Two experiments compared reaction times (RTs) in visual search for singleton feature targets defined, variably across trials, in either the color or the orientation dimension. Experiment 1 required observers to simply discern target presence versus absence (simple-detection task); Experiment 2 required them to respond to a detection-irrelevant form attribute of the target (compound-search task). Experiment 1 revealed a marked dimensional intertrial effect of 34 ms for a target defined in a changed versus a repeated dimension, and an intertrial target distance effect, with a 4-ms increase in RTs (per unit of distance) as the separation of the current from the preceding target increased. Conversely, in Experiment 2, the dimension change effect was markedly reduced (11 ms), while the intertrial target distance effect was markedly increased (11 ms per unit of distance). The results suggest that dimension change/repetition effects are modulated by the amount of attentional focusing required by the task, with space-based attention altering the integration of dimension-specific feature contrast signals at the level of the overall-saliency map.
NASA Astrophysics Data System (ADS)
Tang, Feng; Pang, Dai-Wen; Chen, Zhi; Shao, Jian-Bo; Xiong, Ling-Hong; Xiang, Yan-Ping; Xiong, Yan; Wu, Kai; Ai, Hong-Wu; Zhang, Hui; Zheng, Xiao-Li; Lv, Jing-Rui; Liu, Wei-Yong; Hu, Hong-Bing; Mei, Hong; Zhang, Zhen; Sun, Hong; Xiang, Yun; Sun, Zi-Yong
2016-02-01
It is a great challenge in nanotechnology for fluorescent nanobioprobes to be applied to visually detect and directly isolate pathogens in situ. A novel and visual immunosensor technique for efficient detection and isolation of Salmonella was established here by applying fluorescent nanobioprobes on a specially-designed cellulose-based swab (a solid-phase enrichment system). The selective and chromogenic medium used on this swab can achieve the ultrasensitive amplification of target bacteria and form chromogenic colonies in situ based on a simple biochemical reaction. More importantly, because this swab can serve as an attachment site for the targeted pathogens to immobilize and immunologically capture nanobioprobes, our mAb-conjugated QD bioprobes were successfully applied on the solid-phase enrichment system to capture the fluorescence of targeted colonies under a designed excitation light instrument based on blue light-emitting diodes combined with stereomicroscopy or laser scanning confocal microscopy. Compared with the traditional methods, which take 4-7 days to isolate Salmonella from a bacterial mixture, this method took only 2 days, and the process of initial screening and preliminary diagnosis can be completed in only one and a half days. Furthermore, the limit of detection can reach as low as 10^1 cells per mL of Salmonella against a background of 10^5 cells per mL of non-Salmonella (Escherichia coli, Proteus mirabilis or Citrobacter freundii, respectively) in experimental samples, and even in human anal ones. This visual and efficient immunosensor technique may prove to be a favorable alternative for screening and isolating Salmonella in large numbers of samples related to public health surveillance.
NASA Technical Reports Server (NTRS)
Eckstein, M. P.; Thomas, J. P.; Palmer, J.; Shimozaki, S. S.
2000-01-01
Recently, quantitative models based on signal detection theory have been successfully applied to the prediction of human accuracy in visual search for a target that differs from distractors along a single attribute (feature search). The present paper extends these models for visual search accuracy to multidimensional search displays in which the target differs from the distractors along more than one feature dimension (conjunction, disjunction, and triple conjunction displays). The model assumes that each element in the display elicits a noisy representation for each of the relevant feature dimensions. The observer combines the representations across feature dimensions to obtain a single decision variable, and the stimulus with the maximum value determines the response. The model accurately predicts human experimental data on visual search accuracy in conjunctions and disjunctions of contrast and orientation. The model accounts for performance degradation without resorting to a limited-capacity spatially localized and temporally serial mechanism by which to bind information across feature dimensions.
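The maximum-of-outputs decision rule the model describes can be sketched as a short simulation (an illustrative reimplementation; the d' values and set sizes below are assumptions, not the paper's stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)

def search_accuracy(d_prime, n_elements, n_trials=20000):
    """Max-rule signal-detection model of visual search accuracy.

    Each display element elicits a noisy internal response (unit-variance
    Gaussian). Distractors have mean 0; the target has mean d_prime.
    The observer selects the element with the maximum response; a trial
    is correct if that element is the target.
    """
    distractors = rng.normal(0.0, 1.0, size=(n_trials, n_elements - 1))
    target = rng.normal(d_prime, 1.0, size=(n_trials, 1))
    responses = np.hstack([target, distractors])
    return np.mean(np.argmax(responses, axis=1) == 0)

# Accuracy falls as set size grows, with no serial limited-capacity stage:
for n in (2, 4, 8, 16):
    print(n, round(search_accuracy(2.0, n), 3))
```

The set-size cost emerges purely from noise at the decision stage: more distractors mean more chances that one of them produces the maximum response.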
Short-term saccadic adaptation in the macaque monkey: a binocular mechanism
Schultz, K. P.
2013-01-01
Saccadic eye movements are rapid transfers of gaze between objects of interest. Their duration is too short for the visual system to be able to follow their progress in time. Adaptive mechanisms constantly recalibrate the saccadic responses by detecting how close the landings are to the selected targets. The double-step saccadic paradigm is a common method to simulate alterations in saccadic gain. While the subject is responding to a first target shift, a second shift is introduced in the middle of this movement, masking it from visual detection. The error in landing introduced by the second shift is interpreted by the brain as an error in the programming of the initial response, with gradual gain changes aimed at compensating for the apparent sensorimotor mismatch. A second shift applied dichoptically to only one eye introduces disconjugate landing errors between the two eyes. A monocular adaptive system would independently modify only the gain of the eye exposed to the second shift in order to reestablish binocular alignment. Our results support a binocular mechanism. A version-based saccadic adaptive process detects postsaccadic version errors and generates compensatory conjugate gain alterations. A vergence-based saccadic adaptive process detects postsaccadic disparity errors and generates corrective nonvisual disparity signals that are sent to the vergence system to regain binocularity. This results in striking dynamical similarities between visually driven combined saccade-vergence gaze transfers, where the disparity is given by the visual targets, and the double-step adaptive disconjugate responses, where an adaptive disparity signal is generated internally by the saccadic system. PMID:23076111
Spatial interactions reveal inhibitory cortical networks in human amblyopia.
Wong, Erwin H; Levi, Dennis M; McGraw, Paul V
2005-10-01
Humans with amblyopia have a well-documented loss of sensitivity for first-order, or luminance defined, visual information. Recent studies show that they also display a specific loss of sensitivity for second-order, or contrast defined, visual information; a type of image structure encoded by neurons found predominantly in visual area A18/V2. In the present study, we investigate whether amblyopia disrupts the normal architecture of spatial interactions in V2 by determining the contrast detection threshold of a second-order target in the presence of second-order flanking stimuli. Adjacent flanks facilitated second-order detectability in normal observers. However, in marked contrast, they suppressed detection in each eye of the majority of amblyopic observers. Furthermore, strabismic observers with no loss of visual acuity show a similar pattern of detection suppression. We speculate that amblyopia results in predominantly inhibitory cortical interactions between second-order neurons.
Making perceptual learning practical to improve visual functions.
Polat, Uri
2009-10-01
Task-specific improvement in performance after training is well established. The finding that learning is stimulus-specific and does not transfer well between different stimuli, between stimulus locations in the visual field, or between the two eyes has been used to support the notion that neurons or assemblies of neurons are modified at the earliest stage of cortical processing. However, a debate regarding the proposed mechanism underlying perceptual learning is an ongoing issue. Nevertheless, generalization of a trained task to other functions is an important key, both for understanding the neural mechanisms and for the practical value of the training. This manuscript describes a structured perceptual learning method previously used for amblyopia and myopia, together with a novel technique and results from its application to presbyopia. In general, subjects were trained for contrast detection of Gabor targets under lateral masking conditions. Training improved contrast sensitivity and diminished lateral suppression where it existed (amblyopia). The improvement transferred to unrelated functions such as visual acuity. The new results for presbyopia show substantial improvement of spatial and temporal contrast sensitivity, leading to improved processing speed of target detection as well as reaction time. Consequently, the subjects benefited by being able to eliminate the need for reading glasses. Thus, we show that this transfer of functions indicates that the specificity of improvement in the trained task can be generalized by repetitive practice of target detection, covering a sufficient range of spatial frequencies and orientations, leading to improvement in unrelated visual functions. Perceptual learning can therefore be a practical method to improve visual functions in people with impaired or blurred vision.
The Relationship Between Online Visual Representation of a Scene and Long-Term Scene Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew
2005-01-01
In 3 experiments the author investigated the relationship between the online visual representation of natural scenes and long-term visual memory. In a change detection task, a target object either changed or remained the same from an initial image of a natural scene to a test image. Two types of changes were possible: rotation in depth, or…
Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis
ERIC Educational Resources Information Center
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.
2010-01-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…
The Visual Hemifield Asymmetry in the Spatial Blink during Singleton Search and Feature Search
ERIC Educational Resources Information Center
Burnham, Bryan R.; Rozell, Cassandra A.; Kasper, Alex; Bianco, Nicole E.; Delliturri, Antony
2011-01-01
The present study examined a visual field asymmetry in the contingent capture of attention that was previously observed by Du and Abrams (2010). In our first experiment, color singleton distractors that matched the color of a to-be-detected target produced a stronger capture of attention when they appeared in the left visual hemifield than in the…
Impact of age-related macular degeneration on object searches in realistic panoramic scenes.
Thibaut, Miguel; Tran, Thi-Ha-Chau; Szaffarczyk, Sebastien; Boucart, Muriel
2018-05-01
This study investigated whether realistic immersive conditions with dynamic indoor scenes presented on a large, hemispheric panoramic screen covering 180° of the visual field improved the visual search abilities of participants with age-related macular degeneration (AMD). Twenty-one participants with AMD, 16 age-matched controls and 16 young observers were included. Realistic indoor scenes were presented on a panoramic five metre diameter screen. Twelve different objects were used as targets. The participants were asked to search for a target object, shown on paper before each trial, within a room composed of various objects. A joystick was used for navigation within the scene views. A target object was present in 24 trials and absent in 24 trials. The percentage of correct detection of the target, the percentage of false alarms (that is, the detection of the target when it was absent), the number of scene views explored and the search time were measured. The search time was slower for participants with AMD than for the age-matched controls, who in turn were slower than the young participants. The participants with AMD were able to accomplish the task with a performance of 75 per cent correct detections. This was slightly lower than older controls (79.2 per cent) while young controls were at ceiling (91.7 per cent). Errors were mainly due to false alarms resulting from confusion between the target object and another object present in the scene in the target-absent trials. The outcomes of the present study indicate that, under realistic conditions, although slower than age-matched, normally sighted controls, participants with AMD were able to accomplish visual searches of objects with high accuracy. © 2017 Optometry Australia.
Real-time classification of vehicles by type within infrared imagery
NASA Astrophysics Data System (ADS)
Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.
2016-10-01
Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
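The Kalman-filter tracking stage of such a pipeline can be sketched with a constant-velocity model over 3D position fixes (a minimal sketch; the time step and noise parameters below are assumed illustrative values, not the paper's settings):

```python
import numpy as np

class KalmanTracker3D:
    """Constant-velocity Kalman filter over photogrammetric 3D position fixes.

    State vector: [x, y, z, vx, vy, vz]. The parameters dt, q, r are
    illustrative, not taken from the paper.
    """
    def __init__(self, dt=0.1, q=1e-2, r=0.5):
        self.x = np.zeros(6)                     # state estimate
        self.P = np.eye(6) * 10.0                # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = np.eye(3) * dt          # position += velocity * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = np.eye(6) * q                   # process noise
        self.R = np.eye(3) * r                   # measurement noise

    def step(self, z):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with a measured 3D position z
        y = z - self.H @ self.x                  # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S) # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
        return self.x[:3]

# Track a vehicle moving at constant velocity through noisy position fixes.
rng = np.random.default_rng(1)
truth = np.array([0.0, 0.0, 0.0])
vel = np.array([5.0, 1.0, 0.0])
tracker = KalmanTracker3D()
for _ in range(100):
    truth = truth + vel * 0.1
    est = tracker.step(truth + rng.normal(0, 0.5, 3))
```

Because the filter's motion model matches the simulated trajectory, the estimate converges to within a fraction of the per-fix measurement noise.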
Seeing visual word forms: spatial summation, eccentricity and spatial configuration.
Kao, Chien-Hui; Chen, Chien-Chung
2012-06-01
We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a PSI adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data was well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
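The piecewise power-law pattern above (threshold falling with a slope of -1/2 or -1 on log-log coordinates up to a critical size) is measured by a straight-line fit in log-log space. The sketch below uses synthetic, illustrative thresholds, not the paper's measurements:

```python
import numpy as np

# Synthetic thresholds following a -1/2 power law up to a critical size
# and flat beyond it (the knee pattern described above; values are illustrative).
sizes = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
critical = 2.0
thresholds = np.where(sizes <= critical,
                      (sizes / critical) ** -0.5,
                      1.0)

# Estimate the log-log slope over the summation region only.
region = sizes <= critical
slope, intercept = np.polyfit(np.log10(sizes[region]),
                              np.log10(thresholds[region]), 1)
print(round(slope, 2))   # -0.5
```

Fitting separate slopes below and above the knee is how the critical size and the summation regime (-1 within a receptive field, -1/2 across fields) are distinguished in data of this kind.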
Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A
2012-08-01
Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.
ERIC Educational Resources Information Center
Becker, D. Vaughn; Anderson, Uriah S.; Mortensen, Chad R.; Neufeld, Samantha L.; Neel, Rebecca
2011-01-01
Is it easier to detect angry or happy facial expressions in crowds of faces? The present studies used several variations of the visual search task to assess whether people selectively attend to expressive faces. Contrary to widely cited studies (e.g., Ohman, Lundqvist, & Esteves, 2001) that suggest angry faces "pop out" of crowds, our review of…
Beesley, Tom; Hanafi, Gunadi; Vadillo, Miguel A; Shanks, David R; Livesey, Evan J
2018-05-01
Two experiments examined biases in selective attention during contextual cuing of visual search. When participants were instructed to search for a target of a particular color, overt attention (as measured by the location of fixations) was biased strongly toward distractors presented in that same color. However, when participants searched for targets that could be presented in 1 of 2 possible colors, overt attention was not biased between the different distractors, regardless of whether these distractors predicted the location of the target (repeating) or did not (randomly arranged). These data suggest that selective attention in visual search is guided only by the demands of the target detection task (the attentional set) and not by the predictive validity of the distractor elements. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
ERIC Educational Resources Information Center
Dalvit, Silvia; Eimer, Martin
2011-01-01
Previous research has shown that the detection of a visual target can be guided not only by the temporal integration of two percepts, but also by integrating a percept and an image held in working memory. Behavioral and event-related brain potential (ERP) measures were obtained in a target detection task that required temporal integration of 2…
Barua, Animesh; Yellapa, Aparna; Bahr, Janice M; Machado, Sergio A; Bitterman, Pincas; Basu, Sanjib; Sharma, Sameer; Abramowicz, Jacques S
2015-07-01
Tumor-associated neoangiogenesis (TAN) is an early event in ovarian cancer (OVCA) development. Increased expression of vascular endothelial growth factor receptor 2 (VEGFR2) by TAN vessels presents a potential target for early detection by ultrasound imaging. The goal of this study was to examine the suitability of VEGFR2-targeted ultrasound contrast agents in detecting spontaneous OVCA in laying hens. Effects of VEGFR2-targeted contrast agents in enhancing the intensity of ultrasound imaging from spontaneous ovarian tumors in hens were examined in a cross-sectional study. Enhancement in the intensity of ultrasound imaging was determined before and after injection of VEGFR2-targeted contrast agents. All ultrasound images were digitally stored and analyzed off-line. Following scanning, ovarian tissues were collected and processed for histology and detection of VEGFR2-expressing microvessels. Enhancement in visualization of ovarian morphology was detected by gray-scale imaging following injection of VEGFR2-targeted contrast agents. Compared with pre-contrast, contrast imaging enhanced the intensities of ultrasound imaging significantly (p < 0.0001) irrespective of the pathological status of ovaries. In contrast to normal hens, the intensity of ultrasound imaging was significantly (p < 0.0001) higher in hens with early stage OVCA and increased further in hens with late stage OVCA. Higher intensities of ultrasound imaging in hens with OVCA were positively correlated with increased (p < 0.0001) frequencies of VEGFR2-expressing microvessels. The results of this study suggest that VEGFR2-targeted contrast agents enhance the visualization of spontaneous ovarian tumors in hens at early and late stages of OVCA. The laying hen may be a suitable model to test new imaging agents and develop targeted therapeutics. © The Author(s) 2014.
Applying the log-normal distribution to target detection
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
1992-09-01
Holst and Pickard experimentally determined that MRT (minimum resolvable temperature) responses tend to follow a log-normal distribution. The log-normal distribution appeared reasonable because nearly all visual psychological data are plotted on a logarithmic scale. It has the additional advantage that it is bounded to positive values, an important consideration since probability of detection is often plotted in linear coordinates. A review of published data suggests that the log-normal distribution may have universal applicability. Specifically, the log-normal distribution obtained from MRT tests appears to fit the target transfer function and the probability of detection of rectangular targets.
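A log-normal probability-of-detection curve of the kind discussed can be written directly as a log-normal CDF (a sketch with illustrative parameters; the MRT data themselves are not reproduced here):

```python
import math

def p_detect(contrast, median, sigma=0.5):
    """Probability of detection modeled as a log-normal CDF.

    'median' is the stimulus level giving 50% detection; 'sigma' is the
    log-domain spread. Both are illustrative parameters, not MRT data.
    """
    if contrast <= 0:
        return 0.0                         # bounded to positive stimulus values
    z = (math.log(contrast) - math.log(median)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The curve is zero for non-positive levels and reaches 0.5 at the median:
print(round(p_detect(1.0, median=1.0), 2))   # 0.5
```

The hard zero below positive contrast is the boundedness property the abstract highlights as an advantage over a plain normal distribution.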
Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L.
2012-01-01
The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area’s role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area’s functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory. PMID:22761923
Visuotactile motion congruence enhances gamma-band activity in visual and somatosensory cortices.
Krebber, Martin; Harwood, James; Spitzer, Bernhard; Keil, Julian; Senkowski, Daniel
2015-08-15
When touching and viewing a moving surface, our visual and somatosensory systems receive congruent spatiotemporal input. Behavioral studies have shown that motion congruence facilitates interplay between visual and tactile stimuli, but the neural mechanisms underlying this interplay are not well understood. Neural oscillations play a role in motion processing and multisensory integration. They may also be crucial for visuotactile motion processing. In this electroencephalography study, we applied linear beamforming to examine the impact of visuotactile motion congruence on beta and gamma band activity (GBA) in visual and somatosensory cortices. Visual and tactile inputs comprised gratings that moved either in the same or different directions. Participants performed a target detection task that was unrelated to motion congruence. While there were no effects in the beta band (13-21Hz), the power of GBA (50-80Hz) in visual and somatosensory cortices was larger for congruent compared with incongruent motion stimuli. This suggests enhanced bottom-up multisensory processing when visual and tactile gratings moved in the same direction. Supporting its behavioral relevance, GBA was correlated with shorter reaction times in the target detection task. We conclude that motion congruence plays an important role for the integrative processing of visuotactile stimuli in sensory cortices, as reflected by oscillatory responses in the gamma band. Copyright © 2015 Elsevier Inc. All rights reserved.
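Band-limited power of the kind analyzed here (GBA at 50-80 Hz versus beta at 13-21 Hz) can be estimated from a single channel with a plain periodogram (a deliberately simplified stand-in for the beamformer-based source estimates used in the study):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean spectral power of `signal` between f_lo and f_hi (Hz),
    estimated from a single periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

# A 60 Hz oscillation buried in noise shows up in the gamma band, not in beta.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 60 * t) + 0.5 * rng.standard_normal(len(t))
gamma = band_power(eeg, fs, 50, 80)
beta = band_power(eeg, fs, 13, 21)
```

In practice one would use multitaper or Welch estimates on beamformed source time courses, but the band-masking logic is the same.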
A Novel Audiovisual Brain-Computer Interface and Its Application in Awareness Detection.
Wang, Fei; He, Yanbin; Pan, Jiahui; Xie, Qiuyou; Yu, Ronghao; Zhang, Rui; Li, Yuanqing
2015-06-30
Currently, detecting awareness in patients with disorders of consciousness (DOC) is a challenging task, which is commonly addressed through behavioral observation scales such as the JFK Coma Recovery Scale-Revised. Brain-computer interfaces (BCIs) provide an alternative approach to detect awareness in patients with DOC. However, these patients have a much lower capability of using BCIs compared to healthy individuals. This study proposed a novel BCI using temporally, spatially, and semantically congruent audiovisual stimuli involving numbers (i.e., visual and spoken numbers). Subjects were instructed to selectively attend to the target stimuli cued by instruction. Ten healthy subjects first participated in the experiment to evaluate the system. The results indicated that the audiovisual BCI system outperformed auditory-only and visual-only systems. Through event-related potential analysis, we observed audiovisual integration effects for target stimuli, which enhanced the discriminability between brain responses for target and nontarget stimuli and thus improved the performance of the audiovisual BCI. This system was then applied to detect the awareness of seven DOC patients, five of whom exhibited command following as well as number recognition. Thus, this audiovisual BCI system may be used as a supportive bedside tool for awareness detection in patients with DOC.
A new method for detecting small and dim targets in starry background
NASA Astrophysics Data System (ADS)
Yao, Rui; Zhang, Yanning; Jiang, Lei
2011-08-01
Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the influence of the imaging device itself. Random noise and background movement also increase the difficulty of target detection. To detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. First, we establish a reasonable statistical model of the visible optical space image. Second, we extract SIFT (Scale-Invariant Feature Transform) features from the image frames, calculate the transform relationship between frames, and use it to compensate for the motion of the whole visual field. Third, the influence of stars is removed using an interframe difference method, and a segmentation threshold separating candidate targets from noise is found using the OTSU method. Finally, for every pixel position in the image we compute a statistic to judge whether a target is present. Theoretical analysis gives the relationship between false alarm probability and detection probability at different SNRs. The experimental results show that the method detects targets efficiently, even when a target passes in front of stars.
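The interframe-difference and OTSU-thresholding steps described above can be sketched in a few lines of numpy; the function names and the synthetic frames are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Return the threshold that maximizes between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)              # probability of class below the cut
    mu = np.cumsum(p * centers)    # cumulative mean
    mu_t = mu[-1]
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

def candidate_targets(prev_frame, frame):
    """Interframe difference followed by Otsu segmentation of the residual."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff > otsu_threshold(diff)
```

On a pair of registered frames, the returned mask marks pixels whose change exceeds the automatically chosen threshold, i.e. the candidate target positions.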
All I saw was the cake. Hunger effects on attentional capture by visual food cues.
Piech, Richard M; Pastorino, Michael T; Zald, David H
2010-06-01
While effects of hunger on motivation and food reward value are well-established, far less is known about the effects of hunger on cognitive processes. Here, we deployed the emotional blink of attention paradigm to investigate the impact of visual food cues on attentional capture under conditions of hunger and satiety. Participants were asked to detect targets which appeared in a rapid visual stream after different types of task irrelevant distractors. We observed that food stimuli acquired increased power to capture attention and prevent target detection when participants were hungry. This occurred despite monetary incentives to perform well. Our findings suggest an attentional mechanism through which hunger heightens perception of food cues. As an objective behavioral marker of the attentional sensitivity to food cues, the emotional attentional blink paradigm may provide a useful technique for studying individual differences, and state manipulations in the sensitivity to food cues. Published by Elsevier Ltd.
Selective attention in anxiety: distraction and enhancement in visual search.
Rinck, Mike; Becker, Eni S; Kellermann, Jana; Roth, Walton T
2003-01-01
According to cognitive models of anxiety, anxiety patients exhibit an attentional bias towards threat, manifested as greater distractibility by threat stimuli and enhanced detection of them. Both phenomena were studied in two experiments, using a modified visual search task, in which participants were asked to find single target words (GAD-related, speech-related, neutral, or positive) hidden in matrices made up of distractor words (also GAD-related, speech-related, neutral, or positive). Generalized anxiety disorder (GAD) patients, social phobia (SP) patients afraid of giving speeches, and healthy controls participated in the visual search task. GAD patients were slowed by GAD-related distractor words but did not show statistically reliable evidence of enhanced detection of GAD-related target words. SP patients showed neither distraction nor enhancement effects. These results extend previous findings of attentional biases observed with other experimental paradigms. Copyright 2003 Wiley-Liss, Inc.
Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention.
Callaghan, Eleanor; Holland, Carol; Kessler, Klaus
2017-01-01
Background: Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults' driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events changing in time, to distributing attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group. Methods: Age groups (21-30, 40-49, 50-59, 60-69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required. Results: Visual search response times (RTs) were longer on "Target Mid" and "Distractor Only" trials in comparison to "Target 1st" trials, reflecting switch-costs. Larger switch-costs were found in both the 40-49 and 60-69 years groups in comparison to the 21-30 years group when switching from the Target Mid condition. Discussion: Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving.
If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial.
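The switch-cost measure described above is simply the difference between mean RTs on switch and no-switch trials; a minimal sketch with synthetic RTs (the numbers are invented for illustration, not the study's data):

```python
import numpy as np

# Illustrative visual-search RTs in ms, grouped by RSVP condition
rt = {
    "target_1st": np.array([520, 540, 510, 535]),      # no-switch baseline
    "target_mid": np.array([600, 615, 590, 605]),      # switch trials
    "distractor_only": np.array([585, 595, 580, 600]), # switch trials
}

# Switch-cost: mean switch RT minus mean no-switch RT
cost_mid = rt["target_mid"].mean() - rt["target_1st"].mean()
cost_dis = rt["distractor_only"].mean() - rt["target_1st"].mean()
```

With these illustrative values both costs are positive, mirroring the longer RTs reported for "Target Mid" and "Distractor Only" trials.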
A color fusion method of infrared and low-light-level images based on visual perception
NASA Astrophysics Data System (ADS)
Han, Jing; Yan, Minmin; Zhang, Yi; Bai, Lianfa
2014-11-01
Color fusion images, obtained by fusing infrared and low-light-level images, contain the information of both sources and help observers to understand multichannel images comprehensively. However, simple fusion may lose target information, because targets in long-distance infrared and low-light-level images are inconspicuous; and if target extraction is adopted blindly, perception of the scene information is seriously affected. To solve this problem, a new fusion method based on visual perception is proposed in this paper. The extraction of visual targets ("what" information) and a parallel processing mechanism are applied to traditional color fusion methods. The infrared and low-light-level color fusion images are achieved based on efficient learning of typical targets. Experimental results show the effectiveness of the proposed method. The fusion images achieved by our algorithm not only improve the detection rate of targets but also retain rich natural information of the scenes.
Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination
Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.
2010-01-01
Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454
Two-color mixing for classifying agricultural products for safety and quality
NASA Astrophysics Data System (ADS)
Ding, Fujian; Chen, Yud-Ren; Chao, Kuanglin; Chan, Diane E.
2006-02-01
We show that the chromaticness of the visual signal that results from the two-color mixing achieved through an optically enhanced binocular device is directly related to the band ratio of light intensity at the two selected wavebands. A technique that implements the band-ratio criterion in a visual device by using two-color mixing is presented here. The device will allow inspectors to identify targets visually in accordance with a two-wavelength band ratio. It is a method of inspection by human vision assisted by an optical device, which offers greater flexibility and better cost savings than a multispectral machine vision system that implements the band-ratio criterion. With proper selection of the two narrow wavebands, discrimination by chromaticness that is directly related to the band ratio can work well. An example application of this technique is given for the inspection of carcasses of chickens afflicted with various diseases. An optimal pair of wavelengths of 454 and 578 nm was selected to optimize differences in saturation and hue in CIE LUV color space among different types of target. Another example application, for the detection of chilling injury in cucumbers, is also given; here the selected wavelength pair was 504 and 652 nm. The novel two-color mixing technique for visual inspection can be included in visual devices for various applications, ranging from target detection to food safety inspection.
Right Hemisphere Specialization for Color Detection
ERIC Educational Resources Information Center
Sasaki, Hitoshi; Morimoto, Akiko; Nishio, Akira; Matsuura, Sumie
2007-01-01
Three experiments were carried out to investigate hemispheric asymmetry in color processing among normal participants. In Experiment 1, it was shown that the reaction times (RTs) of the dominant and non-dominant hands assessed using a visual target presented at the central visual field, were not significantly different. In Experiment 2, RTs of…
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel method for detecting small moving infrared targets against complicated backgrounds, based on tracking interest points. First, Difference-of-Gaussians (DOG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) is used to track these interest points over several frames, yielding the correlations between interest points in the first and last frames. Finally, a new clustering method, named R-means, is proposed to divide the interest points into two groups according to these correlations: target points and background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with that of five other sophisticated methods. The results show that the proposed method discriminates targets from clutter better, and has a lower false alarm rate, than the existing moving target detection methods.
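As an illustration of the first step, a Difference-of-Gaussians interest-point detector can be sketched with scipy; the scale and threshold parameters are assumptions for demonstration, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_response(img, sigma1=1.0, sigma2=2.0):
    """Band-pass response: fine-scale blur minus coarse-scale blur."""
    img = img.astype(float)
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def interest_points(img, sigma1=1.0, sigma2=2.0, thresh=0.05):
    """Coordinates of local maxima of the DOG response above a threshold."""
    r = dog_response(img, sigma1, sigma2)
    peaks = (r == maximum_filter(r, size=5)) & (r > thresh)
    return np.argwhere(peaks)
```

A point-like bright target produces a strong DOG peak at its location, which is what makes this filter a natural front end for small-target detection.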
Adaptation mechanisms, eccentricity profiles, and clinical implementation of red-on-white perimetry.
Zele, Andrew J; Dang, Trung M; O'Loughlin, Rebecca K; Guymer, Robyn H; Harper, Alex; Vingrys, Algis J
2008-05-01
To determine the visual adaptation and retinal eccentricity profiles for red flickering and static test stimuli and report a clinical implementation of these stimuli in visual perimetry. The adaptation profile for red-on-white perimetry stimuli was measured using a threshold vs. intensity (TvI) paradigm at 0 degree and 12 degrees eccentricity and by comparing the eccentricity-related sensitivity change for red and white, static, and flickering targets in young normal trichromats (n = 5) and a group of dichromats (n = 5). A group of older normal control observers (n = 30) was tested and retinal disease was evaluated in persons having age-related maculopathy (n = 35) and diabetes (n = 12). Adaptation and eccentricity profiles indicate red static and flickering targets are detected by two mechanisms in the paramacular region, and a single mechanism for >5 degrees eccentricity. The group data for the older normal observers show a high level of inter-observer variability with a generalized reduction in sensitivity across the entire visual field. Group data for the participants with age-related maculopathy show reduced sensitivities that were pronounced in the central retina. The group data for the diabetic observers showed sensitivities that were reduced at all eccentricities. The disease-related sensitivity decline was more apparent with red than white stimuli. The adaptation profile and change in sensitivity with retinal eccentricity for the red-on-white perimetric stimuli are consistent with two detection processes. In the macula, the putative detection mechanism is color-opponent with static targets and non-opponent with flickering targets. At peripheral field locations, the putative detection mechanism is non-opponent for both static and flicker targets. The long-wavelength stimuli are less affected by the preretinal absorption common to aging.
Red-on-white static and flicker perimetry may be useful for monitoring retinal disease, revealing greater abnormalities compared with conventional white-on-white perimetry, especially in the macula where two detection mechanisms are found.
Onoyama, Haruna; Kamiya, Mako; Kuriki, Yugo; Komatsu, Toru; Abe, Hiroyuki; Tsuji, Yosuke; Yagi, Koichi; Yamagata, Yukinori; Aikou, Susumu; Nishida, Masato; Mori, Kazuhiko; Yamashita, Hiroharu; Fujishiro, Mitsuhiro; Nomura, Sachiyo; Shimizu, Nobuyuki; Fukayama, Masashi; Koike, Kazuhiko; Urano, Yasuteru; Seto, Yasuyuki
2016-01-01
Early detection of esophageal squamous cell carcinoma (ESCC) is an important prognosticator, but is difficult to achieve by conventional endoscopy. Conventional lugol chromoendoscopy and equipment-based image-enhanced endoscopy, such as narrow-band imaging (NBI), have various practical limitations. Since fluorescence-based visualization is considered a promising approach, we aimed to develop an activatable fluorescence probe to visualize ESCCs. First, based on the fact that various aminopeptidase activities are elevated in cancer, we screened freshly resected specimens from patients with a series of aminopeptidase-activatable fluorescence probes. The results indicated that dipeptidylpeptidase IV (DPP-IV) is specifically activated in ESCCs, and would be a suitable molecular target for detection of esophageal cancer. Therefore, we designed, synthesized and characterized a series of DPP-IV-activatable fluorescence probes. When the selected probe was topically sprayed onto endoscopic submucosal dissection (ESD) or surgical specimens, tumors were visualized within 5 min, and when the probe was sprayed on biopsy samples, the sensitivity, specificity and accuracy reached 96.9%, 85.7% and 90.5%. We believe that DPP-IV-targeted activatable fluorescence probes are practically translatable as convenient tools for clinical application to enable rapid and accurate diagnosis of early esophageal cancer during endoscopic or surgical procedures. PMID:27245876
Maekawa, Toru; Anderson, Stephen J; de Brecht, Matthew; Yamagishi, Noriko
2018-01-01
The study of visual perception has largely been conducted without regard to the influence that an individual's emotional status may have on their performance in visual tasks. However, there is a growing body of evidence to suggest that mood may affect not only creative abilities and interpersonal skills but also the capacity to perform low-level cognitive tasks. Here, we sought to determine whether rudimentary visual search processes are similarly affected by emotion. Specifically, we examined whether an individual's perceived happiness level affects their ability to detect a target in noise. To do so, we employed pop-out and serial visual search paradigms, implemented using a novel smartphone application that allowed search times and self-rated levels of happiness to be recorded throughout each twenty-four-hour period for two weeks. This experience sampling protocol circumvented the need to alter mood artificially with laboratory-based induction methods. Using our smartphone application, we were able to replicate the classic visual search findings, whereby pop-out search times remained largely unaffected by the number of distractors whereas serial search times increased with increasing number of distractors. While pop-out search times were unaffected by happiness level, serial search times with the maximum numbers of distractors (n = 30) were significantly faster for high happiness levels than low happiness levels (p = 0.02). Our results demonstrate the utility of smartphone applications in assessing ecologically valid measures of human visual performance. We discuss the significance of our findings for the assessment of basic visual functions using search time measures, and for our ability to search effectively for targets in real world settings. PMID:29664952
Malavita, Menaka S; Vidyasagar, Trichur R; McKendrick, Allison M
2017-02-01
The purpose of this study was to examine how, in midperipheral vision, aging affects the visual processes that interfere with target detection (crowding and surround suppression), and to determine whether performance on such tasks is related to visuospatial attention as measured by visual search. We investigated the effect of aging on crowding and suppression in the detection of a target in peripheral vision, using different types of flanking stimuli. Both thresholds were also obtained while varying the position of the flanker (placed inside or outside of the target, relative to fixation). Crowding thresholds were also estimated with spatial uncertainty (jitter). Additionally, we included a visual search task comprising Gabor stimuli to investigate whether performance is related to top-down attention. Twenty young adults (age, 18-32 years; mean age, 26.1 years; 10 males) and 19 older adults (age, 60-74 years; mean age, 70.3 years; 10 males) participated in the study. Older adults showed more surround suppression than the young (F[1,37] = 4.21; P < 0.05), but crowding was unaffected by age. In the younger group, the position of the flanker influenced the strength of crowding, but not the strength of suppression (F[1,39] = 4.11; P < 0.05). Crowding was not affected by spatial jitter of the stimuli. Neither crowding nor surround suppression was predicted by attentional efficiency measured in the visual search task. There was also no significant correlation between crowding and surround suppression. We show that aging does not affect visual crowding but does increase surround suppression of contrast, suggesting that crowding and surround suppression may be distinct visual phenomena. Furthermore, the strengths of crowding and surround suppression did not correlate with each other, nor could they be predicted by the efficiency of visual search.
Visual search in Alzheimer's disease: a deficiency in processing conjunctions of features.
Tales, A; Butler, S R; Fossey, J; Gilchrist, I D; Jones, R W; Troscianko, T
2002-01-01
Human vision often needs to encode multiple characteristics of many elements of the visual field, for example their lightness and orientation. The paradigm of visual search allows a quantitative assessment of the function of the underlying mechanisms. It measures the ability to detect a target element among a set of distractor elements. We asked whether Alzheimer's disease (AD) patients are particularly affected in one type of search, where the target is defined by a conjunction of features (orientation and lightness) and where performance depends on some shifting of attention. Two non-conjunction control conditions were employed. The first was a pre-attentive, single-feature, "pop-out" task, detecting a vertical target among horizontal distractors. The second was a single-feature, partly attentive task in which the target element was slightly larger than the distractors (a "size" task). This was chosen to have a similar level of attentional load as the conjunction task (for the control group), but lacked the conjunction of two features. In the experiment, 15 AD patients were compared to age-matched controls. The results suggested that AD patients have a particular impairment in the conjunction task but not in the single-feature size or pre-attentive tasks. This may imply that AD particularly affects those mechanisms which compare across more than one feature type, while sparing the other systems; it is not, therefore, simply an 'attention-related' impairment. Additionally, these findings show a double dissociation with previous data on visual search in Parkinson's disease (PD), suggesting a different effect of these diseases on the visual pathway.
On the relationship between human search strategies, conspicuity, and search performance
NASA Astrophysics Data System (ADS)
Hogervorst, Maarten A.; Bijl, Piet; Toet, Alexander
2005-05-01
We determined the relationship between search performance with a limited field of view (FOV) and several scanning and scene parameters in human observer experiments. The observers (38 trained army scouts) searched through a large search sector for a target (a camouflaged person) on a heath. From trial to trial the target appeared at a different location. With a joystick the observers scanned through a panoramic image (displayed on a PC monitor) while the scan path was registered. Four conditions were run, differing in sensor type (visual or thermal infrared) and window size (large or small). In conditions with a small window size the zoom option could be used. Detection performance was highly dependent on zoom factor and deteriorated when scan speed increased beyond a threshold value. Moreover, the distribution of scan speeds scales with the threshold speed. This indicates that the observers are aware of their limitations and choose a (near) optimal search strategy. We found no correlation between the fraction of detected targets and overall search time for the individual observers, indicating that both are independent measures of individual search performance. Search performance (fraction detected, total search time, time in view for detection) was found to be strongly related to target conspicuity. Moreover, we found the same relationship between search performance and conspicuity for visual and thermal targets. This indicates that search performance can be predicted directly by conspicuity, regardless of the sensor type.
Does working memory load facilitate target detection?
Fruchtman-Steinbok, Tom; Kessler, Yoav
2016-02-01
Previous studies demonstrated that increasing working memory (WM) load delays performance of a concurrent task, by distracting attention and thus interfering with encoding and maintenance processes. The present study used a version of the change detection task with a target detection requirement during the retention interval. In contrast to the above prediction, target detection was faster following a larger set-size, specifically when presented shortly after the memory array (up to 400 ms). The effect of set-size on target detection was also evident when no memory retention was required. The set-size effect was also found using different modalities. Moreover, it was only observed when the memory array was presented simultaneously, but not sequentially. These results were explained by increased phasic alertness exerted by the larger visual display. The present study offers new evidence of ongoing attentional processes in the commonly-used change detection paradigm. Copyright © 2015 Elsevier B.V. All rights reserved.
Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.
Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng
2017-03-01
Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
Chen, Xiaoyun; Wang, Xiaofu; Jin, Nuo; Zhou, Yu; Huang, Sainan; Miao, Qingmei; Zhu, Qing; Xu, Junfeng
2012-11-07
Genetically modified (GM) rice KMD1, TT51-1, and KF6 are three of the most well known transgenic Bt rice lines in China. A rapid and sensitive molecular assay for risk assessment of GM rice is needed. Polymerase chain reaction (PCR), currently the most common method for detecting genetically modified organisms, requires temperature cycling and relatively complex procedures. Here we developed a visual and rapid loop-mediated isothermal amplification (LAMP) method to amplify three GM rice event-specific junction sequences. Target DNA was amplified and visualized by two indicators (SYBR green or hydroxy naphthol blue [HNB]) within 60 min at an isothermal temperature of 63 °C. Different kinds of plants were selected to ensure the specificity of detection and the results of the non-target samples were negative, indicating that the primer sets for the three GM rice varieties had good levels of specificity. The sensitivity of LAMP, with detection limits at low concentration levels (0.01%-0.005% GM), was 10- to 100-fold greater than that of conventional PCR. Additionally, the LAMP assay coupled with an indicator (SYBR green or HNB) facilitated analysis. These findings revealed that the rapid detection method was suitable as a simple field-based test to determine the status of GM crops.
Vision and foraging in cormorants: more like herons than hawks?
White, Craig R; Day, Norman; Butler, Patrick J; Martin, Graham R
2007-07-25
Great cormorants (Phalacrocorax carbo L.) show the highest known foraging yield for a marine predator and they are often perceived to be in conflict with human economic interests. They are generally regarded as visually-guided, pursuit-dive foragers, so it would be expected that cormorants have excellent vision much like aerial predators, such as hawks which detect and pursue prey from a distance. Indeed cormorant eyes appear to show some specific adaptations to the amphibious life style. They are reported to have a highly pliable lens and powerful intraocular muscles which are thought to accommodate for the loss of corneal refractive power that accompanies immersion and ensures a well focussed image on the retina. However, nothing is known of the visual performance of these birds and how this might influence their prey capture technique. We measured the aquatic visual acuity of great cormorants under a range of viewing conditions (illuminance, target contrast, viewing distance) and found it to be unexpectedly poor. Cormorant visual acuity under a range of viewing conditions is in fact comparable to unaided humans under water, and very inferior to that of aerial predators. We present a prey detectability model based upon the known acuity of cormorants at different illuminances, target contrasts and viewing distances. This shows that cormorants are able to detect individual prey only at close range (less than 1 m). We conclude that cormorants are not the aquatic equivalent of hawks. Their efficient hunting involves the use of specialised foraging techniques which employ brief short-distance pursuit and/or rapid neck extension to capture prey that is visually detected or flushed only at short range. This technique appears to be driven proximately by the cormorant's limited visual capacities, and is analogous to the foraging techniques employed by herons.
Tracking, aiming, and hitting the UAV with ordinary assault rifle
NASA Astrophysics Data System (ADS)
Racek, František; Baláž, Teodor; Krejčí, Jaroslav; Procházka, Stanislav; Macko, Martin
2017-10-01
The usage of small unmanned aerial vehicles (UAVs) is increasing significantly nowadays. They are used as carriers of military spy and reconnaissance devices (taking photos, live video streaming, and so on), or as carriers of potentially dangerous cargo (intended for destruction and killing). Both uses of UAVs create the necessity to disable them. From the military point of view, disabling a UAV means bringing it down with the weapon of an ordinary soldier, that is, the assault rifle. This task can be challenging for the soldier, because he must visually detect and identify the target, track it visually, and aim at it. The final success of the soldier's mission depends not only on these visual tasks, but also on the properties of the weapon and ammunition. The paper deals with possible methods of predicting the probability of hitting UAV targets.
Zhang, Xianxia; Xiao, Kunyi; Cheng, Liwei; Chen, Hui; Liu, Baohong; Zhang, Song; Kong, Jilie
2014-06-03
Rapid and efficient detection of cancer cells at their earliest stages is one of the central challenges in cancer diagnostics. We developed a simple, cost-effective, and highly sensitive colorimetric method for visually detecting rare cancer cells based on cell-triggered cyclic enzymatic signal amplification (CTCESA). In the absence of target cells, hairpin aptamer probes (HAPs) and linker DNAs stably coexist in solution, and the linker DNA assembles DNA-AuNPs, producing a purple solution. In the presence of target cells, the specific binding of HAPs to the target cells triggers a conformational switch that results in linker DNA hybridization and cleavage by nicking endonuclease-strand scission cycles. Consequently, the cleaved fragments of linker DNA can no longer assemble DNA-AuNPs, resulting in a red color. UV-vis spectrometry and photographic analyses demonstrated that this CTCESA-based method exhibited selective and sensitive colorimetric responses to the presence of target CCRF-CEM cells, which could be detected by the naked eye. A linear response for CCRF-CEM cells was obtained in a concentration range from 10^2 to 10^4 cells, with a detection limit of 40 cells, which is approximately 20 times lower than the detection limit of normal AuNP-based methods without amplification. Given the high specificity and sensitivity of CTCESA, this colorimetric method provides a sensitive, label-free, and cost-effective approach for early cancer diagnosis and point-of-care applications.
Lin, Zhimin; Zeng, Ying; Tong, Li; Zhang, Hangming; Zhang, Chi
2017-01-01
The application of electroencephalogram (EEG) signals generated by humans viewing images is a new thrust in image retrieval technology. A P300 component in the EEG is induced when subjects see their point of interest in a target image under the rapid serial visual presentation (RSVP) experimental paradigm. We detected the single-trial P300 component to determine whether a subject was interested in an image. In practice, the latency and amplitude of the P300 component may vary in relation to different experimental parameters, such as target probability and stimulus semantics. Thus, we proposed a novel method, the Target Recognition using Image Complexity Priori (TRICP) algorithm, in which image information is introduced into the calculation of the interest score in the RSVP paradigm. The method combines information from the image and the EEG to enhance the accuracy of single-trial P300 detection over traditional single-trial P300 detection algorithms. We defined an image complexity parameter based on the features of the different layers of a convolutional neural network (CNN). We used the TRICP algorithm to compute the complexity of an image, to quantify the effect of images of different complexity on the P300 components, and to train specialized classifiers according to image complexity. We compared TRICP with the HDCA algorithm. Results show that the detection accuracy of TRICP is significantly higher than that of the HDCA algorithm (Wilcoxon Signed Rank Test, p<0.05). Thus, the proposed method can also be used in other visual-task-related single-trial event-related potential detection applications. PMID:29283998
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis; ...
2015-02-13
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highest level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training—evaluation in a more realistic setting may be necessary.
Grubert, Anna; Indino, Marcello; Krummenacher, Joseph
2014-01-01
In an experiment involving a total of 124 participants, divided into eight age groups (6-, 8-, 10-, 12-, 14-, 16-, 18-, and 20-year-olds), the development of the processing components underlying visual search for pop-out targets was tracked. Participants indicated the presence or absence of color or orientation feature singleton targets. Observers also solved a detection task, in which they responded to the onset of search arrays. There were two main results. First, analyses of inter-trial effects revealed differences in the search strategies of the 6-year-old participants compared to older age groups. Participants older than 8 years based target detection on feature-less dimensional salience signals (indicated by cross-trial RT costs in target dimension change relative to repetition trials), whereas the 6-year-olds accessed the target feature to make a target present or absent decision (cross-trial RT costs in target feature change relative to feature repetition trials). The result agrees with predictions derived from the Dimension Weighting account and previous investigations of inter-trial effects in adult observers (Müller et al., 1995; Found and Müller, 1996). The results are also in line with theories of cognitive development suggesting that the ability to abstract specific visual features into feature categories is developed after the age of 7 years. Second, overall search RTs decreased with increasing age in a decelerated fashion. RT differences between consecutive age groups can be explained by sensory-motor maturation up to the age of 10 years (as indicated by RTs in the onset detection task). Expedited RTs in older age groups (10- vs. 12-year-olds; 14- vs. 16-year-olds), but also in the 6- vs. 8-year-olds, are due to the development of search-related (cognitive) processes. Overall, the results suggest that the level of adult performance in visual search for pop-out targets is achieved by the age of 16. PMID:24910627
Multi-brain fusion and applications to intelligence analysis
NASA Astrophysics Data System (ADS)
Stoica, A.; Matran-Fernandez, A.; Andreou, D.; Poli, R.; Cinel, C.; Iwashita, Y.; Padgett, C.
2013-05-01
In a rapid serial visual presentation (RSVP), images are shown at an extremely rapid pace. Yet the images can still be parsed by the visual system to some extent. In fact, the detection of specific targets in a stream of pictures triggers a characteristic electroencephalography (EEG) response that can be recognized by a brain-computer interface (BCI) and exploited for automatic target detection. Research funded by DARPA's Neurotechnology for Intelligence Analysts program has achieved speed-ups in sifting through satellite images when adopting this approach. This paper extends the use of BCI technology from individual analysts to collaborative BCIs. We show that the integration of information in EEGs collected from multiple operators results in performance improvements compared to the single-operator case.
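A minimal simulation of the fusion idea is easy to write down: averaging single-trial classifier scores across independent operators shrinks the noise and raises detection performance (AUC) relative to a single operator. The score distributions, effect size, and operator count below are invented for illustration, not taken from the study.

```python
import random

random.seed(0)

def auc(pos, neg):
    """Area under the ROC curve via pairwise score comparisons."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

n_ops, n_trials = 4, 500
# Simulated single-trial classifier scores per operator: target trials
# score slightly higher on average than non-target trials.
pos_ops = [[random.gauss(0.5, 1.0) for _ in range(n_trials)]
           for _ in range(n_ops)]
neg_ops = [[random.gauss(0.0, 1.0) for _ in range(n_trials)]
           for _ in range(n_ops)]

single_auc = auc(pos_ops[0], neg_ops[0])

# Multi-brain fusion: average the scores of all operators trial by trial.
fused_pos = [sum(col) / n_ops for col in zip(*pos_ops)]
fused_neg = [sum(col) / n_ops for col in zip(*neg_ops)]
fused_auc = auc(fused_pos, fused_neg)
```

With independent operator noise, averaging N operators scales the effective signal-to-noise ratio by roughly √N, which is the intuition behind the reported group-level improvements.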
Neural basis of superior performance of action videogame players in an attention-demanding task.
Mishra, Jyoti; Zinni, Marla; Bavelier, Daphne; Hillyard, Steven A
2011-01-19
Steady-state visual evoked potentials (SSVEPs) were recorded from action videogame players (VGPs) and from non-videogame players (NVGPs) during an attention-demanding task. Participants were presented with a multi-stimulus display consisting of rapid sequences of alphanumeric stimuli presented at rates of 8.6/12 Hz in the left/right peripheral visual fields, along with a central square at fixation flashing at 5.5 Hz and a letter sequence flashing at 15 Hz at an upper central location. Subjects were cued to attend to one of the peripheral or central stimulus sequences and detect occasional targets. Consistent with previous behavioral studies, VGPs detected targets with greater speed and accuracy than NVGPs. This behavioral advantage was associated with an increased suppression of SSVEP amplitudes to unattended peripheral sequences in VGPs relative to NVGPs, whereas the magnitude of the attended SSVEPs was equivalent in the two groups. Group differences were also observed in the event-related potentials to targets in the alphanumeric sequences, with the target-elicited P300 component being of larger amplitude in VGPs than in NVGPs. These electrophysiological findings suggest that the superior target detection capabilities of the VGPs are attributable, at least in part, to enhanced suppression of distracting irrelevant information and more effective perceptual decision processes.
Modulation of neuronal responses during covert search for visual feature conjunctions
Buracas, Giedrius T.; Albright, Thomas D.
2009-01-01
While searching for an object in a visual scene, an observer's attentional focus and eye movements are often guided by information about object features and spatial locations. Both spatial and feature-specific attention are known to modulate neuronal responses in visual cortex, but little is known of the dynamics and interplay of these mechanisms as visual search progresses. To address this issue, we recorded from directionally selective cells in visual area MT of monkeys trained to covertly search for targets defined by a unique conjunction of color and motion features and to signal target detection with an eye movement to the putative target. Two patterns of response modulation were observed. One pattern consisted of enhanced responses to targets presented in the receptive field (RF). These modulations occurred at the end-stage of search and were more potent during correct target identification than during erroneous saccades to a distractor in RF, thus suggesting that this modulation is not a mere presaccadic enhancement. A second pattern of modulation was observed when RF stimuli were nontargets that shared a feature with the target. The latter effect was observed during early stages of search and is consistent with a global feature-specific mechanism. This effect often terminated before target identification, thus suggesting that it interacts with spatial attention. This modulation was exhibited not only for motion but also for color cue, although MT neurons are known to be insensitive to color. Such cue-invariant attentional effects may contribute to a feature binding mechanism acting across visual dimensions. PMID:19805385
Assistive obstacle detection and navigation devices for vision-impaired users.
Ong, S K; Zhang, J; Nee, A Y C
2013-09-01
Quality of life for the visually impaired is an urgent worldwide issue that needs to be addressed. Obstacle detection is one of the most important navigation tasks for the visually impaired. In this research, a novel range sensor placement scheme is proposed for the development of obstacle detection devices. Based on this scheme, two prototypes have been developed targeting different user groups. This paper discusses the design issues, functional modules and the evaluation tests carried out for both prototypes. Implications for Rehabilitation: Visual impairment is becoming more prevalent owing to the worldwide ageing population. Individuals with visual impairment require assistance from assistive devices in daily navigation tasks. Traditional assistive devices that assist navigation may have certain drawbacks, such as the limited sensing range of a white cane. Obstacle detection devices applying range sensor technology can identify road conditions over a longer sensing range and notify users of potential dangers in advance.
NASA Astrophysics Data System (ADS)
Asanuma, Daisuke; Urano, Yasuteru; Nagano, Tetsuo; Hama, Yukihiro; Koyama, Yoshinori; Kobayashi, Hisataka
2009-02-01
One goal of molecular imaging is to establish a widely applicable technique for the specific detection of tumors with minimal background. Here, we achieve specific in vivo tumor visualization with a newly designed "activatable" targeted fluorescence probe. This agent is activated after cellular internalization by sensing the pH change in the lysosome. Novel acidic pH-activatable probes based on the BODIPY fluorophore were synthesized and then conjugated to a cancer-targeting monoclonal antibody, Trastuzumab, or to galactosyl serum albumin (GSA). As proof of concept, ex vivo and in vivo imaging of two different tumor mouse models was performed: HER2-overexpressing lung metastasis tumors with Trastuzumab-pH probe conjugates and lectin-overexpressing i.p. disseminated tumors with GSA-pH probe conjugates. These pH-activatable targeted probes were highly specific for tumors with minimal background signal. Because the acidic pH in lysosomes is maintained by the energy-consuming proton pump, only viable cancer cells were successfully visualized. Furthermore, this strategy was also applied to fluorescence endoscopy in tumor mouse models, resulting in the specific visualization of tumors as small as a submillimeter in size that could hardly be detected by the naked eye because of their poor contrast against normal tissues. The design concept can be widely adapted to cancer-specific cell-surface-targeting molecules that result in cellular internalization.
Effect of inherent location uncertainty on detection of stationary targets in noisy image sequences.
Manjeshwar, R M; Wilson, D L
2001-01-01
The effect of inherent location uncertainty on the detection of stationary targets was determined in noisy image sequences. Targets were thick and thin projected cylinders mimicking arteries, catheters, and guide wires in medical imaging x-ray fluoroscopy. With the use of an adaptive forced-choice method, detection contrast sensitivity (the inverse of contrast) was measured both with and without marker cues that directed the attention of observers to the target location. With the probability correct clamped at 80%, contrast sensitivity increased an average of 77% when the marker was added to the thin-cylinder target. There was an insignificant effect on the thick cylinder. The large enhancement with the thin cylinder was obtained even though the target was located exactly in the center of a small panel, giving observers the impression that it was well localized. Psychometric functions consisting of d' plotted as a function of the square root of the signal-energy-to-noise ratio gave a positive x intercept for the case of the thin cylinder without a marker. This x intercept, characteristic of uncertainty in other types of detection experiments, disappeared when the marker was added or when the thick cylinder was used. Inherent location uncertainty was further characterized by using four different markers with varying proximity to the target. Visual detection by human observers increased monotonically as the markers better localized the target. Human performance was modeled as a matched-filter detector with an uncertainty in the placement of the template. The removal of a location cue was modeled by introducing a location uncertainty of approximately 0.4 mm on the display device, or only 7 μm on the retina, a size on the order of a single photoreceptor field. We conclude that detection is affected by target location uncertainty on the order of cellular dimensions, an observation with important implications for detection mechanisms in humans. In medical imaging, the results argue strongly for inclusion of high-contrast visualization markers on catheters and other interventional devices.
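The matched-filter-with-template-jitter model described above can be sketched in a small simulation: a cued observer correlates the noisy image with a perfectly placed template, while an uncued observer's template position is jittered, which lowers d'. The 1-D signal shape, noise level, and jitter magnitude below are illustrative assumptions, not the authors' fitted parameters.

```python
import math
import random

random.seed(1)

LENGTH, WIDTH, AMP, NOISE_SD = 64, 2.0, 1.0, 1.0

def profile(center):
    """Gaussian intensity profile of a thin, cylinder-like target."""
    return [math.exp(-0.5 * ((i - center) / WIDTH) ** 2)
            for i in range(LENGTH)]

def trial(signal_present, jitter_sd):
    """Matched-filter response with the template placed at a jittered
    location (jitter_sd = 0 models a perfect marker cue)."""
    x = [random.gauss(0.0, NOISE_SD) for _ in range(LENGTH)]
    if signal_present:
        x = [xi + AMP * si for xi, si in zip(x, profile(32))]
    shift = round(random.gauss(0.0, jitter_sd))
    template = profile(32 + shift)
    return sum(a * b for a, b in zip(x, template))

def dprime(jitter_sd, n=2000):
    """Monte-Carlo d' of the jittered matched-filter detector."""
    hits = [trial(True, jitter_sd) for _ in range(n)]
    fas = [trial(False, jitter_sd) for _ in range(n)]
    mh, mf = sum(hits) / n, sum(fas) / n
    vh = sum((h - mh) ** 2 for h in hits) / n
    vf = sum((f - mf) ** 2 for f in fas) / n
    return (mh - mf) / math.sqrt(0.5 * (vh + vf))

d_cued = dprime(0.0)    # template aligned with the target (marker cue)
d_uncued = dprime(3.0)  # template jittered (location uncertainty)
```

Template misplacement both reduces the mean response to the signal and adds response variance, so d_cued reliably exceeds d_uncued, reproducing the qualitative effect of removing the location cue.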
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparisons to shape and texture based query results on a previously gathered real life object dataset. Application areas involve passive defense scenarios, including automatic object detection and tracking with cheap commodity hardware components (CPU, camera and GPS).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ragan, Eric D.; Bowman, Doug A.; Kopper, Regis
Virtual reality training systems are commonly used in a variety of domains, and it is important to understand how the realism of a training simulation influences training effectiveness. The paper presents a framework for evaluating the effects of virtual reality fidelity based on an analysis of a simulation’s display, interaction, and scenario components. Following this framework, we conducted a controlled experiment to test the effects of fidelity on training effectiveness for a visual scanning task. The experiment varied the levels of field of view and visual realism during a training phase and then evaluated scanning performance with the simulator’s highestmore » level of fidelity. To assess scanning performance, we measured target detection and adherence to a prescribed strategy. The results show that both field of view and visual realism significantly affected target detection during training; higher field of view led to better performance and higher visual realism worsened performance. Additionally, the level of visual realism during training significantly affected learning of the prescribed visual scanning strategy, providing evidence that high visual realism was important for learning the technique. The results also demonstrate that task performance during training was not always a sufficient measure of mastery of an instructed technique. That is, if learning a prescribed strategy or skill is the goal of a training exercise, performance in a simulation may not be an appropriate indicator of effectiveness outside of training—evaluation in a more realistic setting may be necessary.« less
Automatic detection of small surface targets with electro-optical sensors in a harbor environment
NASA Astrophysics Data System (ADS)
Bouma, Henri; de Lange, Dirk-Jan J.; van den Broek, Sebastiaan P.; Kemp, Rob A. W.; Schwering, Piet B. W.
2008-10-01
In modern warfare scenarios naval ships must operate in coastal environments. These complex environments, in bays and narrow straits, with cluttered littoral backgrounds and many civilian ships, may contain asymmetric threats of fast targets, such as RHIBs (rigid-hull inflatable boats), cabin boats, and jet skis. Optical sensors, in combination with image enhancement and automatic detection, assist an operator in reducing the response time, which is crucial for the protection of the naval and land-based supporting forces. In this paper, we present our work on the automatic detection of small surface targets, which includes multi-scale horizon detection and robust estimation of the background intensity. To evaluate the performance of our detection technology, data were recorded with both infrared and visual-light cameras in a coastal zone and in a harbor environment. During these trials multiple small targets were used. Results of this evaluation are shown in this paper.
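The abstract does not detail its background-estimation step, but a toy version of robust background estimation for small-target detection illustrates the idea: estimate the local background with a median (robust to the target itself), estimate the local spread with the median absolute deviation (MAD), and flag samples that stand out by more than k spreads. The window size and threshold below are arbitrary choices for the sketch.

```python
def detect_small_targets(row, half_window=5, k=4.0):
    """Flag samples exceeding a robust local background estimate.
    Background = local median; spread = median absolute deviation."""
    def median(v):
        s = sorted(v)
        m = len(s) // 2
        return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

    hits = []
    for i in range(len(row)):
        lo, hi = max(0, i - half_window), min(len(row), i + half_window + 1)
        neigh = row[lo:i] + row[i + 1:hi]   # neighborhood excluding i
        bg = median(neigh)
        mad = median([abs(v - bg) for v in neigh]) + 1e-9
        if (row[i] - bg) / mad > k:
            hits.append(i)
    return hits

# A flat sea background with one bright, target-like sample:
scanline = [10.0] * 40
scanline[17] = 25.0
print(detect_small_targets(scanline))  # → [17]
```

In the 2-D case the same logic runs per row (or in a sliding 2-D window) below the detected horizon line, so that sky clutter never enters the background estimate.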
Visual search and segregation as a function of display complexity.
Scharroo, J; Stalmeier, P F; Boselie, F
1994-01-01
Complexity is proposed as an important psychological factor in search and segregation tasks. Displays were presented with target and nontarget areas that were each built up of one type of randomly rotated micropatterns. We manipulated experimentally (a) the complexity of the target elements, as measured by Garner's (1970) invariance criterion; (b) the complexity of the target region; (c) the complexity of the nontargets; and (d) the number of elements within a target region. The main result is that detectability increases when the within-region complexity of the target and the nontarget regions decreases. Furthermore, interactions between the target and nontarget areas affect detectability too: We found that search asymmetry is produced by the asymmetrical effect of complexity when target and nontarget areas are interchanged.
Effects of feature-selective and spatial attention at different stages of visual processing.
Andersen, Søren K; Fuchs, Sandra; Müller, Matthias M
2011-01-01
We investigated mechanisms of concurrent attentional selection of location and color using electrophysiological measures in human subjects. Two completely overlapping random dot kinematograms (RDKs) of two different colors were presented on either side of a central fixation cross. On each trial, participants attended one of these four RDKs, defined by its specific combination of color and location, in order to detect coherent motion targets. Sustained attentional selection while monitoring for targets was measured by means of steady-state visual evoked potentials (SSVEPs) elicited by the frequency-tagged RDKs. Attentional selection of transient targets and distractors was assessed by behavioral responses and by recording event-related potentials to these stimuli. Spatial attention and attention to color had independent and largely additive effects on the amplitudes of SSVEPs elicited in early visual areas. In contrast, behavioral false alarms and feature-selective modulation of P3 amplitudes to targets and distractors were limited to the attended location. These results suggest that feature-selective attention produces an early, global facilitation of stimuli having the attended feature throughout the visual field, whereas the discrimination of target events takes place at a later stage of processing that is only applied to stimuli at the attended position.
Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.
2011-01-01
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
Attention Modulates Visual-Tactile Interaction in Spatial Pattern Matching
Göschl, Florian; Engel, Andreas K.; Friese, Uwe
2014-01-01
Factors influencing crossmodal interactions are manifold and operate in a stimulus-driven, bottom-up fashion, as well as via top-down control. Here, we evaluate the interplay of stimulus congruence and attention in a visual-tactile task. To this end, we used a matching paradigm requiring the identification of spatial patterns that were concurrently presented visually on a computer screen and haptically to the fingertips by means of a Braille stimulator. Stimulation in our paradigm was always bimodal with only the allocation of attention being manipulated between conditions. In separate blocks of the experiment, participants were instructed to (a) focus on a single modality to detect a specific target pattern, (b) pay attention to both modalities to detect a specific target pattern, or (c) to explicitly evaluate if the patterns in both modalities were congruent or not. For visual as well as tactile targets, congruent stimulus pairs led to quicker and more accurate detection compared to incongruent stimulation. This congruence facilitation effect was more prominent under divided attention. Incongruent stimulation led to behavioral decrements under divided attention as compared to selectively attending a single sensory channel. Additionally, when participants were asked to evaluate congruence explicitly, congruent stimulation was associated with better performance than incongruent stimulation. Our results extend previous findings from audiovisual studies, showing that stimulus congruence also resulted in behavioral improvements in visuotactile pattern matching. The interplay of stimulus processing and attentional control seems to be organized in a highly flexible fashion, with the integration of signals depending on both bottom-up and top-down factors, rather than occurring in an ‘all-or-nothing’ manner. PMID:25203102
Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.
2000-01-01
Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
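A signal-detection-theory accuracy model for M-alternative target localization of the kind compared here can be written as a single integral: under the max rule with unit-variance Gaussian responses, Pc = ∫ φ(x − d′) Φ(x)^(M−1) dx, i.e. the target response must exceed all M−1 distractor responses. The sketch below is a generic numerical implementation of that textbook expression, not the paper's specific three-parameter model.

```python
import math

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def p_correct_localization(d_prime, m, lo=-8.0, hi=8.0, steps=4000):
    """Accuracy of an M-alternative localization task under signal
    detection theory (max rule): the target's internal response must
    exceed the responses at all m-1 distractor locations."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx          # midpoint rule
        total += phi(x - d_prime) * Phi(x) ** (m - 1) * dx
    return total
```

With d′ = 0 the expression collapses to chance (1/M), and accuracy grows monotonically with d′, which is what lets such analytic models be fit directly to localization data without Monte-Carlo simulation.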
T, Sathish Kumar; A, Navaneeth Krishnan; J, Joseph Sahaya Rajan; M, Makesh; K P, Jithendran; S V, Alavandi; K K, Vijayan
2018-05-01
The emerging microsporidian parasite Enterocytozoon hepatopenaei (EHP), the causative agent of hepatopancreatic microsporidiosis, has been widely reported in shrimp-farming countries. EHP infection can be detected by light microscopy observation of spores (1.7 × 1 μm) in stained hepatopancreas (HP) tissue smears, HP tissue sections, and fecal samples. EHP can also be detected by polymerase chain reaction (PCR) targeting the small subunit (SSU) ribosomal RNA (rRNA) gene or the spore wall protein (SWP) gene. In this study, a rapid, sensitive, specific, closed-tube visual loop-mediated isothermal amplification (LAMP) protocol combined with FTA cards was developed for the diagnosis of EHP. LAMP primers were designed based on the SSU rRNA gene of EHP. The target sequence of EHP was amplified at a constant temperature of 65 °C for 45 min, and the amplified LAMP products were visually detected in a closed-tube system using SYBR™ Green I dye. The detection limit of this LAMP protocol was ten copies. Field and clinical applicability of this assay was evaluated using 162 field samples, including 106 HP tissue samples and 56 fecal samples collected from shrimp farms. Out of the 162 samples, EHP could be detected in 62 samples (47 HP samples and 15 fecal samples). When compared with SWP-PCR as the gold standard, this EHP LAMP assay had 95.31% sensitivity, 98.98% specificity, and a kappa value of 0.948. This simple, closed-tube, clinically evaluated visual LAMP assay has great potential for diagnosing EHP at the farm level, particularly under low-resource circumstances.
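The reported figures can be reproduced from a 2×2 confusion matrix against the SWP-PCR gold standard. The counts below (TP=61, FP=1, FN=3, TN=97) are a reconstruction consistent with the reported totals (162 samples, 62 LAMP-positive, 64 PCR-positive); the abstract does not state them explicitly.

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Sensitivity, specificity, and Cohen's kappa from a 2x2 table
    (rows: index test; columns: gold standard)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    po = (tp + tn) / n                       # observed agreement
    pe = ((tp + fp) * (tp + fn) +
          (fn + tn) * (fp + tn)) / n ** 2    # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return sens, spec, kappa

# Reconstructed counts: 61 true positives, 1 false positive,
# 3 false negatives, 97 true negatives (n = 162).
sens, spec, kappa = diagnostic_stats(61, 1, 3, 97)
print(round(100 * sens, 2), round(100 * spec, 2), round(kappa, 3))
# → 95.31 98.98 0.948
```

The three rounded values match the abstract exactly, which supports the reconstruction.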
Temporal allocation of attention toward threat in individuals with posttraumatic stress symptoms.
Amir, Nader; Taylor, Charles T; Bomyea, Jessica A; Badour, Christal L
2009-12-01
Research suggests that individuals with posttraumatic stress disorder (PTSD) selectively attend to threat-relevant information. However, little is known about how initial detection of threat influences the processing of subsequently encountered stimuli. To address this issue, we used a rapid serial visual presentation paradigm (RSVP; Raymond, J. E., Shapiro, K. L., & Arnell, K. M. (1992). Temporary suppression of visual processing in an RSVP task: An attentional blink? Journal of Experimental Psychology: Human Perception and Performance, 18, 849-860) to examine temporal allocation of attention to threat-related and neutral stimuli in individuals with PTSD symptoms (PTS), traumatized individuals without PTSD symptoms (TC), and non-anxious controls (NAC). Participants were asked to identify one or two targets in an RSVP stream. Typically processing of the first target decreases accuracy of identifying the second target as a function of the temporal lag between targets. Results revealed that the PTS group was significantly more accurate in detecting a neutral target when it was presented 300 or 500ms after threat-related stimuli compared to when the target followed neutral stimuli. These results suggest that individuals with PTSD may process trauma-relevant information more rapidly and efficiently than benign information.
Causal Inference for Spatial Constancy across Saccades
Atsma, Jeroen; Maij, Femke; Koppen, Mathieu; Irwin, David E.; Medendorp, W. Pieter
2016-01-01
Our ability to interact with the environment hinges on creating a stable visual world despite the continuous changes in retinal input. To achieve visual stability, the brain must distinguish the retinal image shifts caused by eye movements from shifts due to movements of the visual scene. This process appears not to be flawless: during saccades, we often fail to detect whether visual objects remain stable or move, which is called saccadic suppression of displacement (SSD). How does the brain evaluate the memorized information of the presaccadic scene and the actual visual feedback of the postsaccadic visual scene in the computations for visual stability? Using an SSD task, we tested how participants localize the presaccadic position of the fixation target, the saccade target, or a peripheral non-foveated target that was displaced parallel or orthogonal to the saccade direction during a horizontal saccade and subsequently viewed for three different durations. Results showed different localization errors of the three targets, depending on the viewing time of the postsaccadic stimulus and its spatial separation from the presaccadic location. We modeled the data through a Bayesian causal inference mechanism, in which at the trial level an optimal mixing of two possible strategies, integration vs. separation of the presaccadic memory and the postsaccadic sensory signals, is applied. Fits of this model generally outperformed other plausible decision strategies for producing SSD. Our findings suggest that humans exploit a Bayesian inference process with two causal structures to mediate visual stability. PMID:26967730
A comparison study of visually stimulated brain-computer and eye-tracking interfaces
NASA Astrophysics Data System (ADS)
Suefusa, Kaori; Tanaka, Toshihisa
2017-06-01
Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.
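The information transfer rate used to compare the two interfaces is commonly computed with the Wolpaw formula. The sketch below is a generic implementation of that standard definition (not code from the paper), assuming one selection among N equally likely targets:

```python
import math

def wolpaw_itr(n_targets, accuracy, selection_time_s):
    """Information transfer rate in bits/min under the Wolpaw definition."""
    p, n = accuracy, n_targets
    if p <= 1 / n:          # at or below chance: no information transferred
        return 0.0
    bits = math.log2(n)     # bits per selection at perfect accuracy
    if p < 1.0:
        # Penalty for errors, assuming errors spread evenly over the n-1 wrong targets.
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s
```

For example, four targets selected with perfect accuracy every 2 s yields 2 bits per selection, i.e. 60 bits/min.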
Failures of Perception in the Low-Prevalence Effect: Evidence From Active and Passive Visual Search
Hout, Michael C.; Walenchok, Stephen C.; Goldinger, Stephen D.; Wolfe, Jeremy M.
2017-01-01
In visual search, rare targets are missed disproportionately often. This low-prevalence effect (LPE) is a robust problem with demonstrable societal consequences. What is the source of the LPE? Is it a perceptual bias against rare targets or a later process, such as premature search termination or motor response errors? In 4 experiments, we examined the LPE using standard visual search (with eye tracking) and 2 variants of rapid serial visual presentation (RSVP) in which observers made present/absent decisions after sequences ended. In all experiments, observers looked for 2 target categories (teddy bear and butterfly) simultaneously. To minimize simple motor errors, caused by repetitive absent responses, we held overall target prevalence at 50%, with 1 low-prevalence and 1 high-prevalence target type. Across conditions, observers either searched for targets among other real-world objects or searched for specific bears or butterflies among within-category distractors. We report 4 main results: (a) In standard search, high-prevalence targets were found more quickly and accurately than low-prevalence targets. (b) The LPE persisted in RSVP search, even though observers never terminated search on their own. (c) Eye-tracking analyses showed that high-prevalence targets elicited better attentional guidance and faster perceptual decisions. And (d) even when observers looked directly at low-prevalence targets, they often (12%–34% of trials) failed to detect them. These results strongly argue that low-prevalence misses represent failures of perception when early search termination or motor errors are controlled. PMID:25915073
A Visual Detection Learning Model
NASA Technical Reports Server (NTRS)
Beard, Bettina L.; Ahumada, Albert J., Jr.; Trejo, Leonard (Technical Monitor)
1998-01-01
Our learning model has memory templates representing the target-plus-noise and noise-alone stimulus sets. The best correlating template determines the response. The correlations and the feedback participate in the additive template updating rule. The model can predict the relative thresholds for detection in random, fixed and twin noise.
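The template-matching and additive-updating scheme described here can be sketched in a few lines. Everything below (stimulus size, learning rate, signal profile, trial count) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                          # pixels per stimulus (assumed)
signal = rng.normal(0, 1, n)    # hypothetical fixed target profile

# Memory templates for the target-plus-noise and noise-alone stimulus sets.
templates = {"target": np.zeros(n), "noise": np.zeros(n)}
lr = 0.1                        # learning rate for the additive update (assumed)

def respond(stimulus):
    # The best-correlating template determines the response.
    return max(templates, key=lambda k: float(np.dot(templates[k], stimulus)))

correct, trials = 0, 2000
for _ in range(trials):
    truth = "target" if rng.random() < 0.5 else "noise"
    stim = rng.normal(0, 1, n) + (signal if truth == "target" else 0)
    if respond(stim) == truth:
        correct += 1
    # Additive rule: feedback moves the true class's template toward the stimulus.
    templates[truth] += lr * (stim - templates[truth])

print(correct / trials)
```

As the target template converges toward the signal profile, detection accuracy climbs above the 50% chance level.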
Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection
Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun
2016-01-01
Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. 
The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated by OKTAL-SE. PMID:27447635
Comparing visual search and eye movements in bilinguals and monolinguals
Hout, Michael C.; Walenchok, Stephen C.; Azuma, Tamiko; Goldinger, Stephen D.
2017-01-01
Recent research has suggested that bilinguals show advantages over monolinguals in visual search tasks, although these findings have been derived from global behavioral measures of accuracy and response times. In the present study we sought to explore the bilingual advantage by using more sensitive eyetracking techniques across three visual search experiments. These spatially and temporally fine-grained measures allowed us to carefully investigate any nuanced attentional differences between bilinguals and monolinguals. Bilingual and monolingual participants completed visual search tasks that varied in difficulty. The experiments required participants to make careful discriminations in order to detect target Landolt Cs among similar distractors. In Experiment 1, participants performed both feature and conjunction search. In Experiments 2 and 3, participants performed visual search while making different types of speeded discriminations, after either locating the target or mentally updating a constantly changing target. The results across all experiments revealed that bilinguals and monolinguals were equally efficient at guiding attention and generating responses. These findings suggest that the bilingual advantage does not reflect a general benefit in attentional guidance, but could reflect more efficient guidance only under specific task demands. PMID:28508116
Working memory enhances visual perception: evidence from signal detection analysis.
Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W
2010-03-01
We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be re-presented in the display, where it surrounded either the target (on valid trials) or a distractor (on invalid trials). Perceptual identification of the target, as indexed by A', was enhanced on valid relative to invalid trials but only when the cue was kept in WM. There was minimal effect of the cue when it was merely attended and not kept in WM. Verbal cues were as effective as visual cues at modulating perceptual identification, and the effects were independent of the effects of target saliency. Matches to the contents of WM influenced perceptual sensitivity even under conditions that minimized competition for selecting the target. WM cues were also effective when targets were less likely to fall in a repeated WM stimulus than in other stimuli in the search display. There were no effects of WM on decisional criteria, in contrast to sensitivity. The findings suggest that reentrant feedback from WM can affect early stages of perceptual processing.
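The A' index used to measure perceptual sensitivity here has a standard nonparametric form (Grier, 1971). A minimal sketch computing it from hit and false-alarm rates:

```python
def a_prime(hit_rate, fa_rate):
    """Nonparametric sensitivity A' (Grier, 1971)."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    # Symmetric form for below-chance performance.
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))
```

A' is 0.5 at chance (hit rate equal to false-alarm rate) and approaches 1.0 as discrimination improves, so "enhanced A' on valid trials" means better perceptual identification independent of response bias.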
Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J
2011-10-01
Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.
Contextual remapping in visual search after predictable target-location changes.
Conci, Markus; Sun, Luning; Müller, Hermann J
2011-07-01
Invariant spatial context can facilitate visual search. For instance, detection of a target is faster if it is presented within a repeatedly encountered, as compared to a novel, layout of nontargets, demonstrating a role of contextual learning for attentional guidance ('contextual cueing'). Here, we investigated how context-based learning adapts to target location (and identity) changes. Three experiments were performed in which, in an initial learning phase, observers learned to associate a given context with a given target location. A subsequent test phase then introduced identity and/or location changes to the target. The results showed that contextual cueing could not compensate for target changes that were not 'predictable' (i.e. learnable). However, for predictable changes, contextual cueing remained effective even immediately after the change. These findings demonstrate that contextual cueing is adaptive to predictable target location changes. Under these conditions, learned contextual associations can be effectively 'remapped' to accommodate new task requirements.
Visalli, Antonino; Vallesi, Antonino
2018-01-01
Visual search tasks have often been used to investigate how cognitive processes change with expertise. Several studies have shown visual experts' advantages in detecting objects related to their expertise. Here, we tried to extend these findings by investigating whether professional search experience could boost top-down monitoring processes involved in visual search, independently of advantages specific to objects of expertise. To this aim, we recruited a group of quality-control workers employed in citrus farms. Given the specific features of this type of job, we expected that the extensive employment of monitoring mechanisms during orange selection could enhance these mechanisms even in search situations in which orange-related expertise is not suitable. To test this hypothesis, we compared performance of our experimental group and of a well-matched control group on a computerized visual search task. In one block the target was an orange (expertise target) while in the other block the target was a Smurfette doll (neutral target). The a priori hypothesis was to find an advantage for quality-controllers in those situations in which monitoring was especially involved, that is, when deciding the presence/absence of the target required a more extensive inspection of the search array. Results were consistent with our hypothesis. Quality-controllers were faster in those conditions that extensively required monitoring processes, specifically, the Smurfette-present and both target-absent conditions. No differences emerged in the orange-present condition, which turned out to rely mainly on bottom-up processes. These results suggest that top-down processes in visual search can be enhanced through immersive real-life experience beyond visual expertise advantages. PMID:29497392
Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue
2009-06-15
Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.
Visual search in Dementia with Lewy Bodies and Alzheimer's disease.
Landy, Kelly M; Salmon, David P; Filoteo, J Vincent; Heindel, William C; Galasko, Douglas; Hamilton, Joanne M
2015-12-01
Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer's disease (AD). To assess this possibility, the present study compared patients with DLB (n = 17), AD (n = 30), or Parkinson's disease with dementia (PDD; n = 10) to non-demented patients with PD (n = 18) and normal control (NC) participants (n = 13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target's salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., "pop-out" effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search "pop-out" effect is preserved in DLB and AD patients, whereas the ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex.
Age-Related Changes in the Ability to Switch between Temporal and Spatial Attention
Callaghan, Eleanor; Holland, Carol; Kessler, Klaus
2017-01-01
Background: Identifying age-related changes in cognition that contribute towards reduced driving performance is important for the development of interventions to improve older adults’ driving and prolong the time that they can continue to drive. While driving, one is often required to switch from attending to events changing in time, to distribute attention spatially. Although there is extensive research into both spatial attention and temporal attention and how these change with age, the literature on switching between these modalities of attention is limited within any age group. Methods: Age groups (21–30, 40–49, 50–59, 60–69 and 70+ years) were compared on their ability to switch between detecting a target in a rapid serial visual presentation (RSVP) stream and detecting a target in a visual search display. To manipulate the cost of switching, the target in the RSVP stream was either the first item in the stream (Target 1st), towards the end of the stream (Target Mid), or absent from the stream (Distractor Only). Visual search response times and accuracy were recorded. Target 1st trials behaved as no-switch trials, as attending to the remaining stream was not necessary. Target Mid and Distractor Only trials behaved as switch trials, as attending to the stream to the end was required. Results: Visual search response times (RTs) were longer on “Target Mid” and “Distractor Only” trials in comparison to “Target 1st” trials, reflecting switch-costs. Larger switch-costs were found in both the 40–49 and 60–69 years group in comparison to the 21–30 years group when switching from the Target Mid condition. Discussion: Findings warrant further exploration as to whether there are age-related changes in the ability to switch between these modalities of attention while driving. 
If older adults display poor performance when switching between temporal and spatial attention while driving, then the development of an intervention to preserve and improve this ability would be beneficial. PMID:28261088
Functional modular architecture underlying attentional control in aging.
Monge, Zachary A; Geib, Benjamin R; Siciliano, Rachel E; Packard, Lauren E; Tallman, Catherine W; Madden, David J
2017-07-15
Previous research suggests that age-related differences in attention reflect the interaction of top-down and bottom-up processes, but the cognitive and neural mechanisms underlying this interaction remain an active area of research. Here, within a sample of community-dwelling adults 19-78 years of age, we used diffusion reaction time (RT) modeling and multivariate functional connectivity to investigate the behavioral components and whole-brain functional networks, respectively, underlying bottom-up and top-down attentional processes during conjunction visual search. During functional MRI scanning, participants completed a conjunction visual search task in which each display contained one item that was larger than the other items (i.e., a size singleton) but was not informative regarding target identity. This design allowed us to examine in the RT components and functional network measures the influence of (a) additional bottom-up guidance when the target served as the size singleton, relative to when the distractor served as the size singleton (i.e., size singleton effect) and (b) top-down processes during target detection (i.e., target detection effect; target present vs. absent trials). We found that the size singleton effect (i.e., increased bottom-up guidance) was associated with RT components related to decision and nondecision processes, but these effects did not vary with age. Also, a modularity analysis revealed that frontoparietal module connectivity was important for both the size singleton and target detection effects, but this module became central to the networks through different mechanisms for each effect. Lastly, participants 42 years of age and older, in service of the target detection effect, relied more on between-frontoparietal module connections. Our results further elucidate mechanisms through which frontoparietal regions support attentional control and how these mechanisms vary in relation to adult age.
Improving resolution of dynamic communities in human brain networks through targeted node removal
Turner, Benjamin O.; Miller, Michael B.; Carlson, Jean M.
2017-01-01
Current approaches to dynamic community detection in complex networks can fail to identify multi-scale community structure, or to resolve key features of community dynamics. We propose a targeted node removal technique to improve the resolution of community detection. Using synthetic oscillator networks with well-defined “ground truth” communities, we quantify the community detection performance of a common modularity maximization algorithm. We show that the performance of the algorithm on communities of a given size deteriorates when these communities are embedded in multi-scale networks with communities of different sizes, compared to the performance in a single-scale network. We demonstrate that targeted node removal during community detection improves performance on multi-scale networks, particularly when removing the most functionally cohesive nodes. Applying this approach to network neuroscience, we compare dynamic functional brain networks derived from fMRI data taken during both repetitive single-task and varied multi-task experiments. After the removal of regions in visual cortex, the most coherent functional brain area during the tasks, community detection is better able to resolve known functional brain systems into communities. In addition, node removal enables the algorithm to distinguish clear differences in brain network dynamics between these experiments, revealing task-switching behavior that was not identified with the visual regions present in the network. These results indicate that targeted node removal can improve spatial and temporal resolution in community detection, and they demonstrate a promising approach for comparison of network dynamics between neuroscientific data sets with different resolution parameters. PMID:29261662
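The core idea, removing the most connected (functionally cohesive) node and then re-running community detection, can be illustrated on a toy graph. The sketch below uses spectral bisection and Newman modularity as stand-ins for the paper's modularity-maximization algorithm; the graph and all parameters are hypothetical:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a partition given an adjacency matrix."""
    m = A.sum() / 2
    k = A.sum(axis=1)
    Q = 0.0
    for i in range(len(A)):
        for j in range(len(A)):
            if labels[i] == labels[j]:
                Q += A[i, j] - k[i] * k[j] / (2 * m)
    return Q / (2 * m)

def spectral_bisect(A):
    """Two-way split by the sign of the Fiedler vector of the graph Laplacian."""
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

# Two 5-node cliques joined by one bridge edge, plus a hub tied to every node.
n = 11
A = np.zeros((n, n))
A[:5, :5] = 1
A[5:10, 5:10] = 1
A[4, 5] = A[5, 4] = 1            # bridge between the cliques
A[10, :10] = A[:10, 10] = 1      # hub obscures the two-community structure
np.fill_diagonal(A, 0)

q_full = modularity(A, spectral_bisect(A))
hub = int(A.sum(axis=1).argmax())                  # most-connected node
B = np.delete(np.delete(A, hub, 0), hub, 1)        # targeted node removal
q_removed = modularity(B, spectral_bisect(B))
print(q_full, q_removed)
```

Removing the hub lets the bisection recover the two cliques with a noticeably higher modularity than on the full graph, mirroring the improved community resolution reported after removing visual-cortex regions.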
Marchant, Jennifer L; Ruff, Christian C; Driver, Jon
2012-01-01
The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431–11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431–11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex. PMID:21953980
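The behavioral sensitivity measure d′ used above has a standard signal-detection definition, the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf   # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

Higher d′ on synchronous trials, as reported here, means observers discriminated intensity targets from non-targets better when the auditory and visual trains were in synchrony, independent of response bias.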
Small maritime target detection through false color fusion
NASA Astrophysics Data System (ADS)
Toet, Alexander; Wu, Tirui
2008-04-01
We present an algorithm that produces a fused false color representation of a combined multiband IR and visual imaging system for maritime applications. Multispectral IR imaging techniques are increasingly deployed in maritime operations, to detect floating mines or to find small dinghies and swimmers during search and rescue operations. However, maritime backgrounds usually contain a large amount of clutter that severely hampers the detection of small targets. Our new algorithm deploys the correlation between the target signatures in two different IR frequency bands (3-5 and 8-12 μm) to construct a fused IR image with a reduced amount of clutter. The fused IR image is then combined with a visual image in a false color RGB representation for display to a human operator. The algorithm works as follows. First, both individual IR bands are filtered with a morphological opening top-hat transform to extract small details. Second, a common image is extracted from the two filtered IR bands, and assigned to the red channel of an RGB image. Regions of interest that appear in both IR bands remain in this common image, while most uncorrelated noise details are filtered out. Third, the visual band is assigned to the green channel and, after multiplication with a constant (typically 1.6) also to the blue channel. Fourth, the brightness and colors of this intermediate false color image are renormalized by adjusting its first order statistics to those of a representative reference scene. The result of these four steps is a fused color image, with naturalistic colors (bluish sky and grayish water), in which small targets are clearly visible.
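The four steps can be sketched in plain NumPy. Note that the pixel-wise minimum for the "common image", the 3×3 structuring element, and the reference means/standard deviations are assumptions for illustration; the paper only specifies the overall pipeline:

```python
import numpy as np

def tophat(img, r=1):
    """Morphological opening top-hat with a (2r+1)x(2r+1) square element."""
    def shifted(a, op):
        p = np.pad(a, r, mode="edge")
        wins = [p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
                for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
        return op(wins, axis=0)
    opening = shifted(shifted(img, np.min), np.max)  # erosion then dilation
    return img - opening                             # keeps only small details

def fuse(ir_mw, ir_lw, vis, ref_mean=(0.45, 0.55, 0.60), ref_std=(0.10, 0.10, 0.10)):
    # 1) Extract small details from each IR band (3-5 and 8-12 um).
    d_mw, d_lw = tophat(ir_mw), tophat(ir_lw)
    # 2) Common image: details present in BOTH bands survive the minimum,
    #    uncorrelated single-band clutter is suppressed.  -> red channel
    red = np.minimum(d_mw, d_lw)
    # 3) Visual band -> green; scaled visual band (constant 1.6) -> blue.
    green = vis
    blue = np.clip(1.6 * vis, 0.0, 1.0)
    rgb = np.stack([red, green, blue], axis=-1).astype(float)
    # 4) Match first-order statistics (mean, std) to a reference scene.
    for c in range(3):
        ch = rgb[..., c]
        rgb[..., c] = (ch - ch.mean()) / (ch.std() + 1e-9) * ref_std[c] + ref_mean[c]
    return np.clip(rgb, 0.0, 1.0)
```

On synthetic inputs, a small target present in both IR bands survives into the red channel while clutter present in only one band is suppressed, which is the clutter-reduction effect the correlation step is designed to produce.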
Sub-surface defect detection by using active thermography and advanced image edge detection
NASA Astrophysics Data System (ADS)
Tse, Peter W.; Wang, Gaochao
2017-05-01
Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomaly of industrial equipment. One recent research trend in active thermography is to automate the detection of hidden defects. To date, human effort is still required to adjust the temperature intensity of the thermo-camera in order to visually observe the difference in cooling rates caused by a normal target as compared to that caused by a sub-surface crack inside the target. To avoid tedious human-visual inspection and minimize human-induced error, this paper reports the design of an automatic method capable of detecting sub-surface defects. The method combines active thermography, edge detection from machine vision, and a smart reconstruction algorithm. An infrared thermo-camera was used to capture a series of temporal pictures after slightly heating the inspected target with flash lamps. The Canny edge detector was then employed to automatically extract defect-related images from the captured pictures, and a smart algorithm was used to reconstruct the whole sequence of image signals. During these processes, noise and irrelevant backgrounds in the pictures were removed, and consequently the contrast of the edges of defective areas was highlighted. The designed automatic method was verified on real pipe specimens containing sub-surface cracks. With this method, the edges of cracks can be revealed visually without manual adjustment of the thermo-camera settings, avoiding the tedious process of manually adjusting the colour contrast and pixel intensity to reveal defects.
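A minimal sketch of the edge-extraction idea, assuming nothing beyond the abstract: a Sobel gradient-magnitude threshold stands in for the full Canny detector (which adds Gaussian smoothing, non-maximum suppression and hysteresis), and edges are accumulated over the temporal stack so that transient noise edges are suppressed. The threshold ratio and voting rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy import ndimage

def edge_map(frame, thresh_ratio=0.5):
    """Simplified stand-in for the Canny step: Sobel gradient magnitude
    followed by a single threshold (thresh_ratio is an assumed parameter)."""
    gx = ndimage.sobel(frame.astype(float), axis=1)
    gy = ndimage.sobel(frame.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    return mag > thresh_ratio * mag.max()

def defect_edges(frames, thresh_ratio=0.5):
    """Accumulate edges over a temporal stack of thermograms: only pixels
    that show an edge in most frames of the cooling sequence are kept,
    which suppresses transient noise edges."""
    stack = np.stack([edge_map(f, thresh_ratio) for f in frames])
    return stack.mean(axis=0) > 0.5  # edge present in a majority of frames
```

In practice one would use a real Canny implementation (e.g., OpenCV's) per frame; the temporal majority vote is the part that exploits the pulsed-thermography image sequence.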
Peltier, Chad; Becker, Mark W
2017-05-01
Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which was specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarm rates in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and of knowledge gained during visual search to influence guessing rates. We manipulated target prevalence and the amount of information that an observer accumulates about a search display prior to making a response, to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
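The "educated guess" idea can be made concrete with a small Bayesian sketch. This is our idealization, not the authors' model: it assumes a present target is equally likely to be in any location and is always detected once its location is inspected.

```python
def posterior_present(prevalence, fraction_inspected):
    """P(target present | not found after inspecting a fraction of the array).

    prevalence          -- prior probability a target is in the display
    fraction_inspected  -- proportion of items already searched without success
    """
    # If a target is present, the chance it was missed so far equals the
    # fraction of the display not yet inspected (idealized perfect detection).
    p_miss_given_present = 1.0 - fraction_inspected
    num = prevalence * p_miss_given_present
    return num / (num + (1.0 - prevalence))
```

Both effects reported above fall out of the formula: raising prevalence raises the optimal "present" guess rate after an unsuccessful search, and inspecting more of the array before responding lowers it.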
Distractor-Induced Blindness: A Special Case of Contingent Attentional Capture?
Winther, Gesche N.; Niedeggen, Michael
2017-01-01
The detection of a salient visual target embedded in a rapid serial visual presentation (RSVP) can be severely affected if target-like distractors are presented previously. This phenomenon, known as distractor-induced blindness (DIB), shares the prerequisites of contingent attentional capture (Folk, Remington, & Johnston, 1992). In both, target processing is transiently impaired by the presentation of distractors defined by similar features. In the present study, we investigated whether the speeded response to a target in the DIB paradigm can be described in terms of a contingent attentional capture process. In the first experiments, multiple distractors were embedded in the RSVP stream. Distractors either shared the target’s visual features (Experiment 1A) or differed from them (Experiment 1B). Congruent with hypotheses drawn from contingent attentional capture theory, response times (RTs) were exclusively impaired in conditions with target-like distractors. However, RTs were not impaired if only one single target-like distractor was presented (Experiment 2). If attentional capture directly contributed to DIB, the single distractor should be sufficient to impair target processing. In conclusion, DIB is not due to contingent attentional capture, but may rely on a central suppression process triggered by multiple distractors. PMID:28439320
B-spline based image tracking by detection
NASA Astrophysics Data System (ADS)
Balaji, Bhashyam; Sithiravel, Rajiv; Damini, Anthony; Kirubarajan, Thiagalingam; Rajan, Sreeraman
2016-05-01
Visual image tracking involves the estimation of the motion of any desired targets in a surveillance region using a sequence of images. A standard method of isolating moving targets in image tracking uses background subtraction. The standard background subtraction method is often impacted by irrelevant information in the images, which can lead to poor performance in image-based target tracking. In this paper, a B-spline based image tracking method is implemented. The novel method models the background and foreground using B-splines, followed by a tracking-by-detection algorithm. The effectiveness of the proposed algorithm is demonstrated.
Psychophysical Criteria for Visual Simulation Systems.
1980-05-01
definitive data were found to establish detection thresholds; therefore, this is one area where a psychophysical study was recommended. Differential size...The specific functional relationships needing quantification were the following: 1. The effect of Horizontal Aniseikonia on Target Detection and...Transition Technique 6. The Effects of Scene Complexity and Separation on the Detection of Scene Misalignment 7. Absolute Brightness Levels in
Corollary discharge contributes to perceived eye location in monkeys.
Joiner, Wilsaan M; Cavanaugh, James; FitzGibbon, Edmond J; Wurtz, Robert H
2013-11-01
Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.
Long-Term Memory Biases Auditory Spatial Attention
ERIC Educational Resources Information Center
Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude
2017-01-01
Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…
Goschy, Harriet; Bakos, Sarolta; Müller, Hermann J; Zehetleitner, Michael
2014-01-01
Targets in a visual search task are detected faster if they appear in a probable target region as compared to a less probable target region, an effect which has been termed "probability cueing." The present study investigated whether probability cueing cannot only speed up target detection, but also minimize distraction by distractors in probable distractor regions as compared to distractors in less probable distractor regions. To this end, three visual search experiments with a salient, but task-irrelevant, distractor ("additional singleton") were conducted. Experiment 1 demonstrated that observers can utilize uneven spatial distractor distributions to selectively reduce interference by distractors in frequent distractor regions as compared to distractors in rare distractor regions. Experiments 2 and 3 showed that intertrial facilitation, i.e., distractor position repetitions, and statistical learning (independent of distractor position repetitions) both contribute to the probability cueing effect for distractor locations. Taken together, the present results demonstrate that probability cueing of distractor locations has the potential to serve as a strong attentional cue for the shielding of likely distractor locations.
Electrophysiological correlates of target eccentricity in texture segmentation.
Schaffer, Susann; Schubö, Anna; Meinecke, Cristina
2011-06-01
Event-related potentials and behavioural performance as a function of target eccentricity were measured while subjects performed a texture segmentation task. Fit-of-structures, i.e. the ease of target detection, was varied: Experiment 1 used a texture with peripheral fit (easier detection of peripherally presented targets) and Experiment 2 a texture with foveal fit (easier detection of foveally presented targets). In both experiments, the N2p was sensitive to target eccentricity, showing larger amplitudes for foveal targets than for peripheral targets, and at the foveal position a reversal of the N2p differential amplitude effect was found. The anterior P2 seemed sensitive to the ease of target detection. In both experiments the N2pc varied as a function of eccentricity. However, the P3 was sensitive neither to target eccentricity nor to the fit-of-structures. The results show the existence of a P2/N2 complex (Potts and Tucker, 2001), indicating executive functions located in the anterior cortex and perceptual processes located in the posterior cortex. Furthermore, the N2p might indicate the existence of a foveal vs. peripheral subsystem in visual processing. 2011 Elsevier B.V. All rights reserved.
Koda, Hiroki; Sato, Anna; Kato, Akemi
2013-09-01
Humans innately perceive infantile features as cute. The ethologist Konrad Lorenz proposed that the infantile features of mammals and birds, known as the baby schema (kindchenschema), motivate caretaking behaviour. As biologically relevant stimuli, newborns are likely to be processed specially in terms of visual attention, perception, and cognition. Recent demonstrations on human participants have shown visual attentional prioritisation of newborn faces (i.e., newborn faces capture visual attention). Although characteristics equivalent to those found in the faces of human infants are found in nonhuman primates, attentional capture by newborn faces has not been tested in nonhuman primates. We examined whether conspecific newborn faces captured the visual attention of two Japanese monkeys, using a target-detection task based on the dot-probe tasks commonly used in human visual attention studies. Although visual cues enhanced target detection in the subject monkeys, our results, unlike those for humans, showed no evidence of attentional prioritisation of newborn faces by monkeys. Our demonstrations show the validity of the dot-probe task for visual attention studies in monkeys and propose a novel approach to bridge the gap between human and nonhuman primate social cognition research. They suggest that attentional capture by newborn faces is not common to macaques, although it remains unclear whether nursing experience influences the perception and recognition of infantile stimuli. Additional comparative studies are needed to reveal the evolutionary origins of baby-schema perception and recognition. Copyright © 2013 Elsevier B.V. All rights reserved.
Low target prevalence is a stubborn source of errors in visual search tasks
Wolfe, Jeremy M.; Horowitz, Todd S.; Van Wert, Michael J.; Kenner, Naomi M.; Place, Skyler S.; Kibbi, Nour
2009-01-01
In visual search tasks, observers look for targets in displays containing distractors. Likelihood that targets will be missed varies with target prevalence, the frequency with which targets are presented across trials. Miss error rates are much higher at low target prevalence (1–2%) than at high prevalence (50%). Unfortunately, low prevalence is characteristic of important search tasks like airport security and medical screening where miss errors are dangerous. A series of experiments show this prevalence effect is very robust. In signal detection terms, the prevalence effect can be explained as a criterion shift and not a change in sensitivity. Several efforts to induce observers to adopt a better criterion fail. However, a regime of brief retraining periods with high prevalence and full feedback allows observers to hold a good criterion during periods of low prevalence with no feedback. PMID:17999575
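The signal detection account can be illustrated with the standard equal-variance formulas for sensitivity (d') and criterion (c); a minimal sketch, with illustrative hit and false alarm rates rather than the study's data:

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Equal-variance signal detection indices: sensitivity d' and
    criterion c. A criterion shift (the paper's account of the prevalence
    effect) moves c while sensitivity d' stays essentially constant."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c
```

A neutral observer (e.g., hits 0.84, false alarms 0.16) has c near 0; shifting to a conservative criterion lowers hits and false alarms together, producing a positive c with comparable d', which is the pattern of more misses and fewer false alarms reported at low prevalence.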
Attentional enhancement during multiple-object tracking.
Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K
2009-04-01
What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.
Madden, David J.; Parks, Emily L.; Tallman, Catherine W.; Boylan, Maria A.; Hoagey, David A.; Cocjin, Sally B.; Johnson, Micah A.; Chou, Ying-hui; Potter, Guy G.; Chen, Nan-kuei; Packard, Lauren E.; Siciliano, Rachel E.; Monge, Zachary A.; Diaz, Michele T.
2016-01-01
We conducted functional magnetic resonance imaging (fMRI) with a visual search paradigm to test the hypothesis that aging is associated with increased frontoparietal involvement in both target detection and bottom-up attentional guidance (featural salience). Participants were 68 healthy adults, distributed continuously across 19-78 years of age. Frontoparietal regions of interest (ROIs) were defined from resting-state scans obtained prior to task-related fMRI. The search target was defined by a conjunction of color and orientation. Each display contained one item that was larger than the others (i.e., a size singleton) but was not informative regarding target identity. Analyses of search reaction time (RT) indicated that bottom-up attentional guidance from the size singleton (when coincident with the target) was relatively constant as a function of age. Frontoparietal fMRI activation related to target detection was constant as a function of age, as was the reduction in activation associated with salient targets. However, for individuals 35 years of age and older, engagement of the left frontal eye field (FEF) in bottom-up guidance was more prominent than for younger individuals. Further, the age-related differences in left FEF activation were a consequence of decreasing resting-state functional connectivity in visual sensory regions. These findings indicate that age-related compensatory effects may be expressed in the relation between activation and behavior, rather than in the magnitude of activation, and that relevant changes in the activation-RT relation may begin at a relatively early point in adulthood. PMID:28052456
Detection and identification of human targets in radar data
NASA Astrophysics Data System (ADS)
Gürbüz, Sevgi Z.; Melvin, William L.; Williams, Douglas B.
2007-04-01
Radar offers unique advantages over other sensors, such as visual or seismic sensors, for human target detection. Many situations, especially military applications, prevent the placement of video cameras or the implanting of seismic sensors in the area being observed because of security or other threats. However, radar can operate far away from potential targets, and functions during daytime as well as nighttime, in virtually all weather conditions. In this paper, we examine the problem of human target detection and identification using single-channel, airborne, synthetic aperture radar (SAR). Human targets are differentiated from other detected slow-moving targets by analyzing the spectrogram of each potential target. Human spectrograms are unique, and can be used not just to identify targets as human, but also to determine features about the human target being observed, such as size, gender, action, and speed. A 12-point human model, together with kinematic equations of motion for each body part, is used to calculate the expected target return and spectrogram. A MATLAB simulation environment is developed including ground clutter, human and non-human targets for the testing of spectrogram-based detection and identification algorithms. Simulations show that spectrograms have some ability to detect and identify human targets in low noise. An example gender discrimination system correctly detected 83.97% of males and 91.11% of females. The problems and limitations of spectrogram-based methods in high clutter environments are discussed. The SNR loss inherent to spectrogram-based methods is quantified. An alternate detection and identification method that will be used as a basis for future work is proposed.
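The spectrogram signature described above can be illustrated with a toy micro-Doppler simulation. All parameters here are assumptions for illustration; the paper uses a 12-point kinematic model, not this two-component signal.

```python
import numpy as np
from scipy import signal

# Toy human return: a torso at a constant 60 Hz Doppler shift plus a limb
# whose Doppler oscillates with the gait cycle (the sinusoidal micro-Doppler
# stripe that makes human spectrograms distinctive).
fs = 1000.0                      # pulse repetition frequency, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
torso = np.exp(1j * 2 * np.pi * 60 * t)
beta = 40 / 1.5                  # phase index giving a +/-40 Hz excursion
limb = 0.5 * np.exp(1j * (2 * np.pi * 60 * t
                          + beta * np.sin(2 * np.pi * 1.5 * t)))  # 1.5 Hz gait
# Two-sided spectrogram, since the baseband radar return is complex-valued.
f, tt, Sxx = signal.spectrogram(torso + limb, fs=fs, nperseg=128,
                                noverlap=96, return_onesided=False)
```

Plotting `10 * np.log10(Sxx)` against `tt` and `f` shows the constant torso ridge with the limb's sinusoidal signature wrapped around it; classification features (stride rate, Doppler spread) are read off exactly this kind of image.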
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
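The slope/intercept analysis of the search functions reported above can be sketched as a simple linear fit of reaction time against display size. The numbers below are illustrative only, not the study's data.

```python
import numpy as np

def search_function(set_sizes, rts):
    """Fit RT = intercept + slope * set_size. The slope indexes per-item
    ('serial') search cost; the intercept captures set-size-independent
    stages, the component the study found elevated in the dyslexic group."""
    slope, intercept = np.polyfit(set_sizes, rts, 1)
    return slope, intercept

# Hypothetical mean RTs (ms) for displays of a target plus 3, 9, or 15
# distracters: same per-item slope, but a higher intercept in one group.
sizes = np.array([4.0, 10.0, 16.0])
control_rt = np.array([620.0, 770.0, 920.0])
dyslexic_rt = np.array([820.0, 970.0, 1120.0])
```

With numbers like these, both groups show a 25 ms/item slope while the intercepts differ, mirroring the abstract's finding that the groups differed in intercepts but not slopes.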
Detection of Subpixel Submerged Mine-Like Targets in WorldView-2 Multispectral Imagery
2012-09-01
and painted black, blue and green. The dot seen in the image by target three was a zodiac and it was only in the 21 March data set. WorldView-2...region of interest (ROI) was created using band one of the covariance PCA image. The targets, buoy, and the zodiac were all considered targets. N...targets. Pixels that represented the zodiac were not segregated and found all over the visualization. For this reason, this process was followed by
Attention modulates perception of visual space
Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.
2017-01-01
Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198
NASA Astrophysics Data System (ADS)
An, Yun-Kyu; Song, Homin; Sohn, Hoon
2014-09-01
This paper presents a wireless ultrasonic wavefield imaging (WUWI) technique for detecting hidden damage inside a steel box girder bridge. The proposed technique allows (1) complete wireless excitation of piezoelectric transducers and noncontact sensing of the corresponding responses using laser beams, (2) autonomous damage visualization without comparing against baseline data previously accumulated from the pristine condition of a target structure and (3) robust damage diagnosis even for real structures with complex structural geometries. First, a new WUWI hardware system was developed by integrating optoelectronic-based signal transmitting and receiving devices and a scanning laser Doppler vibrometer. Next, a damage visualization algorithm, self-referencing f-k filter (SRF), was introduced to isolate and visualize only crack-induced ultrasonic modes from measured ultrasonic wavefield images. Finally, the performance of the proposed technique was validated through hidden crack visualization at a decommissioned Ramp-G Bridge in South Korea. The experimental results reveal that the proposed technique instantaneously detects and successfully visualizes hidden cracks even in the complex structure of a real bridge.
Singh, Monika; Bhoge, Rajesh K; Randhawa, Gurinderjit
2018-04-20
Background: Confirming the integrity of seed samples in powdered form is important prior to conducting a genetically modified organism (GMO) test. Rapid onsite methods may provide a technological solution to check for genetically modified (GM) events at ports of entry. In India, Bt cotton is the commercialized GM crop with four approved GM events; however, 59 GM events have been approved globally. GMO screening is required to test for authorized GM events. The identity and amplifiability of test samples could be ensured first by employing endogenous genes as an internal control. Objective: A rapid onsite detection method was developed for an endogenous reference gene, stearoyl acyl carrier protein desaturase (Sad1) of cotton, employing visual and real-time loop-mediated isothermal amplification (LAMP). Methods: The assays were performed at a constant temperature of 63°C for 30 min for visual LAMP and 62°C for 40 min for real-time LAMP. Positive amplification was visualized as a change in color from orange to green on addition of SYBR® Green or detected as real-time amplification curves. Results: Specificity of the LAMP assays was confirmed using a set of 10 samples. The LOD for visual LAMP was up to 0.1%, detecting 40 target copies, and for real-time LAMP up to 0.05%, detecting 20 target copies. Conclusions: The developed methods could be utilized to confirm the integrity of seed powder prior to conducting a GMO test for specific GM events of cotton. Highlights: LAMP assays for the endogenous Sad1 gene of cotton have been developed to be used as an internal control for onsite GMO testing in cotton.
Effects of VDT workstation lighting conditions on operator visual workload.
Lin, Chiuhsiang Joe; Feng, Wen-Yang; Chao, Chin-Jung; Tseng, Feng-Yi
2008-04-01
Industrial lighting covers a wide range of different characteristics of working interiors and work tasks. This study investigated the effects of illumination on visual workload at a visual display terminal (VDT) workstation. Ten college students (5 males and 5 females) were recruited as participants to perform VDT signal detection tasks. A randomized block design was utilized with four light colors (red, blue, green and white) and two ambient illumination levels (20 lux and 340 lux), with the subject as the block. The dependent variables were the change in critical fusion frequency (CFF), visual acuity, reaction time of target detection, error rates, and rating scores on a subjective questionnaire. The results showed that both visual acuity and subjective visual fatigue were significantly affected by the color of light, while illumination had a significant effect on CFF threshold change and reaction time. Subjects preferred to perform the VDT task under blue and white light rather than green and red. Based on these findings, the study discusses and suggests ways of arranging color lighting and ambient illumination to promote operators' visual performance and prevent visual fatigue effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jung, Kyung oh; Biomedical Sciences, Seoul National University College of Medicine; Cancer Research Institute, Seoul National University College of Medicine
Despite an increasing need for methods to visualize intracellular proteins in vivo, the majority of antibody-based imaging methods available can only detect membrane proteins. The human telomerase reverse transcriptase (hTERT) is an intracellular target of great interest because of its high expression in several types of cancer. In this study, we developed a new probe for hTERT using the Tat peptide. An hTERT antibody (IgG or IgM) was conjugated with the Tat peptide, a fluorescence dye and ⁶⁴Cu. HT29 (hTERT+) and U2OS (hTERT−) cells were used to visualize intracellular hTERT. hTERT was detected by RT-PCR and western blot. Fluorescence signals for hTERT were obtained by confocal microscopy and live cell imaging, and analyzed by Tissue-FAXS. In nude mice, tumors were visualized using the fluorescence imaging devices Maestro™ and PETBOX. In RT-PCR and western blot, the expression of hTERT was detected in HT29 cells, but not in U2OS cells. Fluorescence signals were clearly observed in HT29 cells and in U2OS cells after 1 h of treatment, but signals were only detected in HT29 cells after 24 h. Confocal microscopy showed that 9.65% of U2OS and 78.54% of HT29 cells had positive hTERT signals. 3D animation images showed that the probe could target intranuclear hTERT in the nucleus. In mouse models, fluorescence and PET imaging showed that hTERT in HT29 tumors could be efficiently visualized. In summary, we developed a new method to visualize intracellular and intranuclear proteins both in vitro and in vivo. - Highlights: • We developed new probes for imaging hTERT using Tat-conjugated IgM antibodies labeled with a fluorescent dye and a radioisotope. • These probes could be used to overcome the limitations of conventional antibody imaging systems in live cell imaging. • This system could be applicable to monitoring intracellular and intranuclear proteins in vitro and in vivo.
Berger, Jason; Upton, Colin; Springer, Elyah
2018-04-23
Visualization of nitrite residues is essential in gunshot distance determination. Current protocols for the detection of nitrites include, among other tests, the Modified Griess Test (MGT). This method is limited because nitrite residues are unstable in the environment and confined to partially burned gunpowder. Previous research demonstrated the ability of alkaline hydrolysis to convert nitrates to nitrites, allowing visualization of unburned gunpowder particles using the MGT. This is referred to as Total Nitrite Pattern Visualization (TNV). TNV techniques were modified and a study was conducted to streamline the procedure outlined in the literature and maximize the efficacy of TNV in casework, while reducing the required time from 1 h to 5 min and enhancing effectiveness on blood-soiled samples. The TNV method was found to significantly improve the ability to detect nitrite residues, without sacrificing efficiency, allowing determination of the muzzle-to-target distance. © 2018 American Academy of Forensic Sciences.
Vision and Foraging in Cormorants: More like Herons than Hawks?
White, Craig R.; Day, Norman; Butler, Patrick J.; Martin, Graham R.
2007-01-01
Background Great cormorants (Phalacrocorax carbo L.) show the highest known foraging yield for a marine predator and they are often perceived to be in conflict with human economic interests. They are generally regarded as visually-guided, pursuit-dive foragers, so it would be expected that cormorants have excellent vision, much like aerial predators such as hawks, which detect and pursue prey from a distance. Indeed, cormorant eyes appear to show some specific adaptations to the amphibious lifestyle. They are reported to have a highly pliable lens and powerful intraocular muscles, which are thought to compensate for the loss of corneal refractive power that accompanies immersion and ensure a well focussed image on the retina. However, nothing is known of the visual performance of these birds and how this might influence their prey capture technique. Methodology/Principal Findings We measured the aquatic visual acuity of great cormorants under a range of viewing conditions (illuminance, target contrast, viewing distance) and found it to be unexpectedly poor. Cormorant visual acuity under a range of viewing conditions is in fact comparable to that of unaided humans under water, and markedly inferior to that of aerial predators. We present a prey detectability model based upon the known acuity of cormorants at different illuminances, target contrasts and viewing distances. This shows that cormorants are able to detect individual prey only at close range (less than 1 m). Conclusions/Significance We conclude that cormorants are not the aquatic equivalent of hawks. Their efficient hunting involves the use of specialised foraging techniques which employ brief short-distance pursuit and/or rapid neck extension to capture prey that is visually detected or flushed only at short range. This technique appears to be driven proximately by the cormorant's limited visual capacities, and is analogous to the foraging techniques employed by herons. PMID:17653266
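The detectability model described above can be illustrated with a minimal geometric sketch: a prey item is resolvable when its angular subtense exceeds the minimum resolvable angle implied by the measured acuity at the prevailing illuminance and contrast. The acuity value and prey size used below are placeholders for illustration, not the values measured for cormorants:

```python
import math

def angular_subtense_deg(prey_size_m, distance_m):
    """Visual angle subtended by a prey item of given size at given range."""
    return math.degrees(2 * math.atan(prey_size_m / (2 * distance_m)))

def min_resolvable_angle_deg(acuity_cpd):
    """One full cycle of the finest resolvable grating, in degrees,
    for an acuity expressed in cycles per degree."""
    return 1.0 / acuity_cpd

def detectable(prey_size_m, distance_m, acuity_cpd):
    """True when the prey subtends at least the minimum resolvable angle."""
    return angular_subtense_deg(prey_size_m, distance_m) >= min_resolvable_angle_deg(acuity_cpd)
```

Under these placeholder numbers (a 2 cm prey item and an underwater acuity of 2 cycles/deg), the prey falls below the resolution limit beyond roughly 2 m, qualitatively consistent with the short-range detection the model predicts.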
Electrophysiological evidence for parallel and serial processing during visual search.
Luck, S J; Hillyard, S A
1990-12-01
Event-related potentials were recorded from young adults during a visual search task in order to evaluate parallel and serial models of visual processing in the context of Treisman's feature integration theory. Parallel and serial search strategies were produced by the use of feature-present and feature-absent targets, respectively. In the feature-absent condition, the slopes of the functions relating reaction time and latency of the P3 component to set size were essentially identical, indicating that the longer reaction times observed for larger set sizes can be accounted for solely by changes in stimulus identification and classification time, rather than changes in post-perceptual processing stages. In addition, the amplitude of the P3 wave on target-present trials in this condition increased with set size and was greater when the preceding trial contained a target, whereas P3 activity was minimal on target-absent trials. These effects are consistent with the serial self-terminating search model and appear to contradict parallel processing accounts of attention-demanding visual search performance, at least for a subset of search paradigms. Differences in ERP scalp distributions further suggested that different physiological processes are utilized for the detection of feature presence and absence.
Brown, Stephen B R E; Slagter, Heleen A; van Noorden, Martijn S; Giltay, Erik J; van der Wee, Nic J A; Nieuwenhuis, Sander
2016-01-01
The specific role of neuromodulator systems in regulating rapid fluctuations of attention is still poorly understood. In this study, we examined the effects of clonidine and scopolamine on multiple target detection in a rapid serial visual presentation task to assess the role of the central noradrenergic and cholinergic systems in temporal attention. Eighteen healthy volunteers took part in a crossover double-dummy study in which they received clonidine (150/175 μg), scopolamine (1.2 mg), and placebo by mouth in counterbalanced order. A dual-target attentional blink task was administered at 120 min after scopolamine intake and 180 min after clonidine intake. The electroencephalogram was measured during task performance. Clonidine and scopolamine both impaired detection of the first target (T1). For clonidine, this impairment was accompanied by decreased amplitudes of the P2 and P3 components of the event-related potential. The drugs did not impair second-target (T2) detection, except if T2 was presented immediately after T1. The attentional blink for T2 was not affected, in line with a previous study that found no effect of clonidine on the attentional blink. These and other results suggest that clonidine and scopolamine may impair temporal attention through a decrease in tonic alertness and that this decrease in alertness can be temporarily compensated by a phasic alerting response to a salient stimulus. The comparable behavioral effects of clonidine and scopolamine are consistent with animal studies indicating close interactions between the noradrenergic and cholinergic neuromodulator systems.
USDA-ARS?s Scientific Manuscript database
Toxic heavy metals and radionuclides pose a growing, global threat to the environment. For an intelligent remediation design, reliable analytical tools for detection of relevant species are needed, such as PCR. However, PCR cannot visualize its targets and thus provide information about the morpholo...
Photoinitiator Nucleotide for Quantifying Nucleic Acid Hybridization
Johnson, Leah M.; Hansen, Ryan R.; Urban, Milan; Kuchta, Robert D.; Bowman, Christopher N.
2010-01-01
This first report of a photoinitiator-nucleotide conjugate demonstrates a novel approach for sensitive, rapid and visual detection of DNA hybridization events. This approach holds potential for various DNA labeling schemes and for applications benefiting from selective DNA-based polymerization initiators. Here, we demonstrate covalent, enzymatic incorporation of an eosin-photoinitiator 2′-deoxyuridine-5′-triphosphate (EITC-dUTP) conjugate into surface-immobilized DNA hybrids. Subsequent radical chain photoinitiation from these sites using an acrylamide/bis-acrylamide formulation yields a dynamic detection range between 500 pM and 50 nM of DNA target. Increasing EITC-nucleotide surface densities leads to an increase in surface-based polymer film heights until a plateau of 280 ± 20 nm is reached at 610 ± 70 EITC-nucleotides/μm². Film heights of 10–20 nm were obtained from eosin surface densities of approximately 20 EITC-nucleotides/μm², while below the detection limit of ~10 EITC-nucleotides/μm² no detectable films were formed. This unique threshold behavior is utilized for instrument-free, visual quantification of target DNA concentration ranges. PMID:20337438
NASA Astrophysics Data System (ADS)
Meitzler, Thomas J.
The field of computer vision interacts with fields such as psychology, vision research, machine vision, psychophysics, mathematics, physics, and computer science. The focus of this thesis is new algorithms and methods for computing the probability of detection (Pd) of a target in a cluttered scene. The scene can be either a natural visual scene, such as one sees with the naked eye, or a scene displayed on a monitor with the help of infrared sensors. The relative clutter and the temperature difference between the target and background (ΔT) are defined and then used to calculate a relative signal-to-clutter ratio (SCR), from which the Pd is calculated for a target in a cluttered scene. It is shown how this definition can include many previous definitions of clutter and ΔT. Next, fuzzy and neural-fuzzy techniques are used to calculate the Pd, and it is shown how these methods can give results that correlate well with experiment. The experimental design for measuring the Pd of a target by observers is described. Finally, wavelets are applied to the calculation of clutter, and it is shown how this new wavelet-based definition of clutter can be used to compute the Pd of a target.
Category search speeds up face-selective fMRI responses in a non-hierarchical cortical face network.
Jiang, Fang; Badler, Jeremy B; Righi, Giulia; Rossion, Bruno
2015-05-01
The human brain is extremely efficient at detecting faces in complex visual scenes, but the spatio-temporal dynamics of this remarkable ability, and how it is influenced by category-search, remain largely unknown. In the present study, human subjects were shown gradually-emerging images of faces or cars in visual scenes, while neural activity was recorded using functional magnetic resonance imaging (fMRI). Category search was manipulated by the instruction to indicate the presence of either a face or a car, in different blocks, as soon as an exemplar of the target category was detected in the visual scene. The category selectivity of most face-selective areas was enhanced when participants were instructed to report the presence of faces in gradually decreasing noise stimuli. Conversely, the same regions showed much less selectivity when participants were instructed instead to detect cars. When "face" was the target category, the fusiform face area (FFA) showed consistently earlier differentiation of face versus car stimuli than did the "occipital face area" (OFA). When "car" was the target category, only the FFA showed differentiation of face versus car stimuli. These observations provide further challenges for hierarchical models of cortical face processing and show that during gradual revealing of information, selective category-search may decrease the required amount of information, enhancing and speeding up category-selective responses in the human brain. Copyright © 2015 Elsevier Ltd. All rights reserved.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor); Bell, Joseph L. (Inventor)
1996-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprising at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
Global Positioning System Synchronized Active Light Autonomous Docking System
NASA Technical Reports Server (NTRS)
Howard, Richard (Inventor)
1994-01-01
A Global Positioning System Synchronized Active Light Autonomous Docking System (GPSSALADS) for automatically docking a chase vehicle with a target vehicle comprises at least one active light emitting target which is operatively attached to the target vehicle. The target includes a three-dimensional array of concomitantly flashing lights which flash at a controlled common frequency. The GPSSALADS further comprises a visual tracking sensor operatively attached to the chase vehicle for detecting and tracking the target vehicle. Its performance is synchronized with the flash frequency of the lights by a synchronization means which is comprised of first and second internal clocks operatively connected to the active light target and visual tracking sensor, respectively, for providing timing control signals thereto, respectively. The synchronization means further includes first and second Global Positioning System receivers operatively connected to the first and second internal clocks, respectively, for repeatedly providing simultaneous synchronization pulses to the internal clocks, respectively. In addition, the GPSSALADS includes a docking process controller means which is operatively attached to the chase vehicle and is responsive to the visual tracking sensor for producing commands for the guidance and propulsion system of the chase vehicle.
ERIC Educational Resources Information Center
LoBue, Vanessa
2010-01-01
Spiders are among the most common targets of fears and phobias in the world. In visual search tasks, adults detect their presence more rapidly than other kinds of stimuli. Reported here is an investigation of whether young children share this attentional bias for the detection of spiders. In a series of experiments, preschoolers and adults were…
Familiarity facilitates feature-based face processing.
Visconti di Oleggio Castello, Matteo; Wheeler, Kelsey G; Cipolli, Carlo; Gobbini, M Ida
2017-01-01
Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes.
Application of Visual Attention in Seismic Attribute Analysis
NASA Astrophysics Data System (ADS)
He, M.; Gu, H.; Wang, F.
2016-12-01
It has been proved that seismic attributes can be used to predict reservoir properties. The combination of multi-attribute analysis with geological statistics, data mining, and artificial intelligence has further promoted the development of seismic attribute analysis. However, existing methods tend to suffer from multiple solutions and insufficient generalization ability, mainly because of the complex relationship between seismic data and geological information, and partly because of the methods applied. Visual attention is a mechanism model of the human visual system that can rapidly concentrate on a few significant visual objects, even in a cluttered scene, and it exhibits good target detection and recognition ability. In our study, the targets to be predicted are treated as visual objects, and an object representation based on well data is constructed in the attribute dimensions. In the same attribute space, this representation then serves as a criterion to search for potential targets away from the wells. The method does not predict properties by building a complicated relation between attributes and reservoir properties, but instead matches candidates against the predetermined standard. It therefore has good generalization ability, and the problem of multiple solutions can be mitigated by defining a similarity threshold.
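The search step can be sketched as a similarity test in attribute space: a well-derived reference object is compared against the attribute vector of each candidate location. The Euclidean distance measure and the distance threshold below are illustrative assumptions, not details given in the abstract:

```python
import math

def distance(u, v):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def search_targets(reference, candidates, max_dist=1.0):
    """Return indices of candidate locations whose attribute vector lies
    within max_dist of the well-derived reference object.
    Attributes should first be normalized to comparable scales."""
    return [i for i, c in enumerate(candidates) if distance(reference, c) <= max_dist]

# Hypothetical attribute triples, e.g. (amplitude, frequency, coherence):
reference = (1.0, 2.0, 3.0)
candidates = [(1.1, 2.0, 3.0), (10.0, 10.0, 10.0)]
hits = search_targets(reference, candidates)  # → [0]
```

In practice the threshold would be calibrated against held-out well data rather than fixed a priori.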
Attentional Shifts between Audition and Vision in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Occelli, Valeria; Esposito, Gianluca; Venuti, Paola; Arduino, Giuseppe Maurizio; Zampini, Massimiliano
2013-01-01
Previous evidence on neurotypical adults shows that the presentation of a stimulus allocates the attention to its modality, resulting in faster responses to a subsequent target presented in the same (vs. different) modality. People with Autism Spectrum Disorders (ASDs) often fail to detect a (visual or auditory) target in a stream of stimuli after…
2015-06-09
Anomaly detection, which is generally considered part of high-level information fusion (HLIF) involving temporal-geospatial data as well as meta-data... Anomaly detection in the maritime defence and security domain typically focusses on trying to identify vessels that are behaving in an unusual... manner compared with lawful vessels operating in the area – an applied case of target detection among distractors. Anomaly detection is a complex problem
Children with Autism Detect Targets at Very Rapid Presentation Rates with Similar Accuracy as Adults
ERIC Educational Resources Information Center
Hagmann, Carl Erick; Wyble, Bradley; Shea, Nicole; LeBlanc, Megan; Kates, Wendy R.; Russo, Natalie
2016-01-01
Enhanced perception may allow for visual search superiority by individuals with Autism Spectrum Disorder (ASD), but does it occur over time? We tested high-functioning children with ASD, typically developing (TD) children, and TD adults in two tasks at three presentation rates (50, 83.3, and 116.7 ms/item) using rapid serial visual presentation.…
EEG and Eye Tracking Signatures of Target Encoding during Structured Visual Search
Brouwer, Anne-Marie; Hogervorst, Maarten A.; Oudejans, Bob; Ries, Anthony J.; Touryan, Jonathan
2017-01-01
EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later. PMID:28559807
Focused ultrasound: concept for automated transcutaneous control of hemorrhage in austere settings.
Kucewicz, John C; Bailey, Michael R; Kaczkowski, Peter J; Carter, Stephen J
2009-04-01
High intensity focused ultrasound (HIFU) is being developed for a range of clinical applications. Of particular interest to NASA and the military is the use of HIFU for traumatic injuries because HIFU has the unique ability to transcutaneously stop bleeding. Automation of this technology would make possible its use in remote, austere settings by personnel not specialized in medical ultrasound. Here a system to automatically detect and target bleeding is tested and reported. The system uses Doppler ultrasound images from a clinical ultrasound scanner for bleeding detection and hardware for HIFU therapy. The system was tested using a moving string to simulate blood flow and targeting was visualized by Schlieren imaging to show the focusing of the HIFU acoustic waves. When instructed by the operator, a Doppler ultrasound image is acquired and processed to detect and localize the moving string, and the focus of the HIFU array is electronically adjusted to target the string. Precise and accurate targeting was verified in the Schlieren images. An automated system to detect and target simulated bleeding has been built and tested. The system could be combined with existing algorithms to detect, target, and treat clinical bleeding.
Ogourtsova, Tatiana; Archambault, Philippe S; Lamontagne, Anouk
2018-04-23
Unilateral spatial neglect (USN), a highly prevalent and disabling post-stroke impairment, has been shown to affect the recovery of locomotor and navigation skills needed for community mobility. We recently found that USN alters goal-directed locomotion in conditions of different cognitive/perceptual demands. However, sensorimotor post-stroke dysfunction (e.g. decreased walking speed) could have influenced the results. Analogous to a previously used goal-directed locomotor paradigm, a seated, joystick-driven navigation experiment, minimizing locomotor demands, was employed in individuals with and without post-stroke USN (USN+ and USN-, respectively) and healthy controls (HC). Participants (n = 15 per group) performed a seated, joystick-driven navigation and detection time task to targets 7 m away at 0°, ±15°/30° in actual (visually-guided), remembered (memory-guided) and shifting (visually-guided with representational updating component) conditions while immersed in a 3D virtual reality environment. Greater end-point mediolateral errors to left-sided targets (remembered and shifting conditions) and overall lengthier onsets in reorientation strategy (shifting condition) were found for USN+ vs. USN- and vs. HC (p < 0.05). USN+ individuals mostly overshot left targets (−15°/−30°). Greater delays in detection time for target locations across the visual spectrum (left, middle and right) were found in USN+ vs. USN- and HC groups (p < 0.05). USN-related attentional-perceptual deficits alter navigation abilities in memory-guided and shifting conditions, independently of post-stroke locomotor deficits. Lateralized and non-lateralized deficits in object detection are found. The employed paradigm could be considered in the design and development of sensitive and functional assessment methods for neglect, thereby addressing the drawbacks of currently used traditional paper-and-pencil tools.
Grasp cueing and joint attention.
Tschentscher, Nadja; Fischer, Martin H
2008-10-01
We studied how two different hand posture cues affect joint attention in normal observers. Visual targets appeared over lateralized objects, with different delays after centrally presented hand postures. Attention was cued by either hand direction or the congruency between hand aperture and object size. Participants pressed a button when they detected a target. Direction cues alone facilitated target detection following short delays but aperture cues alone were ineffective. In contrast, when hand postures combined direction and aperture cues, aperture congruency effects without directional congruency effects emerged and persisted, but only for power grips. These results suggest that parallel parameter specification makes joint attention mechanisms exquisitely sensitive to the timing and content of contextual cues.
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time
Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
2017-01-01
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection© (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21–66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants’ performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. PMID:28218598
Rapid and visual detection of Leptospira in urine by LigB-LAMP assay with pre-addition of dye.
Ali, Syed Atif; Kaur, Gurpreet; Boby, Nongthombam; Sabarinath, T; Solanki, Khushal; Pal, Dheeraj; Chaudhuri, Pallab
2017-12-01
Leptospirosis is considered to be the most widespread zoonotic disease, caused by pathogenic species of Leptospira. The present study reports a novel set of primers targeting the LigB gene for visual detection of pathogenic Leptospira in urine samples through loop-mediated isothermal amplification (LAMP). Results were recorded using hydroxy naphthol blue (HNB), SYBR Green I and calcein. The analytical sensitivity of LAMP was as few as 10 leptospiral organisms in spiked urine samples from cattle and dogs. The LigB gene-based LAMP, termed LigB-LAMP, was found to be 10 times more sensitive than conventional PCR. The diagnostic specificity of LAMP was 100% when compared to SYBR Green qPCR for detection of Leptospira in urine samples. Although qPCR was found to be more sensitive, the rapidity and simplicity of setting up the LAMP test, followed by visual detection of Leptospira infection in clinical samples, make LigB-LAMP an alternative and favourable diagnostic tool in resource-poor settings. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hecker, Elizabeth A.; Serences, John T.; Srinivasan, Ramesh
2013-01-01
Interacting with the environment requires the ability to flexibly direct attention to relevant features. We examined the degree to which individuals attend to visual features within and across Detection, Fine Discrimination, and Coarse Discrimination tasks. Electroencephalographic (EEG) responses were measured to an unattended peripheral flickering (4 or 6 Hz) grating while individuals (n = 33) attended to orientations that were offset by 0°, 10°, 20°, 30°, 40°, and 90° from the orientation of the unattended flicker. These unattended responses may be sensitive to attentional gain at the attended spatial location, since attention to features enhances early visual responses throughout the visual field. We found no significant differences in tuning curves across the three tasks in part due to individual differences in strategies. We sought to characterize individual attention strategies using hierarchical Bayesian modeling, which grouped individuals into families of curves that reflect attention to the physical target orientation (“on-channel”) or away from the target orientation (“off-channel”) or a uniform distribution of attention. The different curves were related to behavioral performance; individuals with “on-channel” curves had lower thresholds than individuals with uniform curves. Individuals with “off-channel” curves during Fine Discrimination additionally had lower thresholds than those assigned to uniform curves, highlighting the perceptual benefits of attending away from the physical target orientation during fine discriminations. Finally, we showed that a subset of individuals with optimal curves (“on-channel”) during Detection also demonstrated optimal curves (“off-channel”) during Fine Discrimination, indicating that a subset of individuals can modulate tuning optimally for detection and discrimination. PMID:23678013
Behavioural benefits of multisensory processing in ferrets.
Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R
2017-01-01
Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Modeling peripheral vision for moving target search and detection.
Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre
2012-06-01
Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In an urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In a rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 POs among 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by peripheral angle and entrance speed of POs. Fast moving objects were detected faster than slower objects, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
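The per-subject prediction model described above can be approximated, for illustration, by an ordinary least-squares fit of a second-order polynomial in eccentricity angle and entrance speed. This is a simplified stand-in for the paper's mixed-effects formulation (which also pools across subjects); all names and units here are assumptions:

```python
import numpy as np

def fit_detection_model(angle, speed, rt):
    """Fit a second-order model of detection reaction time as a function
    of peripheral eccentricity angle and target entrance speed:

        RT ~ b0 + b1*angle + b2*speed + b3*angle**2 + b4*speed**2 + b5*angle*speed
    """
    X = np.column_stack([np.ones_like(angle), angle, speed,
                         angle**2, speed**2, angle * speed])
    beta, *_ = np.linalg.lstsq(X, rt, rcond=None)
    return beta

def predict_rt(beta, angle, speed):
    """Predicted reaction time for one (angle, speed) pair."""
    x = np.array([1.0, angle, speed, angle**2, speed**2, angle * speed])
    return float(x @ beta)
```

A true mixed-effects fit would additionally estimate random intercepts/slopes per subject; this sketch shows only the fixed second-order structure.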
Visual Search in Dementia with Lewy Bodies and Alzheimer’s Disease
Landy, Kelly M.; Salmon, David P.; Filoteo, J. Vincent; Heindel, William C.; Galasko, Douglas; Hamilton, Joanne M.
2016-01-01
Visual search is an aspect of visual cognition that may be more impaired in Dementia with Lewy Bodies (DLB) than Alzheimer’s disease (AD). To assess this possibility, the present study compared patients with DLB (n=17), AD (n=30), or Parkinson’s disease with dementia (PDD; n=10) to non-demented patients with PD (n=18) and normal control (NC) participants (n=13) on single-feature and feature-conjunction visual search tasks. In the single-feature task participants had to determine if a target stimulus (i.e., a black dot) was present among 3, 6, or 12 distractor stimuli (i.e., white dots) that differed in one salient feature. In the feature-conjunction task participants had to determine if a target stimulus (i.e., a black circle) was present among 3, 6, or 12 distractor stimuli (i.e., white dots and black squares) that shared either of the target’s salient features. Results showed that target detection time in the single-feature task was not influenced by the number of distractors (i.e., “pop-out” effect) for any of the groups. In contrast, target detection time increased as the number of distractors increased in the feature-conjunction task for all groups, but more so for patients with AD or DLB than for any of the other groups. These results suggest that the single-feature search “pop-out” effect is preserved in DLB and AD patients, whereas the ability to perform the feature-conjunction search is impaired. This pattern of preserved single-feature search with impaired feature-conjunction search is consistent with a deficit in feature binding that may be mediated by abnormalities in networks involving the dorsal occipito-parietal cortex.
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Wang, Cong; Li, Rong; Quan, Sheng; Shen, Ping; Zhang, Dabing; Shi, Jianxin; Yang, Litao
2015-06-01
Isothermal DNA/RNA amplification techniques are the primary methodology for developing on-spot rapid nucleic acid amplification assays, and the loop-mediated isothermal amplification (LAMP) technique has been developed and applied in the detection of foodborne pathogens, plant/animal viruses, and genetically modified (GM) food/feed contents. In this study, one set of LAMP assays targeting eight frequently used universal elements, marker genes, and exogenous target genes, such as the CaMV35S promoter, FMV35S promoter, NOS terminator, bar, cry1Ac, CP4 epsps, pat, and NptII, was developed for visual screening of GM contents in plant-derived food samples with high efficiency and accuracy. For these eight LAMP assays, specificity was evaluated by testing commercial GM plant events, and limits of detection were determined: 10 haploid genome equivalents (HGE) for the FMV35S promoter, cry1Ac, and pat assays, and five HGE for the CaMV35S promoter, bar, NOS terminator, CP4 epsps, and NptII assays. The screening applicability of these LAMP assays was further validated successfully using practical canola, soybean, and maize samples. The results suggested that the established visual LAMP assays are applicable and cost-effective for GM screening in plant-derived food samples.
Heim, Stefan; von Tongeln, Franziska; Hillen, Rebekka; Horbach, Josefine; Radach, Ralph; Günther, Thomas
2018-06-19
The Landolt paradigm is a visual scanning task intended to evoke reading-like eye-movements in the absence of orthographic or lexical information, thus allowing the dissociation of (sub-)lexical vs. visual processing. To that end, all letters in real word sentences are exchanged for closed Landolt rings, with 0, 1, or 2 open Landolt rings as targets in each Landolt sentence. A preliminary fMRI block-design study (Hillen et al. in Front Hum Neurosci 7:1-14, 2013) demonstrated that the Landolt paradigm has a special neural signature, recruiting the right IPS and SPL as part of the endogenous attention network. However, in that analysis, the brain responses to target detection could not be separated from those involved in processing Landolt stimuli without targets. The present study reports two fMRI experiments testing whether the targets, or the Landolt stimuli per se, led to the right IPS/SPL activation. Experiment 1 was an event-related re-analysis of the Hillen et al. (Front Hum Neurosci 7:1-14, 2013) data. Experiment 2 was a replication study with a new sample and identical procedures. In both experiments, the right IPS/SPL were recruited in the Landolt condition as compared to orthographic stimuli even in the absence of any target in the stimulus, indicating that the properties of the Landolt task itself trigger this right parietal activation. These findings are discussed against the background of behavioural and neuroimaging studies of healthy reading as well as developmental and acquired dyslexia. Consequently, this neuroimaging evidence might encourage the use of the Landolt paradigm also in the context of examining reading disorders, as it taps into the orientation of visual attention during reading-like scanning of stimuli without interfering sub-lexical information.
Spatially parallel processing of within-dimension conjunctions.
Linnell, K J; Humphreys, G W
2001-01-01
Within-dimension conjunction search for red-green targets amongst red-blue, and blue-green, nontargets is extremely inefficient (Wolfe et al, 1990 Journal of Experimental Psychology: Human Perception and Performance 16 879-892). We tested whether pairs of red-green conjunction targets can nevertheless be processed spatially in parallel. Participants made speeded detection responses whenever a red-green target was present. Across trials where a second identical target was present, the distribution of detection times was compatible with the assumption that targets were processed in parallel (Miller, 1982 Cognitive Psychology 14 247-279). We show that this was not an artifact of response-competition or feature-based processing. We suggest that within-dimension conjunctions can be processed spatially in parallel. Visual search for such items may be inefficient owing to within-dimension grouping between items.
Molecular magnetic resonance imaging of atherosclerotic vessel wall disease.
Nörenberg, Dominik; Ebersberger, Hans U; Diederichs, Gerd; Hamm, Bernd; Botnar, René M; Makowski, Marcus R
2016-03-01
Molecular imaging aims to improve the identification and characterization of pathological processes in vivo by visualizing the underlying biological mechanisms. Molecular imaging techniques are increasingly used to assess vascular inflammation, remodeling, cell migration, angioneogenesis and apoptosis. In cardiovascular diseases, molecular magnetic resonance imaging (MRI) offers new insights into the in vivo biology of pathological vessel wall processes of the coronary and carotid arteries and the aorta. This includes detection of early vascular changes preceding plaque development, visualization of unstable plaques and assessment of response to therapy. The current review focuses on recent developments in the field of molecular MRI to characterise different stages of atherosclerotic vessel wall disease. A variety of molecular MR-probes have been developed to improve the non-invasive detection and characterization of atherosclerotic plaques. Specifically targeted molecular probes allow for the visualization of key biological steps in the cascade leading to the development of arterial vessel wall lesions. Early detection of processes which lead to the development of atherosclerosis and the identification of vulnerable atherosclerotic plaques may enable the early assessment of response to therapy, improve therapy planning, foster the prevention of cardiovascular events and may open the door for the development of patient-specific treatment strategies. Targeted MR-probes allow the characterization of atherosclerosis on a molecular level. Molecular MRI can identify in vivo markers for the differentiation of stable and unstable plaques. Visualization of early molecular changes has the potential to improve patient-individualized risk-assessment.
Chen, Jinyang; Liu, Yucheng; Ji, Xinghu; He, Zhike
2016-09-15
In this work, a versatile dumbbell molecular (DM) probe was designed and employed in a sensitive homogeneous bioassay. In the presence of the target molecule, the DM probe was protected from digestion by exonucleases. Subsequently, the protected DM probe specifically bound to the intercalation dye and produced an obvious fluorescence signal, which in turn was used to determine the target molecule. This design allows specific and versatile detection of diverse targets with easy operation and no sophisticated fluorescence labeling. Integrating the idea of the target-protecting DM probe with an adenosine triphosphate (ATP)-involved ligation reaction, the DM probe with 5'-end phosphorylation was successfully constructed for ATP detection, and the limit of detection was found to be 4.8 pM. Thanks to its excellent selectivity and sensitivity, this sensing strategy was used to detect ATP spiked in human serum as well as cellular ATP. Moreover, the proposed strategy was also applied in the visual detection of ATP in a droplet-based microfluidic platform with satisfactory results. Similarly, combining the principle of the target-protecting DM probe with streptavidin (SA)-biotin interaction, the DM probe with 3'-end biotinylation was developed for selective and sensitive SA determination, which demonstrated the robustness and versatility of this design.
Hu, Bo; Guo, Jing; Xu, Ying; Wei, Hua; Zhao, Guojie; Guan, Yifu
2017-08-01
Rapid and accurate detection of microRNAs in biological systems is of great importance. Here, we report the development of a visual colorimetric assay which possesses the high amplification capabilities and high selectivity of the rolling circle amplification (RCA) method and the simplicity and convenience of gold nanoparticles used as a signal indicator. The designed padlock probe recognizes the target miRNA and is circularized, and then acts as the template to extend the target miRNA into a long single-stranded nucleotide chain of many tandem repeats of nucleotide sequences. Next, the RCA product is hybridized with oligonucleotides tagged onto gold nanoparticles. This interaction leads to the aggregation of gold nanoparticles, and the color of the system changes from wine red to dark blue according to the abundance of miRNA. A linear correlation between fluorescence and target oligonucleotide content was obtained in the range 0.3-300 pM, along with a detection limit of 0.13 pM (n = 7) and a RSD of 3.9% (30 pM, n = 9). The present approach provides a simple, rapid, and accurate visual colorimetric assay that allows sensitive biodetection and bioanalysis of DNA and RNA nucleotides of interest in biologically important samples. Graphical abstract The colorimetric assay system for analyzing target oligonucleotides.
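Detection limits like the 0.13 pM figure reported above are conventionally derived from the slope of the linear calibration curve and the standard deviation of blank measurements. A sketch under the common 3σ/slope convention (the abstract does not state which convention the authors used, so this is illustrative only):

```python
import numpy as np

def limit_of_detection(conc, signal, blank_signals):
    """Fit a linear calibration curve signal = slope*conc + intercept and
    estimate the limit of detection as 3 * sd(blank) / slope."""
    slope, intercept = np.polyfit(conc, signal, 1)
    lod = 3.0 * np.std(blank_signals, ddof=1) / slope
    return slope, intercept, lod
```

The relative standard deviation (RSD) quoted in such reports is simply sd/mean of replicate measurements at one concentration, expressed as a percentage.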
Huang, Yishun; Fang, Luting; Zhu, Zhi; Ma, Yanli; Zhou, Leiji; Chen, Xi; Xu, Dunming; Yang, Chaoyong
2016-11-15
Due to uranium's increasing exploitation in nuclear energy and its toxicity to human health, it is of great significance to detect uranium contamination. In particular, development of a rapid, sensitive and portable method is important for personal health care for those who frequently come into contact with uranium ore mining or who investigate leaks at nuclear power plants. The most stable form of uranium in water is the uranyl ion (UO2(2+)). In this work, a UO2(2+)-responsive smart hydrogel was designed and synthesized for rapid, portable, sensitive detection of UO2(2+). A UO2(2+)-dependent DNAzyme complex composed of a substrate strand and an enzyme strand was utilized to crosslink DNA-grafted polyacrylamide chains to form a DNA hydrogel. Colorimetric analysis was achieved by encapsulating gold nanoparticles (AuNPs) in the DNAzyme-crosslinked hydrogel to indicate the concentration of UO2(2+). Without UO2(2+), the enzyme strand is not active. The presence of UO2(2+) in the sample activates the enzyme strand and triggers the cleavage of the substrate strand from the enzyme strand, thereby decreasing the density of crosslinkers and destabilizing the hydrogel, which then releases the encapsulated AuNPs. As little as 100 nM UO2(2+) was visually detected by the naked eye. The target-responsive hydrogel was also demonstrated to be applicable in natural water spiked with UO2(2+). Furthermore, to avoid the visual errors caused by naked-eye observation, a previously developed volumetric bar-chart chip (V-Chip) was used to quantitatively detect UO2(2+) concentrations in water by encapsulating Au-Pt nanoparticles in the hydrogel. The UO2(2+) concentrations were visually quantified from the travelling distance of the ink bar on the V-Chip. The method can be used for portable and quantitative detection of uranium in field applications without skilled operators and sophisticated instruments.
Easy detection of multiple Alexandrium species using DNA chromatography chip.
Nagai, Satoshi; Miyamoto, Shigehiko; Ino, Keita; Tajimi, Seisuke; Nishi, Hiromi; Tomono, Jun
2016-01-01
In this study, the Kaneka DNA chromatography chip (KDCC) for the Alexandrium species was successfully developed for simultaneous detection of five Alexandrium species. This method utilizes a DNA-DNA hybridization technology. In the PCR process, specifically designed tagged primers are used: a forward primer consisting of a tag domain, which can conjugate with gold nanocolloids on the chip, and a primer domain, which can anneal to and amplify the target sequence. The reverse primer likewise consists of a tag domain, which can hybridize to the solid-phased capture probe on the chip, and a primer domain, which can anneal to and amplify the target sequence. As a result, a red line that originates from gold nanocolloids appears as a positive signal on the chip, and the amplicon is detected visually by the naked eye. The technique is simple, because the target species can be visually detected soon (<5 min) after the application of 2 μL of PCR amplicon and 65 μL of development buffer to the sample pad of the chip. Further, the technique is relatively inexpensive, requiring only a thermal cycler rather than expensive laboratory equipment such as real-time Q-PCR machines or DNA microarray detectors. The detection limit of the KDCC for the five Alexandrium species varied among species, from <0.1 to 10 pg, equivalent to 5-500 copies of rRNA genes, indicating that the technique is sensitive enough for practical use to detect several cells of the target species in 1 L of seawater. The detection sensitivity of the KDCC was also evaluated against two other techniques, a multiplex PCR and digital DNA hybridization by a digital DNA chip analyzer (DDCA), using natural plankton assemblages. There was no significant difference in detection sensitivity among the three techniques, suggesting the KDCC can be readily used to monitor HAB species.
Magnetoferritin nanoparticles for targeting and visualizing tumour tissues
NASA Astrophysics Data System (ADS)
Fan, Kelong; Cao, Changqian; Pan, Yongxin; Lu, Di; Yang, Dongling; Feng, Jing; Song, Lina; Liang, Minmin; Yan, Xiyun
2012-07-01
Engineered nanoparticles have been used to provide diagnostic, therapeutic and prognostic information about the status of disease. Nanoparticles developed for these purposes are typically modified with targeting ligands (such as antibodies, peptides or small molecules) or contrast agents using complicated processes and expensive reagents. Moreover, this approach can lead to an excess of ligands on the nanoparticle surface, and this causes non-specific binding and aggregation of nanoparticles, which decreases detection sensitivity. Here, we show that magnetoferritin nanoparticles (M-HFn) can be used to target and visualize tumour tissues without the use of any targeting ligands or contrast agents. Iron oxide nanoparticles are encapsulated inside a recombinant human heavy-chain ferritin (HFn) protein shell, which binds to tumour cells that overexpress transferrin receptor 1 (TfR1). The iron oxide core catalyses the oxidation of peroxidase substrates in the presence of hydrogen peroxide to produce a colour reaction that is used to visualize tumour tissues. We examined 474 clinical specimens from patients with nine types of cancer and verified that these nanoparticles can distinguish cancerous cells from normal cells with a sensitivity of 98% and specificity of 95%.
Wang, Hao; Crewther, Sheila G.; Liang, Minglong; Laycock, Robin; Yu, Tao; Alexander, Bonnie; Crewther, David P.; Wang, Jian; Yin, Zhengqin
2017-01-01
Strabismic amblyopia is now acknowledged to be more than a simple loss of acuity and to involve alterations in visually driven attention, though whether this applies to both stimulus-driven and goal-directed attention has not been explored. Hence we investigated monocular threshold performance during a motion salience-driven attention task involving detection of a coherent dot motion target in one of four quadrants in adult controls and those with strabismic amblyopia. Psychophysical motion thresholds were impaired for the strabismic amblyopic eye, requiring longer inspection time and consequently slower target speed for detection compared to the fellow eye or control eyes. We compared fMRI activation and functional connectivity between four ROIs of the occipital-parieto-frontal visual attention network [primary visual cortex (V1), motion sensitive area V5, intraparietal sulcus (IPS) and frontal eye fields (FEF)], during a suprathreshold version of the motion-driven attention task, and also a simple goal-directed task, requiring voluntary saccades to targets randomly appearing along a horizontal line. Activation was compared when viewed monocularly by controls and the amblyopic and its fellow eye in strabismics. BOLD activation was weaker in IPS, FEF and V5 for both tasks when viewing through the amblyopic eye compared to viewing through the fellow eye or control participants' non-dominant eye. No difference in V1 activation was seen between the amblyopic and fellow eye, nor between the two eyes of control participants during the motion salience task, though V1 activation was significantly less through the amblyopic eye than through the fellow eye and control group non-dominant eye viewing during the voluntary saccade task. Functional correlations of ROIs within the attention network were impaired through the amblyopic eye during the motion salience task, whereas this was not the case during the voluntary saccade task. 
Specifically, FEF showed reduced functional connectivity with visual cortical nodes during the motion salience task through the amblyopic eye, despite suprathreshold detection performance. This suggests that the reduced ability of the amblyopic eye to activate the frontal components of the attention networks may help explain the aberrant control of visual attention and eye movements in amblyopes.
Jungblut, P W; Sierralta, W D
1998-04-01
Estradiol is released from the binding niche of the receptor and covalently arrested in the molecular vicinity by the Mannich reaction during target fixation in acetic acid/formaldehyde. The exposed steroid is freely accessible for appropriate antibodies. It can be visualized in sections by the second antibody/enzyme technique in high resolution and without enhancements.
Brain activation underlying threat detection to targets of different races.
Senholzi, Keith B; Depue, Brendan E; Correll, Joshua; Banich, Marie T; Ito, Tiffany A
2015-01-01
The current study examined blood oxygen level-dependent signal underlying racial differences in threat detection. During functional magnetic resonance imaging, participants determined whether pictures of Black or White individuals held weapons. They were instructed to make "shoot" responses when the picture showed armed individuals but "don't shoot" responses to unarmed individuals, with the cost of not shooting armed individuals being greater than that of shooting unarmed individuals. Participants were faster to shoot armed Blacks than Whites, but faster in making "don't shoot" responses to unarmed Whites than Blacks. Brain activity differed to armed versus unarmed targets depending on target race, suggesting different mechanisms underlying threat versus safety decisions. Anterior cingulate cortex was preferentially engaged for unarmed Whites relative to Blacks. Parietal and visual cortical regions exhibited greater activity for armed Blacks than Whites. Seed-based functional connectivity of the amygdala revealed greater coherence with parietal and visual cortices for armed Blacks than Whites. Furthermore, greater implicit Black-danger associations were associated with increased amygdala activation to armed Blacks, compared to armed Whites. Our results suggest that different neural mechanisms may underlie racial differences in responses to armed versus unarmed targets.
Retina-V1 model of detectability across the visual field
Bradley, Chris; Abrams, Jared; Geisler, Wilson S.
2014-01-01
A practical model is proposed for predicting the detectability of targets at arbitrary locations in the visual field, in arbitrary gray scale backgrounds, and under photopic viewing conditions. The major factors incorporated into the model include (a) the optical point spread function of the eye, (b) local luminance gain control (Weber's law), (c) the sampling array of retinal ganglion cells, (d) orientation and spatial frequency–dependent contrast masking, (e) broadband contrast masking, and (f) efficient response pooling. The model is tested against previously reported threshold measurements on uniform backgrounds (the ModelFest data set and data from Foley, Varadharajan, Koh, & Farias, 2007) and against new measurements reported here for several ModelFest targets presented on uniform, 1/f noise, and natural backgrounds at retinal eccentricities ranging from 0° to 10°. Although the model has few free parameters, it is able to account quite well for all the threshold measurements.
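Two of the model stages listed above lend themselves to a compact illustration: local luminance gain control via Weber contrast, and efficient response pooling, which in models of this kind is commonly implemented as a Minkowski sum over local detector responses. A simplified sketch (the pooling exponent is a typical textbook value, not the paper's fitted parameter):

```python
import numpy as np

def weber_contrast_image(target, background):
    """Local Weber contrast of a target patch against its background:
    (L - L_bg) / L_bg applied pixelwise, a simplified stand-in for the
    model's luminance gain-control stage."""
    l_bg = background.mean()
    return (target - l_bg) / l_bg

def minkowski_pool(responses, beta=3.5):
    """Pool local detector responses into a single detectability value
    with a Minkowski sum; beta around 3-4 is typical in such models
    (beta -> infinity approaches a max rule)."""
    r = np.abs(np.asarray(responses, dtype=float))
    return (r**beta).sum() ** (1.0 / beta)
```

With this pooling rule, adding a second, equally responsive detector raises detectability only modestly (by a factor of 2^(1/beta)), which is one way such models capture near-threshold summation.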
Does apparent size capture attention in visual search? Evidence from the Muller-Lyer illusion.
Proulx, Michael J; Green, Monique
2011-11-23
Is perceived size a crucial factor for the bottom-up guidance of attention? Here, a visual search experiment was used to examine whether an irrelevantly longer object can capture attention when participants were to detect a vertical target item. The longer object was created by an apparent size manipulation, the Müller-Lyer illusion; however, all objects contained the same number of pixels. The vertical target was detected more efficiently when it was also perceived as the longer item that was defined by apparent size. Further analysis revealed that the longer Müller-Lyer object received a greater degree of attentional priority than published results for other features such as retinal size, luminance contrast, and the abrupt onset of a new object. The present experiment has demonstrated for the first time that apparent size can capture attention and, thus, provide bottom-up guidance on the basis of perceived salience.
NASA Astrophysics Data System (ADS)
Muggiolu, Giovanna; Pomorski, Michal; Claverie, Gérard; Berthet, Guillaume; Mer-Calfati, Christine; Saada, Samuel; Devès, Guillaume; Simon, Marina; Seznec, Hervé; Barberet, Philippe
2017-01-01
As well as being a significant source of environmental radiation exposure, α-particles are increasingly considered for use in targeted radiation therapy. A better understanding of α-particle induced damage at the DNA scale can be achieved by following their tracks in real-time in targeted living cells. Focused α-particle microbeams can facilitate this but, due to their low energy (up to a few MeV) and limited range, α-particle detection, delivery, and follow-up observations of radiation-induced damage remain difficult. In this study, we developed a thin Boron-doped Nano-Crystalline Diamond membrane that allows reliable single α-particle detection and single-cell irradiation with negligible beam scattering. The radiation-induced responses to single 3 MeV α-particles delivered with the focused microbeam are visualized in situ over thirty minutes after irradiation by the accumulation of the GFP-tagged RNF8 protein at DNA damage sites.
The Effects of Load Carriage and Physical Fatigue on Cognitive Performance
Eddy, Marianna D.; Hasselquist, Leif; Giles, Grace; Hayes, Jacqueline F.; Howe, Jessica; Rourke, Jennifer; Coyne, Megan; O’Donovan, Meghan; Batty, Jessica; Brunyé, Tad T.; Mahoney, Caroline R.
2015-01-01
In the current study, ten participants walked for two hours while carrying no load or a 40 kg load. During the second hour, treadmill grade was manipulated, either held at a constant downhill or changed between flat, uphill, and downhill grades. Throughout the prolonged walk, participants performed two cognitive tasks, an auditory go/no-go task and a visual target detection task. The main findings were that the number of false alarms increased over time in the loaded condition relative to the unloaded condition on the auditory go/no-go task. There were also shifts in response criterion towards responding yes, and decreased sensitivity, in the loaded condition compared to the unloaded condition. In the visual target detection task there were no reliable effects of load carriage in the overall analysis; however, reaction times were slower in the loaded than the unloaded condition during the second hour.
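The sensitivity and response-criterion shifts reported for the go/no-go task are standard signal-detection quantities; they can be computed from response counts as follows (a generic sketch, not the authors' analysis script):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response criterion (c)
    from response counts. Rates of exactly 0 or 1 are clamped with
    the standard 1/(2N) correction so z-scores stay finite."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF

    def rate(k, n):
        return min(max(k / n, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))

    h = rate(hits, hits + misses)
    f = rate(false_alarms, false_alarms + correct_rejections)
    return z(h) - z(f), -0.5 * (z(h) + z(f))
```

A negative criterion c corresponds to the liberal shift towards responding "yes" described above; lower d' corresponds to the decreased sensitivity under load.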
A new terminal guidance sensor system for asteroid intercept or rendezvous missions
NASA Astrophysics Data System (ADS)
Lyzhoft, Joshua; Basart, John; Wie, Bong
2016-02-01
This paper presents the initial conceptual study results of a new terminal guidance sensor system for asteroid intercept or rendezvous missions, which explores the use of visual, infrared, and radar devices. As was demonstrated by NASA's Deep Impact mission, visual cameras can be effectively utilized for hypervelocity intercept terminal guidance for a 5 kilometer target. Other systems, such as Raytheon's EKV (Exoatmospheric Kill Vehicle), employ a different scheme that utilizes infrared target information to intercept ballistic missiles. Another example that uses infrared information is the NEOWISE telescope, which is used for asteroid detection and tracking. This paper describes the signal-to-noise ratio estimation problem for infrared sensors, minimum and maximum range of detection, and computational validation using GPU-accelerated simulations. Small targets (50-100 m in diameter) are considered, and scaled polyhedron models of known objects are utilized, such as Comet 67P/Churyumov-Gerasimenko (target of the Rosetta mission), asteroid 101955 Bennu (target of the OSIRIS-REx mission), and asteroid 433 Eros. A parallelized ray tracing algorithm to simulate realistic surface-to-surface shadowing of a given celestial body is developed. Using the simulated models and parameters given from the formulation of the different sensors, impact mission scenarios are used to verify the feasibility of intercepting a small target.
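The maximum-detection-range question for an infrared sensor can be illustrated with a simple point-source model in which flux at the aperture falls off as the inverse square of range. Atmospheric losses are ignored and all parameter names are illustrative, not taken from the paper:

```python
import math

def max_detection_range(intensity_w_per_sr, aperture_area_m2,
                        noise_equiv_flux_w, snr_threshold=5.0):
    """Maximum range at which a point target is detectable.

    Flux at range R: Phi = I * A / R^2 (W), where I is the target's
    radiant intensity (W/sr) and A the collecting aperture area (m^2).
    Detection requires Phi / NEF >= SNR_threshold, so solving for R:
        R_max = sqrt(I * A / (NEF * SNR_threshold))
    """
    return math.sqrt(intensity_w_per_sr * aperture_area_m2 /
                     (noise_equiv_flux_w * snr_threshold))
```

A real sensor study would add atmospheric transmission, in-band spectral integration of the target's emission, and detector-specific noise terms; this sketch captures only the 1/R² geometry.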
Differential Sources for 2 Neural Signatures of Target Detection: An Electrocorticography Study.
Kam, J W Y; Szczepanski, S M; Canolty, R T; Flinker, A; Auguste, K I; Crone, N E; Kirsch, H E; Kuperman, R A; Lin, J J; Parvizi, J; Knight, R T
2018-01-01
Electrophysiology and neuroimaging provide conflicting evidence for the neural contributions to target detection. Scalp electroencephalography (EEG) studies localize the P3b event-related potential component mainly to parietal cortex, whereas neuroimaging studies report activations in both frontal and parietal cortices. We addressed this discrepancy by examining the sources that generate the target-detection process using electrocorticography (ECoG). We recorded ECoG activity from cortex in 14 patients undergoing epilepsy monitoring, as they performed an auditory or visual target-detection task. We examined target-related responses in 2 domains: high frequency band (HFB) activity and the P3b. Across tasks, we observed a greater proportion of electrodes that showed target-specific HFB power relative to P3b over frontal cortex, but their proportions over parietal cortex were comparable. Notably, there was minimal overlap in the electrodes that showed target-specific HFB and P3b activity. These results revealed that the target-detection process is characterized by at least 2 different neural markers with distinct cortical distributions. Our findings suggest that separate neural mechanisms are driving the differential patterns of activity observed in scalp EEG and neuroimaging studies, with the P3b reflecting EEG findings and HFB activity reflecting neuroimaging findings, highlighting the notion that target detection is not a unitary phenomenon.
NASA Astrophysics Data System (ADS)
Bijl, Piet; Toet, Alexander; Kooi, Frank L.
2016-10-01
Visual images of a civilian target ship on a sea background were produced using a CAD model. The total set consisted of 264 images and included 3 different color schemes, 2 ship viewing aspects, 5 sun illumination conditions, 2 sea reflection values, 2 ship positions with respect to the horizon and 3 values of atmospheric contrast reduction. In a perception experiment, the images were presented on a display in a long darkened corridor. Observers were asked to indicate the range at which they were able to detect the ship and classify the following 5 ship elements: accommodation, funnel, hull, mast, and hat above the bridge. This resulted in a total of 1584 Target Acquisition (TA) range estimates for two observers. Next, the ship contour, ship elements and corresponding TA ranges were analyzed applying several feature size and contrast measures. Most data coincide on a contrast versus angular size plot using (1) the long axis as characteristic ship/ship feature size and (2) local Weber contrast as characteristic ship/ship feature contrast. Finally, the data were compared with a variety of visual performance functions assumed to be representative for Target Acquisition: the TOD (Triangle Orientation Discrimination), MRC (Minimum Resolvable Contrast), CTF (Contrast Threshold Function), TTP (Targeting Task Performance) metric and circular disc detection data for the unaided eye (Blackwell). The results provide strong evidence for the TOD case: both position and slope of the TOD curve match the ship detection and classification data without any free parameter. In contrast, the MRC and CTF are too steep, the TTP and disc detection curves are too shallow and all these curves need an overall scaling factor in order to coincide with the ship and ship feature recognition data.
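Local Weber contrast, the feature-contrast measure under which most of the data above coincide, has a simple definition. A minimal sketch; the mask-based averaging convention is an assumption for illustration, not the authors' exact procedure:

```python
import numpy as np

def weber_contrast(target_luminance, background_luminance):
    """Weber contrast: (L_target - L_background) / L_background."""
    return (target_luminance - background_luminance) / background_luminance

def feature_weber_contrast(image, feature_mask, background_mask):
    """Mean-luminance Weber contrast of a ship feature against its
    local sea/sky background, given boolean masks over the image."""
    l_feature = image[feature_mask].mean()
    l_background = image[background_mask].mean()
    return weber_contrast(l_feature, l_background)
```

Each ship element (accommodation, funnel, hull, mast) would get its own mask pair, with the long axis of the element serving as its characteristic size.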
Harris, Joseph A; Donohue, Sarah E; Schoenfeld, Mircea A; Hopf, Jens-Max; Heinze, Hans-Jochen; Woldorff, Marty G
2016-08-15
Reward-associated visual features have been shown to capture visual attention, evidenced in faster and more accurate behavioral performance, as well as in neural responses reflecting lateralized shifts of visual attention to those features. Specifically, the contralateral N2pc event-related-potential (ERP) component that reflects attentional shifting exhibits increased amplitude in response to task-relevant targets containing a reward-associated feature. In the present study, we examined the automaticity of such reward-association effects using object-substitution masking (OSM) in conjunction with MEG measures of visual attentional shifts. In OSM, a visual-search array is presented, with the target item to be detected indicated by a surrounding mask (here, four surrounding squares). Delaying the offset of the target-surrounding four-dot mask relative to the offset of the rest of the target/distracter array disrupts the viewer's awareness of the target (masked condition), whereas simultaneous offsets do not (unmasked condition). Here we manipulated whether the color of the OSM target was or was not of a previously reward-associated color. By tracking reward-associated enhancements of behavior and the N2pc in response to masked targets containing a previously rewarded or unrewarded feature, the automaticity of attentional capture by reward could be probed. We found an enhanced N2pc response to targets containing a previously reward-associated color feature. Moreover, this enhancement of the N2pc by reward did not differ between masking conditions, nor did it differ as a function of the apparent visibility of the target within the masked condition. Overall, these results underscore the automaticity of attentional capture by reward-associated features, and demonstrate the ability of feature-based reward associations to shape attentional capture and allocation outside of perceptual awareness. Copyright © 2016 Elsevier Inc. All rights reserved.
The attentional blink in amblyopia.
Popple, Ariella V; Levi, Dennis M
2008-10-31
Amblyopia is a disorder of visual acuity in one eye, thought to arise from suppression by the other eye during development of the visual cortex. In the attentional blink, the second of two targets (T2) in a Rapid Serial Visual Presentation (RSVP) stream is difficult to detect and identify when it appears shortly but not immediately after the first target (T1). We investigated the attentional blink seen through amblyopic eyes and found that it was less finely tuned in time than when the 12 amblyopic observers viewed the stimuli with their preferred eyes. T2 performance was slightly better through amblyopic eyes two frames after T1 but worse one frame after T1. Previously (A. V. Popple & D. M. Levi, 2007), we showed that when the targets were red letters in a stream of gray letters (or vice versa), normal observers frequently confused T2 with the letters before and after it (neighbor errors). Observers viewing through their amblyopic eyes made significantly fewer neighbor errors and more T2 responses consisting of letters that were never presented. In normal observers, T1 (on the rare occasions when it was reported incorrectly) was often confused with the letter immediately after it. Viewing through their amblyopic eyes, observers with amblyopia made more responses to the letter immediately before T1. These results suggest that childhood suppression of the input from amblyopic eyes disrupts attentive processing. We hypothesize reduced connectivity between monocularly tuned lower visual areas, subcortical structures that drive foveal attention, and more frontal regions of the brain responsible for letter recognition and working memory. Perhaps when viewing through their amblyopic eyes, the observers were still processing the letter identity of a prior distractor when the color flash associated with the target was detected. 
After T1, unfocused temporal attention may have bound together erroneously the features of succeeding letters, resulting in the appearance of letters that were not actually presented. These findings highlight the role of early (monocular) visual processes in modulating the attentional blink, as well as the role of attention in amblyopic visual deficits.
Kawashima, Tomoya; Matsumoto, Eriko
2016-03-23
Items in working memory guide visual attention toward a memory-matching object. Recent studies have shown that when searching for an object this attentional guidance can be modulated by knowing the probability that the target will match an item in working memory. Here, we recorded the P3 and contralateral delay activity to investigate how top-down knowledge controls the processing of working memory items. Participants performed a memory task (recognition only) and a memory-or-search task (recognition or visual search) in which they were asked to maintain two colored oriented bars in working memory. For visual search, we manipulated the probability that the target had the same color as the memorized items (0, 50, or 100%). Participants knew the probabilities before the task. Target detection in the 100% match condition was faster than in the 50% match condition, indicating that participants used their knowledge of the probabilities. We found that the P3 amplitude in the 100% condition was larger than in the other conditions and that contralateral delay activity amplitude did not vary across conditions. These results suggest that more attention was allocated to the memory items when observers knew in advance that their color would likely match a target. This led to better search performance despite using qualitatively equal working memory representations.
Rules infants look by: Testing the assumption of transitivity in visual salience.
Kibbe, Melissa M; Kaldy, Zsuzsa; Blaser, Erik
2018-01-01
What drives infants' attention in complex visual scenes? Early models of infant attention suggested that the degree to which different visual features were detectable determines their attentional priority. Here, we tested this by asking whether two targets - defined by different features, but each equally salient when evaluated independently - would drive attention equally when pitted head-to-head. In Experiment 1, we presented 6-month-old infants with an array of gabor patches in which a target region varied either in color or spatial frequency from the background. Using a forced-choice preferential-looking method, we measured how readily infants fixated the target as its featural difference from the background was parametrically increased. Then, in Experiment 2, we used these psychometric preference functions to choose values for color and spatial frequency targets that were equally salient (preferred), and pitted them against each other within the same display. We reasoned that, if salience is transitive, then the stimuli should be iso-salient and infants should therefore show no systematic preference for either stimulus. On the contrary, we found that infants consistently preferred the color-defined stimulus. This suggests that computing visual salience in more complex scenes needs to include factors above and beyond local salience values.
Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim
2017-05-01
In the current study, dual-task performance is examined with multiple-object tracking as a primary task and target-change detection as a secondary task. The to-be-detected target changes in conditions of either change type (form vs. motion; Experiment 1) or change salience (stop vs. slowdown; Experiment 2), with changes occurring at either near (5°-10°) or far (15°-20°) eccentricities (Experiments 1 and 2). The aim of the study was to test whether changes can be detected solely with peripheral vision. By controlling for saccades and computing gaze distances, we could show that participants used peripheral vision to monitor the targets and, additionally, to perceive changes at both near and far eccentricities. Notably, gaze behavior was not affected by the actual target change. Detection rates as well as response times generally varied as a function of change condition and eccentricity, with faster detections for motion changes and near changes. However, in contrast to the effects found for motion changes, sharp declines in detection rates and increased response times were observed for form changes as a function of the eccentricities. This result can be ascribed to properties of the visual system, namely the limited spatial acuity in the periphery and the comparatively well-preserved motion sensitivity of peripheral vision. These findings show that peripheral vision is functional for simultaneous target monitoring and target-change detection as saccadic information suppression can be avoided and covert attention can be optimally distributed to all targets. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Incidental orthographic learning during a color detection task.
Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R
2017-09-01
Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Nanometer-Sized Diamond Particle as a Probe for Biolabeling
Chao, Jui-I.; Perevedentseva, Elena; Chung, Pei-Hua; Liu, Kuang-Kai; Cheng, Chih-Yuan; Chang, Chia-Ching; Cheng, Chia-Liang
2007-01-01
A novel method is proposed using nanometer-sized diamond particles as detection probes for biolabeling. The advantages of nanodiamond's unique properties were demonstrated in its biocompatibility, nontoxicity, easily detected Raman signal, and intrinsic fluorescence from its natural defects without complicated pretreatments. Carboxylated nanodiamond's (cND's) penetration ability, noncytotoxicity, and visualization of cND-cell interactions are demonstrated on A549 human lung epithelial cells. Protein-targeted cell interaction visualization was demonstrated with cND-lysozyme complex interaction with bacteria Escherichia coli. It is shown that the developed biomolecule-cND complex preserves the original functions of the test protein. The easily detected natural fluorescent and Raman intrinsic signals, penetration ability, and low cytotoxicity of cNDs render them promising agents in multiple medical applications. PMID:17513352
Camouflage and visual perception
Troscianko, Tom; Benton, Christopher P.; Lovell, P. George; Tolhurst, David J.; Pizlo, Zygmunt
2008-01-01
How does an animal conceal itself from visual detection by other animals? This review paper seeks to identify general principles that may apply in this broad area. It considers mechanisms of visual encoding, of grouping and object encoding, and of search. In most cases, the evidence base comes from studies of humans or species whose vision approximates to that of humans. The effort is hampered by a relatively sparse literature on visual function in natural environments and with complex foraging tasks. However, some general constraints emerge as being potentially powerful principles in understanding concealment—a ‘constraint’ here means a set of simplifying assumptions. Strategies that disrupt the unambiguous encoding of discontinuities of intensity (edges), and of other key visual attributes, such as motion, are key here. Similar strategies may also defeat grouping and object-encoding mechanisms. Finally, the paper considers how we may understand the processes of search for complex targets in complex scenes. The aim is to provide a number of pointers towards issues, which may be of assistance in understanding camouflage and concealment, particularly with reference to how visual systems can detect the shape of complex, concealed objects. PMID:18990671
Huang, Liqiang
2015-05-01
Basic visual features (e.g., color, orientation) are assumed to be processed in the same general way across different visual tasks. Here, a significant deviation from this assumption was predicted on the basis of the analysis of stimulus spatial structure, as characterized by the Boolean-map notion. If a task requires memorizing the orientations of a set of bars, then the map consisting of those bars can be readily used to hold the overall structure in memory and will thus be especially useful. If the task requires visual search for a target, then the map, which contains only an overall structure, will be of little use. Supporting these predictions, the present study demonstrated that in comparison to stimulus colors, bar orientations were processed more efficiently in change-detection tasks but less efficiently in visual search tasks (Cohen's d = 4.24). In addition to offering support for the role of the Boolean map in conscious access, the present work also casts doubt on the generality of visual feature processing. © The Author(s) 2015.
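The effect size reported above is a standard Cohen's d; for reference, a pooled-standard-deviation implementation:

```python
import numpy as np

def cohens_d(group1, group2):
    """Cohen's d for two independent samples, using the pooled
    standard deviation (sample variances with ddof=1)."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = np.var(group1, ddof=1), np.var(group2, ddof=1)
    pooled_sd = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (np.mean(group1) - np.mean(group2)) / pooled_sd
```

A d of 4.24, as reported here, indicates group means separated by more than four pooled standard deviations, an unusually large effect.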
Establishing the behavioural limits for countershaded camouflage.
Penacchio, Olivier; Harris, Julie M; Lovell, P George
2017-10-20
Countershading is a ubiquitous patterning of animals whereby the side that typically faces the highest illumination is darker. When tuned to specific lighting conditions and body orientation with respect to the light field, countershading minimizes the gradient of light the body reflects by counterbalancing shadowing due to illumination, and has therefore classically been thought of as an adaptation for visual camouflage. However, whether and how crypsis degrades when body orientation with respect to the light field is non-optimal has never been studied. We tested the behavioural limits on body orientation for countershading to deliver effective visual camouflage. We asked human participants to detect a countershaded target in a simulated three-dimensional environment. The target was optimally coloured for crypsis in a reference orientation and was displayed at different orientations. Search performance dramatically improved for deviations beyond 15 degrees. Detection time was significantly shorter and accuracy significantly higher than when the target orientation matched the countershading pattern. This work demonstrates the importance of maintaining body orientation appropriate for the displayed camouflage pattern, suggesting a possible selective pressure for animals to orient themselves appropriately to enhance crypsis.
Baker, Daniel H; Meese, Tim S
2016-07-27
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50-100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
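The model structure described above (contrast signals squared and summed over area, perturbed by internal noise, with a MAX operation across density-selective templates) can be sketched as follows; the template bank, noise level, and function names are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def template_response(stimulus, template, noise_sd, rng):
    """Contrast energy (squared and summed) within one template's
    region, perturbed by additive Gaussian internal noise."""
    energy = np.sum((stimulus * template) ** 2)
    return energy + rng.normal(0.0, noise_sd)

def model_decision(stimulus, templates, noise_sd=1.0, seed=0):
    """MAX operation across a bank of noisy density-selective
    templates: the decision variable is the largest response."""
    rng = np.random.default_rng(seed)
    return max(template_response(stimulus, t, noise_sd, rng)
               for t in templates)
```

Psychometric functions are then obtained by simulating many trials and asking how often the target interval yields the larger decision variable; the MAX across noisy mechanisms is what steepens the predicted psychometric slope for sparse textures.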
How do visual and postural cues combine for self-tilt perception during slow pitch rotations?
Scotto Di Cesare, C; Buloup, F; Mestre, D R; Bringoux, L
2014-11-01
Self-orientation perception relies on the integration of multiple sensory inputs which convey spatially-related visual and postural cues. In the present study, an experimental set-up was used to tilt the body and/or the visual scene to investigate how these postural and visual cues are integrated for self-tilt perception (the subjective sensation of being tilted). Participants were required to repeatedly rate a confidence level for self-tilt perception during slow (0.05°/s) body and/or visual scene pitch tilts up to 19° relative to vertical. Concurrently, subjects also had to perform arm reaching movements toward a body-fixed target at certain specific angles of tilt. While performance of a concurrent motor task did not influence the main perceptual task, self-tilt detection did vary according to the visuo-postural stimuli. Slow forward or backward tilts of the visual scene alone did not induce a marked sensation of self-tilt contrary to actual body tilt. However, combined body and visual scene tilt influenced self-tilt perception more strongly, although this effect was dependent on the direction of visual scene tilt: only a forward visual scene tilt combined with a forward body tilt facilitated self-tilt detection. In such a case, visual scene tilt did not seem to induce vection but rather may have produced a deviation of the perceived orientation of the longitudinal body axis in the forward direction, which may have lowered the self-tilt detection threshold during actual forward body tilt. Copyright © 2014 Elsevier B.V. All rights reserved.
Effects of Part-based Similarity on Visual Search: The Frankenbear Experiment
Alexander, Robert G.; Zelinsky, Gregory J.
2012-01-01
Do the target-distractor and distractor-distractor similarity relationships known to exist for simple stimuli extend to real-world objects, and are these effects expressed in search guidance or target verification? Parts of photorealistic distractors were replaced with target parts to create four levels of target-distractor similarity under heterogenous and homogenous conditions. We found that increasing target-distractor similarity and decreasing distractor-distractor similarity impaired search guidance and target verification, but that target-distractor similarity and heterogeneity/homogeneity interacted only in measures of guidance; distractor homogeneity lessens effects of target-distractor similarity by causing gaze to fixate the target sooner, not by speeding target detection following its fixation. PMID:22227607
Time to learn: evidence for two types of attentional guidance in contextual cueing.
Ogawa, Hirokazu; Watanabe, Katsumi
2010-01-01
Repetition of the same spatial configurations of a search display implicitly facilitates performance of a visual-search task when the target location in the display is fixed. The improvement of performance is referred to as contextual cueing. We examined whether the association process between target location and surrounding configuration of distractors occurs during active search or at the instant the target is found. To dissociate these two processes, we changed the surrounding configuration of the distractors at the instant of target detection so that the layout where the participants had searched for the target and the layout presented at the instant of target detection differed. The results demonstrated that both processes are responsible for the contextual-cueing effect, but they differ in the accuracies of attentional guidance and their time courses, suggesting that two different types of attentional-guidance processes may be involved in contextual cueing.
Bader, Chris; Jesudoss Chelladurai, Jeba; Starling, David E; Jones, Douglas E; Brewer, Matthew T
2017-10-01
Control of parasitic infections may be achieved by eliminating developmental stages present within intermediate hosts, thereby disrupting the parasite life cycle. For several trematodes relevant to human and veterinary medicine, this involves targeting the metacercarial stage found in fish intermediate hosts. Treatment of fish with praziquantel is one potential approach for targeting the metacercaria stage. To date, studies investigating praziquantel-induced metacercarial death in fish rely on counting parasites and visually assessing morphology or movement. In this study, we investigate quantitative methods for detecting praziquantel-induced death using a Posthodiplostomum minimum model. Our results revealed that propidium iodide staining accurately identified praziquantel-induced death and the level of staining was proportional to the concentration of praziquantel. In contrast, detection of ATP, resazurin metabolism, and trypan blue staining were poor indicators of metacercarial death. The propidium iodide method offers an advantage over simple visualization of parasite movement and could be used to determine EC50 values relevant for comparison of praziquantel sensitivity or resistance. Copyright © 2017 Elsevier Inc. All rights reserved.
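Determining an EC50 from the staining data presupposes a sigmoidal concentration-response fit. A minimal sketch using a two-parameter Hill function; the model form, parameter names, and synthetic values are illustrative, not the authors' actual analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ec50, slope):
    """Fraction of maximal propidium-iodide signal at a given
    praziquantel concentration (two-parameter Hill function)."""
    return conc ** slope / (ec50 ** slope + conc ** slope)

def fit_ec50(concentrations, responses):
    """Fit EC50 and Hill slope to staining responses normalised
    to the 0-1 range."""
    popt, _ = curve_fit(hill, concentrations, responses,
                        p0=[np.median(concentrations), 1.0],
                        maxfev=10000)
    return popt  # (ec50, slope)
```

Fitted EC50 values from sensitive and putatively resistant isolates could then be compared directly, which is the use case the abstract points to.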
Cuadrado, Angeles; Golczyk, Hieronim; Jouve, Nicolás
2009-01-01
We report a new technique-nondenaturing FISH (ND-FISH)-for the rapid detection of plant telomeres without the need for prior denaturation of the chromosomes. In its development, two modified, synthetic oligonucleotides, 21 nt in length, fluorescently labelled at their 5' and 3' ends and complementary to either the cytidine-rich (C(3)TA(3)) or guanosine-rich (T(3)AG(3)) telomeric DNA strands, were used as probes. The high binding affinity of these probes and the short hybridization time required allows the visualization of plant telomeres in less than an hour. In tests, both probes gave strong signals visualized as double spots at both chromosome ends; this was true of both the mitotic and meiotic chromosomes of barley, wheat, rye, maize, Brachypodium distachyon and Rhoeo spathacea. They were also able to detect telomere motifs at certain intercalary sites in the chromosomes of R. spathacea. To investigate the nature of the target structures detected, the chromosomes were treated with RNase A and single strand-specific nuclease S1 before ND-FISH experiments. Signal formation was resistant to standard enzymatic treatment, but sensitive when much higher enzyme concentrations were used. The results are discussed in relation to current knowledge of telomere structure.
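Since the two 21-nt probes above are three tandem copies of the 7-nt plant telomere repeat, their strand specificity can be checked directly: the C-rich probe sequence is the reverse complement of the G-rich probe sequence. A small sketch (variable names are ours, for illustration):

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    """Reverse complement of a DNA sequence, 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

# 21-nt probes: three tandem copies of the 7-nt plant telomere repeat.
c_rich_probe = "CCCTAAA" * 3  # C(3)TA(3) repeat; hybridizes to the G-rich strand
g_rich_probe = "TTTAGGG" * 3  # T(3)AG(3) repeat; hybridizes to the C-rich strand
```

Because each probe is complementary to the opposite telomeric strand, both can report the same telomere, consistent with the matched double-spot signals the authors observed with either probe.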
Detection of Brain Reorganization in Pediatric Multiple Sclerosis Using Functional MRI
2015-10-01
accomplish this, we apply comparative assessments of fMRI mappings of language, memory, and motor function, and performance on clinical neurocognitive...community at a target rate of 13 volunteers per quarter period; acquire fMRI data for language, memory, and visual-motor functions (months 3-12). c...consensus fMRI activation maps for language, memory, and visual-motor tasks (months 8-12). f) Subtask 1f. Prepare publication to disseminate our
A Survey of Visualization Tools Assessed for Anomaly-Based Intrusion Detection Analysis
2014-04-01
objective? • What vulnerabilities exist in the target system? • What damage or other consequences are likely? • What exploit scripts or other attack...languages C, R, and Python; no response capabilities. JUNG https://blogs.reucon.com/asterisk-java/tag/visualization/ Create custom layouts and can...annotate graphs, links, nodes with any Java data type. Must be familiar with coding in Java to call the routines; no monitoring or response
Zsido, Andras N; Deak, Anita; Losonci, Adrienn; Stecina, Diana; Arato, Akos; Bernath, Laszlo
2018-04-01
Numerous objects and animals could be threatening, and thus, children learn to avoid them early. Spiders and syringes are among the most common targets of fears and phobias of the modern world. However, they are of different origins: while the former is evolutionarily relevant, the latter is not. We sought to investigate the underlying mechanisms that make the quick detection of such stimuli possible and enable the impulse to avoid them in the future. The respective categories of threatening and non-threatening targets were similar in shape, while low-level visual features were controlled. Our results showed that children found threatening cues faster, irrespective of the evolutionary age of the cues. However, they detected non-threatening evolutionary targets faster than non-evolutionary ones. We suggest that the underlying mechanism may be different: general feature detection can account for finding evolutionarily threatening cues quickly, while specific feature detection is more appropriate for modern threatening stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Yeh, M; Wickens, C D; Seagull, F J
1999-12-01
Two experiments were performed to examine how frame of reference (world-referenced vs. screen-referenced) and target expectancy can modulate the effects of target cuing in directing attention for see-through helmet-mounted displays (HMDs). In the first experiment, the degree of world referencing was varied by the spatial accuracy of the cue; in the second, the degree of world referencing was varied more radically between a world-referenced HMD and a hand-held display. Participants were asked to detect, identify, and give azimuth information for targets hidden in terrain presented in the far domain (i.e., the world) while performing a monitoring task in the near domain (i.e., the display). The results of both experiments revealed a cost-benefit trade-off for cuing such that the presence of cuing aided the target detection task for expected targets but drew attention away from the presence of unexpected targets in the environment. Analyses support the observation that this effect can be mediated by the display: The world-referenced display reduced the cost of cognitive tunneling relative to the screen-referenced display in Experiment 1; this cost was further reduced in Experiment 2 when participants were using a hand-held display. Potential applications of this research include important design guidelines and specifications for automated target recognition systems as well as any terrain-and-targeting display system in which superimposed symbology is included, specifically in assessing the costs and benefits of attentional cuing and the means by which this information is displayed.
Ohyama, Junji; Watanabe, Katsumi
2016-01-01
We examined how the temporal and spatial predictability of a task-irrelevant visual event affects the detection and memory of a visual item embedded in a continuously changing sequence. Participants observed 11 sequentially presented letters, during which a task-irrelevant visual event was either present or absent. Predictabilities of spatial location and temporal position of the event were controlled in 2 × 2 conditions. In the spatially predictable conditions, the event occurred at the same location within the stimulus sequence or at another location, while, in the spatially unpredictable conditions, it occurred at random locations. In the temporally predictable conditions, the event timing was fixed relative to the order of the letters, while in the temporally unpredictable condition, it could not be predicted from the letter order. Participants performed a working memory task and a target detection reaction time (RT) task. Memory accuracy was higher for a letter simultaneously presented at the same location as the event in the temporally unpredictable conditions, irrespective of the spatial predictability of the event. On the other hand, the detection RTs were only faster for a letter simultaneously presented at the same location as the event when the event was both temporally and spatially predictable. Thus, to facilitate ongoing detection processes, an event must be predictable both in space and time, while memory processes are enhanced by temporally unpredictable (i.e., surprising) events. Evidently, temporal predictability has differential effects on detection and memory of a visual item embedded in a sequence of images. PMID:26869966
McIntyre, Scott E; Gugerty, Leo
2014-06-01
This field experiment takes a novel approach in applying methodologies and theories of visual search to the subject of conspicuity in automobile rear lighting. Traditional rear lighting research has not used the visual search paradigm in experimental design. It is our claim that the visual search design uniquely uncovers visual attention processes, operating as drivers search the visual field, that current designs fail to capture. This experiment is a validation and extension of previous simulator research on this same topic and demonstrates that detection of red automobile brake lamps will be improved if tail lamps are another color (in this test, amber) rather than the currently mandated red. Results indicate that when drivers miss brake lamp onset in low ambient light, RT and error are reduced in detecting the presence and absence of red brake lamps with multiple lead vehicles when tail lamps are not red compared to current rear lighting, which mandates red tail lamps. This performance improvement is attributed to efficient visual processing that automatically segregates tail (amber) and brake (red) lamp colors into distractors and targets, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
Target detection in GPR data using joint low-rank and sparsity constraints
NASA Astrophysics Data System (ADS)
Bouzerdoum, Abdesselam; Tivive, Fok Hing Chi; Abeynayake, Canicious
2016-05-01
In ground penetrating radars, background clutter, which comprises the signals backscattered from the rough, uneven ground surface and the background noise, impairs the visualization of buried objects and subsurface inspections. In this paper, a clutter mitigation method is proposed for target detection. The removal of background clutter is formulated as a constrained optimization problem to obtain a low-rank matrix and a sparse matrix. The low-rank matrix captures the ground surface reflections and the background noise, whereas the sparse matrix contains the target reflections. An optimization method based on the split-Bregman algorithm is developed to estimate these two matrices from the input GPR data. Evaluated on real radar data, the proposed method achieves promising results in removing the background clutter and enhancing the target signature.
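The abstract does not give the solver details, but the underlying low-rank-plus-sparse split is the familiar robust-PCA decomposition. A minimal alternating-thresholding sketch (a simplified stand-in for the paper's split-Bregman solver; function names and default parameters are illustrative assumptions) might look like this:

```python
import numpy as np

def svt(M, tau):
    # Singular value thresholding: shrink all singular values by tau
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    # Elementwise soft thresholding (promotes sparsity)
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def decompose(D, lam=None, mu=None, n_iter=100):
    """Split a GPR B-scan D into a low-rank part L (ground bounce +
    background) and a sparse part S (target reflections) by simple
    alternating thresholding. lam/mu defaults are ad hoc choices."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))
    if mu is None:
        mu = 0.25 * np.abs(D).mean()
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        L = svt(D - S, mu)           # low-rank clutter estimate
        S = soft(D - L, lam * mu)    # sparse target estimate
    return L, S
```

On a synthetic B-scan with a strong rank-1 background and a few point reflections, `S` picks up the spikes while `L` absorbs the smooth clutter; a proper split-Bregman or augmented-Lagrangian solver would converge faster and more reliably.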
Radar study of seabirds and bats on windward Hawai'i
Reynolds, M.H.; Cooper, B.A.; Day, Robert H.
1997-01-01
Modified marine surveillance radar was used to study the presence/absence, abundance, and flight activity of four nocturnal species: Hawaiian dark-rumped petrel [Pterodroma phaeopygia sandwichensis (Ridgeway)], Newell's shearwater [Puffinus auricularis newelli (Henshaw)], Band-rumped storm-petrel [Oceanodroma castro (Harcourt)], and Hawaiian hoary bat (Lasiurus cinereus semotus Sanborn & Crespo). Hawaiian seabirds were recorded flying to or from inland nesting colonies at seven sampling sites on the windward side of the island of Hawai'i. In total, 527 radar "targets" identified as petrel or shearwater-type on the basis of speed, flight behavior, and radar signal strength were observed during eight nights of sampling. Mean movement rates (targets per minute) for seabird targets were 0.1, 0.1, 0.3, 3.8, 0.9, and 2.2 for surveys at Kahakai, Kapoho, Mauna Loa, Pali Uli, Pu'ulena Crater, and Waipi'o Valley, respectively. Two percent of the petrel and shearwater-type targets detected on radar were confirmed visually or aurally. Flight paths for seabird targets showed strong directionality at six sampling sites. Mean flight speed for seabird targets (n = 524) was 61 km/hr for all survey areas. Peak detection times for seabirds were from 0430 to 0530 hours for birds flying to sea and 2000 to 2150 hours for birds returning to colonies. Most inland, low-elevation sampling sites could not be surveyed reliably for seabirds during the evening activity periods because of radar interference from insects and rapidly flying bats. At those inland sites predawn sampling was the best time for using radar to detect Hawaiian seabirds moving seaward. Hawaiian hoary bats were recorded at eight sampling sites. Eighty-six to 89 radar targets that exhibited erratic flight behavior were identified as "batlike" targets; 17% of these batlike radar targets were confirmed visually. Band-rumped storm-petrels were not identified during our surveys.
The footprints of visual attention in the Posner cueing paradigm revealed by classification images
NASA Technical Reports Server (NTRS)
Eckstein, Miguel P.; Shimozaki, Steven S.; Abbey, Craig K.
2002-01-01
In the Posner cueing paradigm, observers' performance in detecting a target is typically better in trials in which the target is present at the cued location than in trials in which the target appears at the uncued location. This effect can be explained in terms of a Bayesian observer where visual attention simply weights the information differently at the cued (attended) and uncued (unattended) locations without a change in the quality of processing at each location. Alternatively, it could also be explained in terms of visual attention changing the shape of the perceptual filter at the cued location. In this study, we use the classification image technique to compare the human perceptual filters at the cued and uncued locations in a contrast discrimination task. We did not find statistically significant differences between the shapes of the inferred perceptual filters across the two locations, nor did the observed differences account for the measured cueing effects in human observers. Instead, we found a difference in the magnitude of the classification images, supporting the idea that visual attention changes the weighting of information at the cued and uncued location, but does not change the quality of processing at each individual location.
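The classification-image technique the abstract relies on is easy to sketch: the per-trial external noise is averaged within the four stimulus-by-response categories and combined so the signal cancels, leaving an estimate of the observer's perceptual filter. A minimal simulation (all names, sizes, and parameters are illustrative, not taken from the study):

```python
import numpy as np

def classification_image(noise_fields, stimulus_present, observer_said_yes):
    """Estimate a perceptual filter from trial-by-trial noise.
    noise_fields: (n_trials, h, w) noise added on each trial.
    Standard combination: (mean noise on 'yes' trials) minus
    (mean noise on 'no' trials), within each stimulus condition."""
    ci = np.zeros(noise_fields.shape[1:])
    for s in (True, False):
        for r, sign in ((True, 1.0), (False, -1.0)):
            mask = (stimulus_present == s) & (observer_said_yes == r)
            if mask.any():
                ci += sign * noise_fields[mask].mean(axis=0)
    return ci / 2.0  # average over the two stimulus conditions
```

Simulating a linear observer with a Gaussian template recovers a classification image that correlates strongly with the template, which is exactly the logic the study uses to compare filters at cued and uncued locations.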
Event-related potentials during visual selective attention in children of alcoholics.
van der Stelt, O; Gunning, W B; Snel, J; Kok, A
1998-12-01
Event-related potentials were recorded from 7- to 18-year-old children of alcoholics (COAs, n = 50) and age- and sex-matched control children (n = 50) while they performed a visual selective attention task. The task was to attend selectively to stimuli with a specified color (red or blue) in an attempt to detect the occurrence of target stimuli. COAs manifested a smaller P3b amplitude to attended-target stimuli over the parietal and occipital scalp than did the controls. A more specific analysis indicated that both the attentional relevance and the target properties of the eliciting stimulus determined the observed P3b amplitude differences between COAs and controls. In contrast, no significant group differences were observed in attention-related earlier occurring event-related potential components, referred to as frontal selection positivity, selection negativity, and N2b. These results represent neurophysiological evidence that COAs suffer from deficits at a late (semantic) level of visual selective information processing that are unlikely a consequence of deficits at earlier (sensory) levels of selective processing. The findings support the notion that a reduced visual P3b amplitude in COAs represents a high-level processing dysfunction indicating their increased vulnerability to alcoholism.
Spatial decoupling of targets and flashing stimuli for visual brain-computer interfaces
NASA Astrophysics Data System (ADS)
Waytowich, Nicholas R.; Krusienski, Dean J.
2015-06-01
Objective. Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. Approach. For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. Main results. Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. Significance. 
The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
Fast cat-eye effect target recognition based on saliency extraction
NASA Astrophysics Data System (ADS)
Li, Li; Ren, Jianlin; Wang, Xingbin
2015-09-01
Background complexity is a main reason that results in false detection in cat-eye target recognition. Human vision has selective attention property which can help search the salient target from complex unknown scenes quickly and precisely. In the paper, we propose a novel cat-eye effect target recognition method named Multi-channel Saliency Processing before Fusion (MSPF). This method combines traditional cat-eye target recognition with the selective characters of visual attention. Furthermore, parallel processing enables it to achieve fast recognition. Experimental results show that the proposed method performs better in accuracy, robustness and speed compared to other methods.
NASA Astrophysics Data System (ADS)
Kang, Ziho
This dissertation is divided into four parts: 1) Development of effective methods for comparing visual scanning paths (or scanpaths) for a dynamic task of multiple moving targets, 2) application of the methods to compare the scanpaths of experts and novices for a conflict detection task of multiple aircraft on radar screen, 3) a post-hoc analysis of other eye movement characteristics of experts and novices, and 4) finding out whether the scanpaths of experts can be used to teach the novices. In order to compare experts' and novices' scanpaths, two methods are developed. The first proposed method is matrix comparison using the Mantel test. The second proposed method is maximum transition-based agglomerative hierarchical clustering (MTAHC), in which comparisons of multi-level visual groupings are carried out. The matrix comparison method was useful for a small number of targets during the preliminary experiment, but turned out to be inapplicable to a realistic case when tens of aircraft were presented on screen; MTAHC, however, remained effective with a large number of aircraft on screen. The experiments with experts and novices on the aircraft conflict detection task showed that their scanpaths are different. The MTAHC result was able to explicitly show how experts visually grouped multiple aircraft based on similar altitudes while novices tended to group them based on convergence. Also, the MTAHC results showed that novices paid much attention to the converging aircraft groups even when they were safely separated by altitude; therefore, less attention was given to the actual conflicting pairs, resulting in low correct conflict detection rates. Since the analysis showed the scanpath differences, experts' scanpaths were shown to novices in order to assess their effectiveness. The scanpath treatment group showed indications that they changed their visual movements from trajectory-based to altitude-based movements.
Between the treatment and the non-treatment group, there were no significant differences in terms of number of correct detections; however, the treatment group made significantly fewer false alarms.
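The Mantel test used here for scanpath matrix comparison is a standard permutation test on paired distance matrices: correlate the off-diagonal entries, then re-correlate under random joint row/column relabelings to get a null distribution. A compact generic sketch (not the dissertation's code; parameter defaults are assumptions):

```python
import numpy as np

def mantel(A, B, n_perm=999, rng=None):
    """Permutation-based Mantel test for association between two
    square, symmetric distance matrices (e.g., scanpath transition
    distances). Returns the observed Pearson r and a two-sided
    permutation p-value."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    iu = np.triu_indices(n, k=1)          # off-diagonal upper triangle
    a = A[iu]
    r_obs = np.corrcoef(a, B[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)            # relabel rows and columns together
        r_perm = np.corrcoef(a, B[p][:, p][iu])[0, 1]
        if abs(r_perm) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

Two scanpaths whose pairwise-distance structure agrees yield r near 1 and a small p-value; the permutation scheme is what makes the test valid despite the non-independence of distance-matrix entries.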
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Wang, Kun; Fan, Daoqing; Liu, Yaqing; Wang, Erkang
2015-11-15
Simple, rapid, sensitive and specific detection of cancer cells is of great importance for early and accurate cancer diagnostics and therapy. By coupling nanotechnology and dual-aptamer target binding strategies, we developed a colorimetric assay for visually detecting cancer cells with high sensitivity and specificity. The nanotechnology, including the high catalytic activity of PtAuNP and magnetic separation & concentration, plays a vital role in signal amplification and improvement of detection sensitivity. The color change caused by a small amount of target cancer cells (10 cells/mL) can be clearly distinguished by naked eyes. The dual-aptamer target binding strategy guarantees the detection specificity: large amounts of non-cancer cells and different cancer cells (10^4 cells/mL) cannot cause obvious color change. A detection limit as low as 10 cells/mL with a detection linear range from 10 to 10^5 cells/mL was reached according to the experimental detections in phosphate buffer solution as well as serum sample. The developed enzyme-free and cost-effective colorimetric assay is simple and requires no instrumentation while still providing excellent sensitivity, specificity and repeatability, having potential application in point-of-care cancer diagnosis. Copyright © 2015 Elsevier B.V. All rights reserved.
Buchholz, Judy; Aimola Davies, Anne
2005-02-01
Performance on a covert visual attention task is compared between a group of adults with developmental dyslexia (specifically phonological difficulties) and a group of age and IQ matched controls. The group with dyslexia were generally slower to detect validly-cued targets. Costs of shifting attention toward the periphery when the target was invalidly cued were significantly higher for the group with dyslexia, while costs associated with shifts toward the fovea tended to be lower. Higher costs were also shown by the group with dyslexia for up-down shifts of attention in the periphery. A visual field processing difference was found, in that the group with dyslexia showed higher costs associated with shifting attention between objects in the LVF. These findings indicate that these adults with dyslexia have difficulty in both the space-based and the object-based components of covert visual attention, and more specifically with stimuli located in the periphery.
Bulf, Hermann; Macchi Cassia, Viola; de Hevia, Maria Dolores
2014-01-01
A number of studies have shown strong relations between numbers and oriented spatial codes. For example, perceiving numbers causes spatial shifts of attention depending upon numbers' magnitude, in a way suggestive of a spatially oriented, mental representation of numbers. Here, we investigated whether this phenomenon extends to non-symbolic numbers, as well as to the processing of the continuous dimensions of size and brightness, exploring whether different quantitative dimensions are equally mapped onto space. After a numerical (symbolic Arabic digits or non-symbolic arrays of dots; Experiment 1) or a non-numerical cue (shapes of different size or brightness level; Experiment 2) was presented, participants' saccadic response to a target that could appear either on the left or the right side of the screen was registered using an automated eye-tracker system. Experiment 1 showed that, both in the case of Arabic digits and dot arrays, right targets were detected faster when preceded by large numbers, and left targets were detected faster when preceded by small numbers. Participants in Experiment 2 were faster at detecting right targets when cued by large-sized shapes and left targets when cued by small-sized shapes, whereas brightness cues did not modulate the detection of peripheral targets. These findings indicate that looking at a symbolic or a non-symbolic number induces attentional shifts to a peripheral region of space that is congruent with the numbers' relative position on a mental number line, and that a similar shift in visual attention is induced by looking at shapes of different size. More specifically, results suggest that, while the dimensions of number and size spontaneously map onto an oriented space, the dimension of brightness seems to be independent at a certain level of magnitude elaboration from the dimensions of spatial extent and number, indicating that not all continuous dimensions are equally mapped onto space.
Associative and repetition priming with the repeated masked prime technique: no priming found.
Avons, S E; Russo, Riccardo; Cinel, Caterina; Verolini, Veronica; Glynn, Kevin; McDonald, Rebecca; Cameron, Marie
2009-01-01
Wentura and Frings (2005) reported evidence of subliminal categorical priming on a lexical decision task, using a new method of visual masking in which the prime string consisted of the prime word flanked by random consonants, and random letter masks alternated with the prime string on successive refresh cycles. We investigated associative and repetition priming on lexical decision, using the same method of visual masking. Three experiments failed to show any evidence of associative priming: (1) when the prime string was fixed at 10 characters (three to six flanking letters) and (2) when the flanking letters were reduced in number or absent. In all cases, prime detection was at chance level. Strong associative priming was observed with visible unmasked primes, but the addition of flanking letters restricted priming even though prime detection was still high. With repetition priming, no priming effects were found with the repeated masked technique, and prime detection was poor but just above chance levels. We conclude that with repeated masked primes, there is effective visual masking but that associative priming and repetition priming do not occur with experiment-unique prime-target pairs. Explanations for this apparent discrepancy across priming paradigms are discussed. The priming stimuli and prime-target pairs used in this study may be downloaded as supplemental materials from mc.psychonomic-journals.org/content/supplemental.
Wang, Yi; Li, Hui; Wang, Yan; Zhang, Lu; Xu, Jianguo; Ye, Changyun
2017-01-01
The report describes a simple, rapid and sensitive assay for visual and multiplex detection of Enterococcus faecalis and Staphylococcus aureus based on multiple loop-mediated isothermal amplification (mLAMP) and lateral flow biosensor (LFB). Detection and differentiation of the Ef0027 gene (E. faecalis-specific gene) and nuc gene (S. aureus-specific gene) were determined using fluorescein (FITC)-and digoxin-modified primers in the mLAMP process. In the presence of biotin- and FITC-/digoxin-modified primers, the mLAMP yielded numerous biotin- and FITC-/digoxin-attached duplex products, which were detected by LFB through biotin/streptavidin interaction (biotin on the duplex and streptavidin on the gold nanoparticle) and immunoreactions (FITC/digoxin on the duplex and anti-FITC/digoxin on the LFB test line). The accumulation of gold nanoparticles generated a characteristic red line, enabling visual and multiplex detection of target pathogens without instrumentation. The limit of detection (LoD), analytical specificity and feasibility of LAMP-LFB technique were successfully examined in pure culture and blood samples. The entire procedure, including specimen (blood samples) processing (30 min), isothermal reaction (40 min) and result reporting (within 2 min), could be completed within 75 min. Thus, this assay offers a simple, rapid, sensitive and specific test for multiplex detection of E. faecalis and S. aureus strains. Furthermore, the LAMP-LFB strategy is a universal technique, which can be extended to detect various target sequences by re-designing the specific LAMP primers. PMID:28239371
Depuydt, Christophe E; Arbyn, Marc; Benoy, Ina H; Vandepitte, Johan; Vereecken, Annie J; Bogers, Johannes J
2009-01-01
The objective of this prospective study was to compare the number of CIN2+ cases detected in negative cytology by different quality control (QC) methods. Full rescreening, high-risk (HR) human papillomavirus (HPV)-targeted reviewing and HR HPV detection were compared. Randomly selected negative cytology detected by BD FocalPoint™ (NFR), by guided screening of the prescreened which needed further review (GS) and by manual screening (MS) was used. A 3-year follow-up period was available. Full rescreening of cytology only detected 23.5% of CIN2+ cases, whereas the cytological rescreening of oncogenic positive slides (high-risk HPV-targeted reviewing) detected 7 of 17 CIN2+ cases (41.2%). Quantitative real-time PCR for 15 oncogenic HPV types detected all CIN2+ cases. Relative sensitivity to detect histological CIN2+ was 0.24 for full rescreening, 0.41 for HR-targeted reviewing and 1.00 for HR HPV detection. In more than half of the reviewed negative cytological preparations associated with histological CIN2+ cases no morphologically abnormal cells were detected despite a positive HPV test. The visual cut-off for the detection of abnormal cytology was established at 6.5 HR HPV copies/cell. High-risk HPV detection has a higher yield for detection of CIN2+ cases as compared to manual screening followed by 5% full review, or compared to targeted reviewing of smears positive for oncogenic HPV types, and shows diagnostic properties that support its use as a QC procedure in cytologic laboratories. PMID:18544049
Müller, Matthias M; Andersen, Søren K; Hindi Attar, Catherine
2011-11-02
A central controversy in the field of attention is how the brain deals with emotional distractors and to what extent they capture attentional processing resources reflexively due to their inherent significance for guidance of adaptive behavior and survival. Especially, the time course of competitive interactions in early visual areas and whether masking of briefly presented emotional stimuli can inhibit biasing of processing resources in these areas is currently unknown. We recorded frequency-tagged potentials evoked by a flickering target detection task in the foreground of briefly presented emotional or neutral pictures that were followed by a mask in human subjects. We observed greater competition for processing resources in early visual cortical areas with shortly presented emotional relative to neutral pictures ~275 ms after picture offset. This was paralleled by a reduction of target detection rates in trials with emotional pictures ~400 ms after picture offset. Our finding that briefly presented emotional distractors are able to bias attention well after their offset provides evidence for a rather slow feedback or reentrant neural competition mechanism for emotional distractors that continues after the offset of the emotional stimulus.
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third, a reading intervention (control group), targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.
Early access to abstract representations in developing readers: evidence from masked priming.
Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel
2013-07-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing - as measured by masked priming - in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktzb-ktAb]; note that the three initial letters are connected in prime and target) than for those that do not ([ktxb-ktAb]). Results showed that the magnitude of priming effect relative to an unrelated condition was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to the abstract letter representations not only in adult readers but also in developing readers. © 2013 Blackwell Publishing Ltd.
Detection, prevention, and rehabilitation of amblyopia.
Spiritus, M
1997-10-01
The necessity of visual preschool screening for reducing the prevalence of amblyopia is widely accepted. The beneficial results of large-scale screening programs conducted in Scandinavia are reported. Screening monocular visual acuity at 3.5 to 4 years of age appears to be an excellent basis for detecting and treating amblyopia and an acceptable compromise between the pitfalls encountered in screening younger children and the cost-to-benefit ratio. In this respect, several preschoolers' visual acuity charts have been evaluated. Small-target random stereotests and binocular suppression tests have also recently been developed with the aim of correcting the many false negatives (anisometropic amblyopia or bilateral high ametropia) induced by the usual stereotests. Longitudinal studies demonstrate that correction of high refractive errors decreases the risk of amblyopia and does not impede emmetropization. The validity of various photoscreening and videoscreening procedures for detecting refractive errors in infants prior to the onset of strabismus or amblyopia, as well as alternatives to conventional occlusion therapy, is discussed.
Interactive Tools for Measuring Visual Scanning Performance and Reaction Time.
Brooks, Johnell; Seeanner, Julia; Hennessy, Sarah; Manganelli, Joseph; Crisler, Matthew; Rosopa, Patrick; Jenkins, Casey; Anderson, Michael; Drouin, Nathalie; Belle, Leah; Truesdail, Constance; Tanner, Stephanie
Occupational therapists are constantly searching for engaging, high-technology interactive tasks that provide immediate feedback to evaluate and train clients with visual scanning deficits. This study examined the relationship between two tools: the VISION COACH™ interactive light board and the Functional Object Detection © (FOD) Advanced driving simulator scenario. Fifty-four healthy drivers, ages 21-66 yr, were divided into three age groups. Participants performed braking response and visual target (E) detection tasks of the FOD Advanced driving scenario, followed by two sets of three trials using the VISION COACH Full Field 60 task. Results showed no significant effect of age on FOD Advanced performance but a significant effect of age on VISION COACH performance. Correlations showed that participants' performance on both braking and E detection tasks were significantly positively correlated with performance on the VISION COACH (.37 < r < .40, p < .01). These tools provide new options for therapists. Copyright © 2017 by the American Occupational Therapy Association, Inc.
Attention to baseline: does orienting visuospatial attention really facilitate target detection?
Albares, Marion; Criaud, Marion; Wardak, Claire; Nguyen, Song Chi Trung; Ben Hamed, Suliann; Boulinguez, Philippe
2011-08-01
Standard protocols testing the orienting of visuospatial attention usually present spatial cues before targets and compare valid-cue trials with invalid-cue trials. The valid/invalid contrast yields a relative behavioral or physiological difference that is generally interpreted as a benefit of attention orienting. However, growing evidence suggests that inhibitory control of response is closely involved in this kind of protocol, which requires subjects to withhold automatic responses to cues, probably biasing behavioral and physiological baselines. Here, we used two experiments to disentangle the inhibitory control of automatic responses from the orienting of visuospatial attention in a saccadic reaction time task in humans: a variant of the classical cue-target detection task and a sustained visuospatial attention task. Surprisingly, when referring to a simple target detection task in which there is no need to refrain from reacting to avoid inappropriate responses, we found no consistent evidence of facilitation of target detection at the attended location. Instead, we observed a cost at the unattended location. Departing from the classical view, our results suggest that reaction time measures of visuospatial attention probably rely on the attenuation of elementary processes involved in visual target detection and saccade initiation away from the attended location, rather than on facilitation at the attended location. This highlights the need for proper control conditions in experimental designs to disambiguate relative from absolute cueing benefits on target detection reaction times, in both psychophysical and neurophysiological studies.
Wei, Xiaofeng; Tian, Tian; Jia, Shasha; Zhu, Zhi; Ma, Yanli; Sun, Jianjun; Lin, Zhenyu; Yang, Chaoyong James
2015-04-21
A versatile point-of-care assay platform was developed for simultaneous detection of multiple targets based on a microfluidic paper-based analytic device (μPAD) using a target-responsive hydrogel to mediate fluidic flow and signal readout. An aptamer-cross-linked hydrogel was used as a target-responsive flow regulator in the μPAD. In the absence of a target, the hydrogel is formed in the flow channel, stopping the flow in the μPAD and preventing the colored indicator from traveling to the final observation spot, thus yielding a "signal off" readout. In contrast, in the presence of a target, no hydrogel is formed because of the preferential interaction of target and aptamer. This allows free fluidic flow in the μPAD, carrying the indicator to the observation spot and producing a "signal on" readout. The device is inexpensive to fabricate, easy to use, and disposable after detection. Testing results can be obtained within 6 min by the naked eye via a simple loading operation without the need for any auxiliary equipment. Multiple targets, including cocaine, adenosine, and Pb(2+), can be detected simultaneously, even in complex biological matrices such as urine. The reported method offers simple, low cost, rapid, user-friendly, point-of-care testing, which will be useful in many applications.
Adaptability and specificity of inhibition processes in distractor-induced blindness.
Winther, Gesche N; Niedeggen, Michael
2017-12-01
In a rapid serial visual presentation task, inhibition processes cumulatively impair processing of a target possessing distractor properties. This phenomenon-known as distractor-induced blindness-has thus far only been elicited using dynamic visual features, such as motion and orientation changes. In three ERP experiments, we used a visual object feature-color-to test for the adaptability and specificity of the effect. In Experiment I, participants responded to a color change (target) in the periphery whose onset was signaled by a central cue. Presentation of irrelevant color changes prior to the cue (distractors) led to reduced target detection, accompanied by a frontal ERP negativity that increased with increasing number of distractors, similar to the effects previously found for dynamic targets. This suggests that distractor-induced blindness is adaptable to color features. In Experiment II, the target consisted of coherent motion contrasting the color distractors. Correlates of distractor-induced blindness were found neither in the behavioral nor in the ERP data, indicating a feature specificity of the process. Experiment III confirmed the strict distinction between congruent and incongruent distractors: A single color distractor was embedded in a stream of motion distractors with the target consisting of a coherent motion. While behavioral performance was affected by the distractors, the color distractor did not elicit a frontal negativity. The experiments show that distractor-induced blindness is also triggered by visual stimuli predominantly processed in the ventral stream. The strict specificity of the central inhibition process also applies to these stimulus features. © 2017 Society for Psychophysiological Research.
Distributed Attention Is Implemented through Theta-Rhythmic Gamma Modulation.
Landau, Ayelet Nina; Schreyer, Helene Marianne; van Pelt, Stan; Fries, Pascal
2015-08-31
When subjects monitor a single location, visual target detection depends on the pre-target phase of an ∼8 Hz brain rhythm. When multiple locations are monitored, performance decrements suggest a division of the 8 Hz rhythm over the number of locations, indicating that different locations are sequentially sampled. Indeed, when subjects monitor two locations, performance benefits alternate at a 4 Hz rhythm. These performance alternations were revealed after a reset of attention to one location. Although resets are common and important events for attention, it is unknown whether, in the absence of resets, ongoing attention samples stimuli in alternation. Here, we examined whether spatially specific attentional sampling can be revealed by ongoing pre-target brain rhythms. Visually induced gamma-band activity plays a role in spatial attention. Therefore, we hypothesized that performance on two simultaneously monitored stimuli can be predicted by a 4 Hz modulation of gamma-band activity. Brain rhythms were assessed with magnetoencephalography (MEG) while subjects monitored bilateral grating stimuli for a unilateral target event. The corresponding contralateral gamma-band responses were subtracted from each other to isolate spatially selective, target-related fluctuations. The resulting lateralized gamma-band activity (LGA) showed opposite pre-target 4 Hz phases for detected versus missed targets. The 4 Hz phase of pre-target LGA accounted for a 14.5% modulation in performance. These findings suggest that spatial attention is a theta-rhythmic sampling process that is continuously ongoing, with each sampling cycle being implemented through gamma-band synchrony. Copyright © 2015 Elsevier Ltd. All rights reserved.
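The dependence of detection on pre-target 4 Hz phase can be illustrated with a toy phase estimator. This is our illustration, not the authors' MEG pipeline; the function name, window length, and reference frequency are invented. The sketch projects a one-second pre-target window of a lateralized-gamma time course onto a complex exponential at 4 Hz and reads off the phase at target onset:

```python
import numpy as np

def pretarget_theta_phase(lga, fs, t_target, f=4.0, win=1.0):
    """Estimate the phase of the ~4 Hz theta modulation of lateralized
    gamma-band activity (LGA) at target onset by projecting a pre-target
    window onto a complex exponential at f Hz."""
    n1 = int(round(t_target * fs))
    n0 = max(0, n1 - int(round(win * fs)))
    t = np.arange(n0, n1) / fs
    seg = lga[n0:n1] - np.mean(lga[n0:n1])
    coef = np.sum(seg * np.exp(-2j * np.pi * f * t))   # Fourier coefficient at f
    # Reference the phase to the moment of target onset.
    return np.angle(coef * np.exp(2j * np.pi * f * t_target))

# Toy check: a noiseless 4 Hz modulation with known phase phi0 is recovered.
fs, t_target, phi0 = 200, 1.5, 1.0
t = np.arange(0, 2.0, 1 / fs)
lga = np.cos(2 * np.pi * 4.0 * t + phi0)
est = pretarget_theta_phase(lga, fs, t_target)
```

Sorting trials into detected vs. missed by such a pre-target phase estimate, and comparing the two phase distributions, is the kind of analysis the reported 14.5% performance modulation refers to.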
'Where' and 'what' in visual search.
Atkinson, J; Braddick, O J
1989-01-01
A line segment target can be detected among distractors of a different orientation by a fast 'preattentive' process. One view is that this depends on detection of a 'feature gradient', which enables subjects to locate where the target is without necessarily identifying what it is. An alternative view is that a target can be identified as distinctive in a particular 'feature map' without subjects knowing where it is in that map. Experiments are reported in which briefly exposed arrays of line segments were followed by a pattern mask, and the threshold stimulus-mask interval was determined for three tasks: 'what', in which subjects reported whether the target was vertical or horizontal among oblique distractors; 'coarse where', in which subjects reported whether the target was in the upper or lower half of the array; and 'fine where', in which subjects reported whether or not the target was in a set of four particular array positions. The threshold interval was significantly lower for the 'coarse where' than for the 'what' task, indicating that, even though localization in this task depends on the target's orientation difference, this localization is possible without absolute identification of target orientation. However, for the 'fine where' task, intervals as long as or longer than those for the 'what' task were required. It appears either that different localization processes work at different levels of resolution, or that a single localization process, independent of identification, can increase its resolution at the expense of processing speed. These possibilities are discussed in terms of distinct neural representations of the visual field and fixed or variable localization processes acting upon them.
Parietal substrates for dimensional effects in visual search: evidence from lesion-symptom mapping
Humphreys, Glyn W.; Chechlacz, Magdalena
2013-01-01
In visual search, the detection of pop-out targets is facilitated when the target-defining dimension remains the same compared with when it changes across trials. We tested the brain regions necessary for these dimensional carry-over effects using a voxel-based morphometry study with brain-lesioned patients. Participants had to search for targets defined by either their colour (red or blue) or orientation (right- or left-tilted), and the target dimension either stayed the same or changed on consecutive trials. Twenty-five patients were categorized according to whether they showed an effect of dimensional change on search or not. The two groups did not differ with regard to their performance on several working memory tasks, and the dimensional carry-over effects were not correlated with working memory performance. With spatial, sustained attention and working memory deficits as well as lesion volume controlled, damage within the right inferior parietal lobule (the angular and supramarginal gyri) extending into the intraparietal sulcus was associated with an absence of dimensional carry-over (P < 0.001, cluster-level corrected for multiple comparisons). The data suggest that these regions of parietal cortex are necessary to implement attention shifting in the context of visual dimensional change. PMID:23404335
Hay, Julia L; Milders, Maarten M; Sahraie, Arash; Niedeggen, Michael
2006-08-01
Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target motion discrimination was significantly impaired, a result attributed to the carry-over of distractor inhibition. Increasing the difficulty of cue detection increased the motion target impairment, as distractor inhibition is thought to increase under demanding (high load) conditions in order to maximize selection efficiency. The apparent conflict with studies reporting reduced distractor inhibition under high load conditions was resolved by distinguishing between the effects of "cognitive" and "perceptual" load. ((c) 2006 APA, all rights reserved).
Wang, Tao; Zheng, Nanning; Xin, Jingmin; Ma, Zheng
2011-01-01
This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on visual attention mechanism and driver’s visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better comprehensive performance. Then an experimental method for radar-vision point alignment for easy operation with no reflection intensity of radar and special tool requirements is put forward. Furthermore, a region searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundary of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. In the end, the experimental results show that the proposed method is simple and feasible. PMID:22164117
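The adaptive thresholding idea above (dark under-vehicle shadows against the road surface) can be sketched in a few lines. This is our reconstruction under simplifying assumptions, not the authors' algorithm; the road-sampling region, the Gaussian road-intensity model, and the factor k are invented for illustration:

```python
import numpy as np

def shadow_threshold(gray, road_rows, k=3.0):
    """Adaptive threshold for under-vehicle shadow detection (sketch).

    gray      : 2-D grayscale image
    road_rows : slice of rows assumed to contain clear road surface
    k         : how many standard deviations below the road mean a pixel
                must fall to count as shadow (illustrative value)
    """
    road = gray[road_rows, :].astype(float)
    thr = road.mean() - k * road.std()          # adaptive, per-frame threshold
    return thr, gray < thr                      # shadow-candidate mask

# Toy frame: bright road (~200) with a dark under-vehicle shadow patch (~40).
rng = np.random.default_rng(1)
frame = np.full((60, 80), 200, dtype=np.uint8)
frame[20:25, 30:50] = 40                        # shadow region
frame = frame + rng.integers(0, 10, frame.shape).astype(np.uint8)
thr, mask = shadow_threshold(frame, slice(45, 60))
```

In the paper, regions flagged this way are further bounded with edge detection and cross-checked against radar returns; here the mask alone illustrates the thresholding step.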
Low-resolution ship detection from high-altitude aerial images
NASA Astrophysics Data System (ADS)
Qi, Shengxiang; Wu, Jianmin; Zhou, Qing; Kang, Minyang
2018-02-01
Ship detection from optical images taken by high-altitude aircraft such as unmanned long-endurance airships and unmanned aerial vehicles has broad applications in marine fishery management, ship monitoring, and vessel salvage. The major challenge, however, is the limited information-processing capability of unmanned high-altitude platforms. Furthermore, to guarantee a wide detection range, unmanned aircraft generally cruise at high altitudes, resulting in imagery with low-resolution targets and strong clutter from heavy clouds. In this paper, we propose a low-resolution ship detection method to extract ships from such high-altitude optical images. Inspired by recent research on visual saliency detection indicating that small salient signals can be well detected by a gradient enhancement operation combined with Gaussian smoothing, we propose facet kernel filtering to rapidly suppress cluttered backgrounds and delineate candidate target regions on the sea surface. Principal component analysis (PCA) is then used to compute the orientation of the target axis, followed by a simplified histogram of oriented gradients (HOG) descriptor to characterize the ship's shape. Finally, a support vector machine (SVM) is applied to discriminate real targets from false alarms. Experimental results show that the proposed method is highly efficient for low-resolution ship detection.
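The pipeline in this abstract lends itself to a compact sketch. The block below is a hedged approximation: a Gaussian-smoothing-plus-Laplacian stage stands in for the paper's facet kernel filtering, the orientation step follows the PCA-on-pixel-coordinates idea, and the HOG/SVM classification stage is only noted, not implemented; all function names are ours.

```python
import numpy as np

def conv2(img, k):
    """Minimal 'same' 2-D convolution with zero padding."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def candidate_ships(img, thresh=4.0):
    """Clutter suppression: Gaussian smoothing followed by a Laplacian-style
    gradient enhancement highlights small bright targets against sea clutter
    (our stand-in for the paper's facet kernel filtering)."""
    gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    smooth = conv2(img.astype(float), gauss)
    response = conv2(smooth, lap)
    return response > thresh

def ship_orientation(mask):
    """Principal axis of a candidate region via PCA on its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    evals, evecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
    major = evecs[:, np.argmax(evals)]           # eigenvector of largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0]))

# Toy sea image with one elongated bright "ship" along the x axis.
sea = np.zeros((40, 60))
sea[20, 10:30] = 10.0                            # 1-pixel-high, 20-pixel-long target
mask = candidate_ships(sea)
angle = ship_orientation(mask)
```

In the full method, a simplified HOG descriptor computed along the recovered axis would feed an SVM to reject false alarms.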
Blind jealousy? Romantic insecurity increases emotion-induced failures of visual perception.
Most, Steven B; Laurenceau, Jean-Philippe; Graber, Elana; Belcher, Amber; Smith, C Veronica
2010-04-01
Does the influence of close relationships pervade so deeply as to impact visual awareness? Results from two experiments involving heterosexual romantic couples suggest that it does. Female partners from each couple performed a rapid detection task in which negative emotional distractors typically disrupt visual awareness of subsequent targets; at the same time, their male partners rated the attractiveness first of landscapes, then of photos of other women. At the end of both experiments, the degree to which female partners indicated uneasiness about their male partner looking at and rating other women correlated significantly with the degree to which negative emotional distractors had disrupted their target perception during that time. This relationship was robust even when controlling for individual differences in baseline performance. Thus, emotions elicited by social contexts appear to wield power even at the level of perceptual processing. Copyright 2010 APA, all rights reserved.
Spatial attention does not require preattentive grouping.
Vecera, S P; Behrmann, M
1997-01-01
Does spatial attention follow a full preattentive analysis of the visual field, or can attention select from ungrouped regions of the visual field? We addressed this question by testing an apperceptive agnosic patient, J. W., in tasks involving both spatial selection and preattentive grouping. Results suggest that J. W. had intact spatial attention: he was faster to detect targets appearing at a cued location than targets appearing at uncued locations. However, his preattentive processes were severely disrupted. Gestalt grouping and symmetry perception, both thought to involve preattentive processes, were impaired in J. W. Also, he could not use Gestalt grouping cues to guide spatial attention. These results suggest that spatial attention is not completely dependent on preattentive grouping processes. We argue that preattentive grouping processes and spatial attention may mutually constrain one another in guiding the attentional selection of visual stimuli, but that these two processes are isolated from one another.
Behavior and neural basis of near-optimal visual search
Ma, Wei Ji; Navalpakkam, Vidhya; Beck, Jeffrey M; van den Berg, Ronald; Pouget, Alexandre
2013-01-01
The ability to search efficiently for a target in a cluttered environment is one of the most remarkable functions of the nervous system. This task is difficult under natural circumstances, as the reliability of sensory information can vary greatly across space and time and is typically a priori unknown to the observer. In contrast, visual-search experiments commonly use stimuli of equal and known reliability. In a target detection task, we randomly assigned high or low reliability to each item on a trial-by-trial basis. An optimal observer would weight the observations by their trial-to-trial reliability and combine them using a specific nonlinear integration rule. We found that humans were near-optimal, regardless of whether distractors were homogeneous or heterogeneous and whether reliability was manipulated through contrast or shape. We present a neural-network implementation of near-optimal visual search based on probabilistic population coding. The network matched human performance. PMID:21552276
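The "specific nonlinear integration rule" can be made concrete for one standard generative model. The sketch below is illustrative, not the authors' code: it assumes Gaussian noise with known per-item sigma, a known target value, homogeneous distractors, and at most one target, in which case the ideal observer averages reliability-weighted local likelihood ratios and reports "present" when the average exceeds 1:

```python
import numpy as np

def optimal_target_present(x, sigma, s_T=1.0, s_D=0.0):
    """Ideal-observer 'target present?' decision (illustrative sketch).

    Assumes x[i] ~ N(s_i, sigma[i]^2), target value s_T, homogeneous
    distractors at s_D, and at most one target among the items.  Each
    item's local likelihood ratio d_i weights the evidence by its
    reliability 1/sigma[i]^2; the global ratio is the mean of the d_i.
    """
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    d = np.exp(((x - s_D) ** 2 - (x - s_T) ** 2) / (2 * sigma ** 2))
    return d.mean() > 1.0

# Simulation: the rule exploits trial-to-trial reliability information.
rng = np.random.default_rng(2)
n_items, n_trials = 4, 2000
hits = fas = 0
for trial in range(n_trials):
    sigma = rng.choice([0.5, 2.0], size=n_items)    # mixed reliabilities
    present = trial % 2 == 0
    s = np.zeros(n_items)
    if present:
        s[rng.integers(n_items)] = 1.0              # one target location
    x = s + sigma * rng.standard_normal(n_items)
    resp = optimal_target_present(x, sigma)
    hits += int(present and resp)
    fas += int((not present) and resp)
hit_rate = hits / (n_trials / 2)
fa_rate = fas / (n_trials / 2)
```

Because d_i divides the evidence by sigma[i]^2, measurements from low-reliability items are automatically down-weighted, which is the behavior the human observers approximated.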
Perceptual learning effect on decision and confidence thresholds.
Solovey, Guillermo; Shalom, Diego; Pérez-Schuster, Verónica; Sigman, Mariano
2016-10-01
Practice can enhance perceptual sensitivity, a well-known phenomenon called perceptual learning. However, the effect of practice on subjective perception has received little attention. We approach this problem from a visual psychophysics and computational modeling perspective. In a sequence of visual search experiments, subjects significantly increased their ability to detect a "trained target". Before and after training, subjects performed two psychophysical protocols that parametrically vary the visibility of the trained target: an attentional blink task and a visual masking task. We found that confidence increased after learning only in the attentional blink task. Despite large differences in some observables and task settings, we identify common mechanisms for decision-making and confidence. Specifically, our behavioral results and computational model suggest that perceptual ability is independent of processing time, indicating that changes in early cortical representations are effective, and that learning changes decision criteria to convey choice and confidence. Copyright © 2016 Elsevier Inc. All rights reserved.
Liu, Jun; Kang, Huaizhi; Donovan, Michael; Zhu, Zhi
2017-01-01
Hydrogels are water-retainable materials, made from cross-linked polymers, that can be tailored to applications in bioanalysis and biomedicine. As technology advances, an increasing number of molecules have been used as the components of hydrogel systems. However, the shortcomings of these systems have prompted researchers to find new materials that can be incorporated into them. Among all of these emerging materials, aptamers have recently attracted substantial attention because of their unique properties, for example biocompatibility, selective binding, and molecular recognition, all of which make them promising candidates for target-responsive hydrogel engineering. In this work, we will review how aptamers have been incorporated into hydrogel systems to enable colorimetric detection, controlled drug release, and targeted cancer therapy. PMID:22052153
Audiovisual Rehabilitation in Hemianopia: A Model-Based Theoretical Investigation
Magosso, Elisa; Cuppini, Cristiano; Bertini, Caterina
2017-01-01
Hemianopic patients exhibit improved visual detection in the blind field when audiovisual stimuli are given in spatiotemporal coincidence. Beyond this “online” multisensory improvement, there is evidence of long-lasting, “offline” effects induced by audiovisual training: patients show improved visual detection and orientation after being trained to detect and saccade toward visual targets given in spatiotemporal proximity with auditory stimuli. These effects are ascribed to the Superior Colliculus (SC), which is spared in these patients and plays a pivotal role in audiovisual integration and oculomotor behavior. Recently, we developed a neural network model of audiovisual cortico-collicular loops, including interconnected areas representing the retina, striate and extrastriate visual cortices, auditory cortex, and SC. The network simulated a unilateral V1 lesion with possible spared tissue and reproduced the “online” effects. Here, we extend the previous network to shed light on the circuits, plastic mechanisms, and synaptic reorganization that can mediate the training effects and functionally implement visual rehabilitation. The network is enriched by the oculomotor SC-brainstem route and Hebbian mechanisms of synaptic plasticity, and is used to test different training paradigms (audiovisual/visual stimulation in eye-movements/fixed-eyes conditions) on simulated patients. Results predict different training effects and associate them with synaptic changes in specific circuits. Thanks to SC multisensory enhancement, audiovisual training can effectively strengthen the retina-SC route, which in turn can foster reinforcement of the SC-brainstem route (this occurs only in the eye-movements condition) and of the SC-extrastriate route (this occurs in the presence of surviving V1 tissue, regardless of eye condition). 
The retina-SC-brainstem circuit may mediate compensatory effects: the model assumes that reinforcement of this circuit can translate visual stimuli into short-latency saccades, possibly moving the stimuli into visual detection regions. The retina-SC-extrastriate circuit is related to restitutive effects: visual stimuli can directly elicit visual detection with no need for eye movements. Model predictions and assumptions are critically discussed in view of existing behavioral and neurophysiological data, forecasting that other oculomotor compensatory mechanisms, beyond short-latency saccades, are likely involved, and stimulating future experimental and theoretical investigations. PMID:29326578
Target vessel detection by epicardial ultrasound in off-pump coronary bypass surgery.
Hayakawa, Masato; Asai, Tohru; Kinoshita, Takeshi; Suzuki, Tomoaki; Shiraishi, Shoichiro
2013-01-01
The detection of embedded coronary arteries is difficult, especially in off-pump coronary bypass surgery. Beginning in June 2010, we introduced high-frequency epicardial ultrasound (ECUS) to detect and evaluate embedded arteries during off-pump coronary bypass surgery. Between June 2010 and June 2011, a total of 89 consecutive patients underwent isolated coronary bypass surgery at our institution. The patients comprised 72 men and 17 women with a mean age of 67.9 years. We routinely use the VeriQC system (MediStim, Oslo, Norway) to detect the target vessels during the operation. The patients were assigned to one of two groups, depending on whether ECUS was used in the operation (n = 10, ECUS group) or not (n = 79, non-ECUS group). We analyzed the impact of introducing ECUS on operative outcome. All patients underwent revascularization using the off-pump technique without emergency conversion to cardiopulmonary bypass during surgery. The total number of distal anastomoses was 299, and 12 target vessels could not be identified either visually or on palpation. Thus, the frequency of embedded coronary arteries was 4.01% (12/299). The preoperative profiles of the two groups were not significantly different. Operation time was significantly longer in the ECUS group (P = 0.02). There were no significant differences in postoperative outcome between the two groups. In the present study, in which the target coronary arteries could not be detected either visually or on palpation in 12 (4.01%) of 299 cases, the use of high-frequency ECUS allowed all patients to undergo off-pump coronary bypass surgery without conversion to cardiopulmonary bypass. High-frequency ECUS is therefore useful in off-pump coronary bypass surgery.
Bottom-up guidance in visual search for conjunctions.
Proulx, Michael J
2007-02-01
Understanding the relative role of top-down and bottom-up guidance is crucial for models of visual search. Previous studies have addressed the role of top-down and bottom-up processes in search for a conjunction of features but with inconsistent results. Here, the author used an attentional capture method to address the role of top-down and bottom-up processes in conjunction search. The role of bottom-up processing was assayed by inclusion of an irrelevant-size singleton in a search for a conjunction of color and orientation. One object was uniquely larger on each trial, with chance probability of coinciding with the target; thus, the irrelevant feature of size was not predictive of the target's location. Participants searched more efficiently for the target when it was also the size singleton, and they searched less efficiently for the target when a nontarget was the size singleton. Although a conjunction target cannot be detected on the basis of bottom-up processing alone, participants used search strategies that relied significantly on bottom-up guidance in finding the target, resulting in interference from the irrelevant-size singleton.
Maeda, Hiroshi; Kokeguchi, Susumu; Fujimoto, Chiyo; Tanimoto, Ichiro; Yoshizumi, Wakako; Nishimura, Fusanori; Takashiba, Shogo
2005-02-01
A nucleic acid amplification method, loop-mediated isothermal amplification (LAMP), was employed to develop a rapid and simple detection system for the periodontal pathogen Porphyromonas gingivalis. A set of six primers was designed targeting the 16S ribosomal RNA gene. With this system, target DNA was amplified and visualized on an agarose gel within 30 min under isothermal conditions at 64°C, with a detection limit of 20 cells of P. gingivalis. Without gel electrophoresis, the LAMP amplicon was directly visualized in the reaction tube by addition of SYBR Green I for naked-eye inspection. The LAMP reaction was also assessed by the white turbidity of magnesium pyrophosphate (a by-product of LAMP) in the tube. The detection limits of these naked-eye inspections were 20 cells and 200 cells, respectively. Although false-positive DNA amplification was observed from more than 10^7 cells of Porphyromonas endodontalis, no amplification was observed for five other related oral pathogens. Furthermore, quantitative detection of P. gingivalis was accomplished by real-time monitoring of the LAMP reaction using SYBR Green I, with linearity over a range of 10^2 to 10^6 cells. The real-time LAMP assay was then applied to clinical samples of dental plaque and gave results almost identical to those of conventional real-time PCR, with the advantage of rapidity. These findings indicate the potential usefulness of LAMP for detecting and quantifying P. gingivalis, particularly given its rapidity and simplicity.
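Real-time quantification of this kind typically rests on a log-linear standard curve: the threshold time at which the fluorescence signal crosses a fixed level falls linearly with log10 of the starting cell number. The numbers below are invented for illustration (not data from the study); the block shows the fit-and-invert procedure:

```python
import numpy as np

# Hypothetical standard-curve data (invented for illustration): threshold
# time at which the SYBR Green I signal crosses a fixed level, for known
# P. gingivalis cell counts.
cells = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
t_threshold = np.array([28.0, 24.5, 21.0, 17.5, 14.0])   # minutes

# Log-linear calibration: t_threshold = a * log10(cells) + b.
a, b = np.polyfit(np.log10(cells), t_threshold, 1)

def quantify(t_obs):
    """Estimate the starting cell count from an observed threshold time."""
    return 10 ** ((t_obs - b) / a)
```

An unknown plaque sample would be quantified by reading its threshold time off the real-time trace and inverting the fitted line, exactly as real-time PCR uses Ct values.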
Improved detection probability of low level light and infrared image fusion system
NASA Astrophysics Data System (ADS)
Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang
2018-02-01
Low-level-light (LLL) images contain rich information on environmental detail but are easily degraded by weather: in smoke, rain, cloud, or fog, much target information is lost. Infrared images, formed from the radiation emitted by objects themselves, can "actively" capture target information in the scene; however, their contrast and resolution are poor, they acquire little target detail, and the imaging mode does not match human visual habits. Fusing LLL and infrared images compensates for the deficiencies of each sensor and exploits the advantages of both. We first present the hardware design of the fusion circuit. Then, by calculating recognition probabilities for a target (one person) and the background (trees), we find that the detection probability for trees is higher in the LLL image than in the infrared image, while the detection probability for the person is clearly higher in the infrared image than in the LLL image. For both the person and the trees, the detection probability of the fused image is higher than that of either single detector. Image fusion can therefore significantly increase recognition probability and improve detection efficiency.
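The fusion described above is implemented in hardware, but the underlying idea can be illustrated with the simplest software analogue: a pixel-wise weighted average of the two co-registered frames. The weighting scheme and pixel values below are assumptions for illustration, not the paper's circuit design:

```python
# Minimal sketch: fuse a low-level-light (LLL) frame and an infrared (IR)
# frame by a pixel-wise weighted average. Frames are equally sized grids of
# 8-bit grayscale values; all numbers here are invented for illustration.
def fuse(lll, ir, w_lll=0.5):
    """Return the weighted average of two grayscale frames (0-255)."""
    return [
        [int(w_lll * a + (1 - w_lll) * b) for a, b in zip(row_l, row_i)]
        for row_l, row_i in zip(lll, ir)
    ]

lll = [[200, 180], [160, 140]]   # scene detail, partly washed out by fog
ir  = [[ 40, 220], [230,  60]]   # a warm person stands out against cool trees
fused = fuse(lll, ir)
print(fused)  # each pixel now carries information from both sensors
```

Practical fusion systems use more sophisticated schemes (e.g. multiscale decomposition), but even this average preserves both the LLL background structure and the IR hot-target contrast in one image.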
NASA Astrophysics Data System (ADS)
Schmidt, Hannes; Seki, David; Woebken, Dagmar; Eickhorst, Thilo
2017-04-01
Fluorescence in situ hybridization (FISH) is routinely used for the phylogenetic identification, detection, and quantification of single microbial cells in environmental microbiology. Oligonucleotide probes that match the 16S rRNA sequence of target organisms are generally applied, and the resulting signals are visualized via fluorescence microscopy. Consequently, the detection of the microbial cells of interest is limited by the resolution and sensitivity of light microscopy, where objects smaller than 0.2 µm can hardly be resolved. Visualizing microbial cells at magnifications beyond light microscopy, however, can provide information on the composition and potential complexity of microbial habitats - the actual sites of nutrient cycling in soil and sediments. We present a recently developed technique that combines (1) the phylogenetic identification and detection of individual microorganisms by epifluorescence microscopy with (2) the in situ localization of gold-labelled target cells on an ultrastructural level by scanning electron microscopy (SEM). Based on 16S rRNA-targeted in situ hybridization combined with catalyzed reporter deposition, a streptavidin conjugate labeled with a fluorescent dye and nanogold particles is introduced into whole microbial cells. A two-step visualization process including an autometallographic enhancement of the nanogold particles then allows for either fluorescence or electron microscopy, or a correlative application thereof. We will present applications of the Gold-FISH protocol to samples of marine sediments, agricultural soils, and plant roots. The detection and enumeration of bacterial cells in soil and sediment samples was comparable to CARD-FISH applications via fluorescence microscopy. Examples of microbe-surface interaction analysis will be presented on the basis of bacteria colonizing the rhizoplane of rice roots.
In principle, Gold-FISH can be performed on any material to give a snapshot of microbe-surface interactions and provides a promising tool for the acquisition of correlative information on microorganisms within their respective habitats.
The effect of increased monitoring load on vigilance performance using a simulated radar display.
DOT National Transportation Integrated Search
1977-07-01
The present study examined the extent to which level of target density influences the ability to sustain attention to a complex monitoring task requiring only a detection response to simple stimulus change. The visual display was designed to approxim...
Comparative psychophysics of bumblebee and honeybee colour discrimination and object detection.
Dyer, Adrian G; Spaethe, Johannes; Prack, Sabina
2008-07-01
Bumblebee (Bombus terrestris) discrimination of targets with broadband reflectance spectra was tested using simultaneous viewing conditions, enabling an accurate determination of the perceptual limit of colour discrimination excluding confounds from memory coding (experiment 1). The level of colour discrimination in bumblebees, and honeybees (Apis mellifera) (based upon previous observations), exceeds predictions of models considering receptor noise in the honeybee. Bumblebee and honeybee photoreceptors are similar in spectral shape and spacing, but bumblebees exhibit significantly poorer colour discrimination in behavioural tests, suggesting possible differences in spatial or temporal signal processing. Detection of stimuli in a Y-maze was evaluated for bumblebees (experiment 2) and honeybees (experiment 3). Honeybees detected stimuli containing both green-receptor-contrast and colour contrast at a visual angle of approximately 5 degrees, whilst stimuli that contained only colour contrast were only detected at a visual angle of 15 degrees. Bumblebees were able to detect these stimuli at a visual angle of 2.3 degrees and 2.7 degrees, respectively. A comparison of the experiments suggests a tradeoff between colour discrimination and colour detection in these two species, limited by the need to pool colour signals to overcome receptor noise. We discuss the colour processing differences and possible adaptations to specific ecological habitats.
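Detection limits here are expressed as visual angles, which tie stimulus size and viewing distance together via theta = 2*atan(s / (2d)). A short sketch of the standard conversion in both directions; the example sizes and distances are illustrative, not the stimuli used in the bee experiments:

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Angle (degrees) subtended by a stimulus of size_cm at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle(angle_deg, distance_cm):
    """Stimulus size (cm) needed to subtend angle_deg at distance_cm."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# e.g. how large must a disc be to subtend 5 degrees at a 55 cm decision
# distance? (Both numbers are hypothetical.)
print(round(size_for_angle(5.0, 55.0), 2))
```

The same stimulus subtends a smaller angle as the insect's decision distance grows, which is why detection limits are reported in degrees rather than centimetres.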
Temporal Dynamics of Visual Attention Measured with Event-Related Potentials
Kashiwase, Yoshiyuki; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi
2013-01-01
How attentional modulation of brain activity determines behavioral performance has been one of the most important issues in cognitive neuroscience. This issue has been addressed by comparing the temporal relationship between attentional modulations of neural activity and behavior. Our previous study measured the time course of attention using the amplitude and phase coherence of the steady-state visual evoked potential (SSVEP) and found that the modulation latency of phase coherence, rather than that of amplitude, was consistent with the latency of behavioral performance. In this study, as a complementary report, we compared the time course of visual attention shifts measured by event-related potentials (ERPs) with that measured by a target detection task. We developed a novel technique to compare ERPs with behavioral results and analyzed the EEG data from our previous study. Two flickering stimuli at different frequencies were presented in the left and right visual hemifields, and a target or distracter pattern was presented randomly at various moments after an attention-cue presentation. The observers were asked to detect targets on the attended stimulus after the cue. We found that two ERP components, P300 and N2pc, were elicited by the target presented at the attended location. Time-course analyses revealed that attentional modulation of the P300 and N2pc amplitudes increased gradually until reaching a maximum and lasted at least 1.5 s after the cue onset, similar to the temporal dynamics of behavioral performance. However, attentional modulation of these ERP components started later than that of behavioral performance. Rather, the time course of attentional modulation of behavioral performance was more closely associated with that of the concurrently recorded SSVEPs. These results suggest that neural activities reflected not by the P300 or N2pc but by the SSVEPs are the source of attentional modulation of behavioral performance. PMID:23976966
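An SSVEP amplitude at the flicker (tagging) frequency is typically extracted by projecting the EEG segment onto sine and cosine at that frequency (a single-bin Fourier measure). The sketch below, run on a synthetic signal, is a generic illustration of this analysis idea only, not the authors' pipeline; sampling rate, frequency, and amplitude are invented:

```python
import math

def ssvep_bin(signal, fs, f_tag):
    """Single-bin Fourier projection: return (amplitude, phase) of `signal`
    (a list of samples at rate fs, in Hz) at the tagging frequency f_tag."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f_tag * k / fs)
             for k, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f_tag * k / fs)
             for k, x in enumerate(signal))
    return 2 * math.hypot(re, im) / n, math.atan2(im, re)

# Synthetic 12 Hz "SSVEP" of amplitude 3, sampled at 600 Hz for 1 s
fs, f = 600, 12
sig = [3.0 * math.sin(2 * math.pi * f * k / fs) for k in range(fs)]
amp, _ = ssvep_bin(sig, fs, f)
print(round(amp, 2))  # recovers the 3.0 amplitude
```

Tracking such amplitudes (or the trial-to-trial consistency of the phase term) in sliding windows after the cue is one common way to obtain the attention time courses the abstract refers to.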
Color vision but not visual attention is altered in migraine.
Shepherd, Alex J
2006-04-01
To examine visual search performance in migraine and headache-free control groups and to determine whether reports of selective color vision deficits in migraine occur preattentively. Visual search is a classic technique to measure certain components of visual attention. The technique can be manipulated to measure both preattentive (automatic) and attentive processes. Here, visual search for colored targets was employed to extend earlier reports that the detection or discrimination of colors selective for the short-wavelength sensitive cone photoreceptors in the retina (S or "blue" cones) is impaired in migraine. Visual search performance for small and large color differences was measured in 34 migraine and 34 control participants. Small and large color differences were included to assess attentive and preattentive processing, respectively. In separate conditions, colored stimuli were chosen that would be detected selectively by either the S-, or by the long- (L or "red") and middle (M or "green")-wavelength sensitive cone photoreceptors. The results showed no preattentive differences between the migraine and control groups. For active, or attentive, search, differences between the migraine and control groups occurred for colors detected by the S-cones only, there were no differences for colors detected by the L- and M-cones. The migraine group responded significantly more slowly than the control group for the S-cone colors. The pattern of results indicates that there are no overall differences in search performance between migraine and control groups. The differences found for the S-cone colors are attributed to impaired discrimination of these colors in migraine and not to differences in attention.
Psychophysical and perceptual performance in a simulated-scotoma model of human eye injury
NASA Astrophysics Data System (ADS)
Brandeis, R.; Egoz, I.; Peri, D.; Sapiens, N.; Turetz, J.
2008-02-01
Macular scotomas, which affect visual functioning, characterize many eye and neurological diseases such as AMD, diabetes mellitus, multiple sclerosis, and macular hole. In this work, foveal visual field defects were modeled and their effects evaluated on spatial contrast sensitivity and on a task of stimulus detection and aiming. The modeled occluding scotomas, of different sizes, were superimposed on the stimuli presented on the computer display and were stabilized on the retina using a mono Purkinje eye tracker. Spatial contrast sensitivity was evaluated using square-wave grating stimuli, whose contrast thresholds were measured using the method of constant stimuli with "catch trials". The detection task consisted of a triple-conjunction visual search display of size (in visual angle), contrast, and background (simple, low-level features vs. complex, high-level features). Search/aiming accuracy as well as reaction time (RT) measures were used for performance evaluation. Artificially generated scotomas suppressed spatial contrast sensitivity in a size-dependent manner, similar to previous studies. The deprivation effect depended on spatial frequency, consistent with retinal inhomogeneity models. Stimulus detection was slowed more in the complex-background search condition than in the simple-background condition. Detection speed depended on scotoma size and stimulus size. In contrast, visually guided aiming was more sensitive to the scotoma effect in the simple-background than in the complex-background search condition. Both stimulus-aiming RT and accuracy (precision targeting) were impaired as a function of scotoma size and stimulus size. The data can be explained by models distinguishing between saliency-based, parallel, and serial search processes guiding visual attention, which are supported by underlying retinal as well as neural mechanisms.
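Under the method of constant stimuli, a fixed set of contrasts is presented many times each and the threshold is read off the proportion-seen curve at some criterion. A minimal sketch using linear interpolation at the 50% point; the contrast levels and response counts below are invented for illustration, not data from the study:

```python
# Threshold estimation for the method of constant stimuli: interpolate the
# proportion-seen curve at a criterion level. A full analysis would fit a
# psychometric function (e.g. Weibull or logistic) instead; this linear
# interpolation is the simplest workable version.
def threshold(levels, seen, total, criterion=0.5):
    """levels: tested contrasts (ascending); seen: 'seen' counts per level."""
    props = [s / total for s in seen]
    pts = list(zip(levels, props))
    for (c0, p0), (c1, p1) in zip(pts, pts[1:]):
        if p0 <= criterion <= p1:
            # linear interpolation between the two bracketing contrasts
            return c0 + (criterion - p0) * (c1 - c0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the tested contrasts")

contrasts = [0.01, 0.02, 0.04, 0.08, 0.16]   # grating contrast levels
seen      = [2, 5, 12, 18, 20]               # "seen" responses out of 20
print(round(threshold(contrasts, seen, 20), 3))
```

Catch trials (blank presentations) serve to estimate the false-alarm rate so that guessing does not inflate the proportion-seen values fed into this calculation.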
Long-term adaptation to change in implicit contextual learning.
Zellin, Martina; von Mühlenen, Adrian; Müller, Hermann J; Conci, Markus
2014-08-01
The visual world consists of spatial regularities that are acquired through experience in order to guide attentional orienting. For instance, in visual search, detection of a target is faster when a layout of nontarget items is encountered repeatedly, suggesting that learned contextual associations can guide attention (contextual cuing). However, scene layouts sometimes change, requiring observers to adapt previous memory representations. Here, we investigated the long-term dynamics of contextual adaptation after a permanent change of the target location. We observed fast and reliable learning of initial context-target associations after just three repetitions. However, adaptation of acquired contextual representations to relocated targets was slow and effortful, requiring 3 days of training with overall 80 repetitions. A final test 1 week later revealed equivalent effects of contextual cuing for both target locations, and these were comparable to the effects observed on day 1. That is, observers learned both initial target locations and relocated targets, given extensive training combined with extended periods of consolidation. Thus, while implicit contextual learning efficiently extracts statistical regularities of our environment at first, it is rather insensitive to change in the longer term, especially when subtle changes in context-target associations need to be acquired.
Spectroscopic Imaging of Deep Tissue through Photoacoustic Detection of Molecular Vibration
Wang, Pu; Rajian, Justin R.; Cheng, Ji-Xin
2013-01-01
The quantized vibration of chemical bonds provides a way of imaging target molecules in a complex tissue environment. Photoacoustic detection of harmonic vibrational transitions provides an approach to visualize tissue content beyond the ballistic photon regime. This method involves pulsed laser excitation of overtone transitions in target molecules inside a tissue. Fast relaxation of the vibrational energy into heat results in a local temperature rise on the order of mK and a subsequent generation of acoustic waves detectable with an ultrasonic transducer. In this perspective, we review recent advances that demonstrate the advantages of vibration-based photoacoustic imaging and illustrate its potential in diagnosing cardiovascular plaques. An outlook into future development of vibrational photoacoustic endoscopy and tomography is provided. PMID:24073304
Analysis of passive acoustic ranging of helicopters from the joint acoustic propagation experiment
NASA Technical Reports Server (NTRS)
Carnes, Benny L.; Morgan, John C.
1993-01-01
For more than twenty years, personnel of the U.S.A.E. Waterways Experiment Station (WES) have been performing research dealing with the application of sensors for detection of military targets. The WES research has included the use of seismic, acoustic, magnetic, and other sensors to detect, track, and classify military ground targets. Most of the WES research has been oriented toward the employment of such sensors in a passive mode. Techniques for passive detection are of particular interest in the Army because of the advantages over active detection. Passive detection methods are not susceptible to interception, detection, jamming, or location of the source by the threat. A decided advantage for using acoustic and seismic sensors for detection in tactical situations is the non-line-of-sight capability; i.e., detection of low flying helicopters at long distances without visual contact. This study was conducted to analyze the passive acoustic ranging (PAR) concept using a more extensive data set from the Joint Acoustic Propagation Experiment (JAPE).
Irwin, David E.; Robinson, Maria M.
2015-01-01
Retinal image displacements caused by saccadic eye movements are generally unnoticed. Recent theories have proposed that perceptual stability across saccades depends on a local evaluation process centered on the saccade target object rather than on remapping and evaluating the positions of all objects in a display. In three experiments, we examined whether objects other than the saccade target also influence perceptual stability by measuring displacement detection thresholds across saccades for saccade targets and a variable number of non-saccade objects. We found that the positions of multiple objects are maintained across saccades, but with variable precision, with the saccade target object having priority in the perception of displacement, most likely because it is the focus of attention before the saccade and resides near the fovea after the saccade. The perception of displacement of objects that are not the saccade target is affected by acuity limitations, attentional limitations, and limitations on memory capacity. Unlike previous studies that have found that a postsaccadic blank improves the detection of displacement direction across saccades, we found that postsaccadic blanking hurt the detection of displacement per se by increasing false alarms. Overall, our results are consistent with the hypothesis that visual working memory underlies the perception of stability across saccades. PMID:26640430
Tumor detection and elimination by a targeted gallium corrole
Agadjanian, Hasmik; Ma, Jun; Rentsendorj, Altan; Valluripalli, Vinod; Hwang, Jae Youn; Mahammed, Atif; Farkas, Daniel L.; Gray, Harry B.; Gross, Zeev; Medina-Kauwe, Lali K.
2009-01-01
Sulfonated gallium(III) corroles are intensely fluorescent macrocyclic compounds that spontaneously assemble with carrier proteins to undergo cell entry. We report in vivo imaging and therapeutic efficacy of a tumor-targeted corrole noncovalently assembled with a heregulin-modified protein directed at the human epidermal growth factor receptor (HER). Systemic delivery of this protein-corrole complex results in tumor accumulation, which can be visualized in vivo owing to intensely red corrole fluorescence. Targeted delivery in vivo leads to tumor cell death while normal tissue is spared. These findings contrast with the effects of doxorubicin, which can elicit cardiac damage during therapy and required direct intratumoral injection to yield levels of tumor shrinkage similar to those achieved with the systemically delivered corrole. The targeted complex ablated tumors at a more than 5-fold lower dose than untargeted systemic doxorubicin, and the corrole did not damage heart tissue. Complexes remained intact in serum and the carrier protein elicited no detectable immunogenicity. The sulfonated gallium(III) corrole thus functions both for tumor detection and intervention, with safety and targeting advantages over standard chemotherapeutic agents. PMID:19342490
Ghosh, Debadyuti; Bagley, Alexander F.; Na, Young Jeong; Birrer, Michael J.; Bhatia, Sangeeta N.; Belcher, Angela M.
2014-01-01
Highly sensitive detection of small, deep tumors for early diagnosis and surgical interventions remains a challenge for conventional imaging modalities. Second-window near-infrared light (NIR2, 950–1,400 nm) is promising for in vivo fluorescence imaging due to deep tissue penetration and low tissue autofluorescence. With their intrinsic fluorescence in the NIR2 regime and lack of photobleaching, single-walled carbon nanotubes (SWNTs) are potentially attractive contrast agents to detect tumors. Here, targeted M13 virus-stabilized SWNTs are used to visualize deep, disseminated tumors in vivo. This targeted nanoprobe, which uses M13 to stably display both tumor-targeting peptides and an SWNT imaging probe, demonstrates excellent tumor-to-background uptake and exhibits higher signal-to-noise performance compared with visible and near-infrared (NIR1) dyes for delineating tumor nodules. Detection and excision of tumors by a gynecological surgeon improved with SWNT image guidance and led to the identification of submillimeter tumors. Collectively, these findings demonstrate the promise of targeted SWNT nanoprobes for noninvasive disease monitoring and guided surgery. PMID:25214538
Zelinsky, Gregory J; Peng, Yifan; Berg, Alexander C; Samaras, Dimitris
2013-10-08
Search is commonly described as a repeating cycle of guidance to target-like objects, followed by the recognition of these objects as targets or distractors. Are these indeed separate processes using different visual features? We addressed this question by comparing observer behavior to that of support vector machine (SVM) models trained on guidance and recognition tasks. Observers searched for a categorically defined teddy bear target in four-object arrays. Target-absent trials consisted of random category distractors rated in their visual similarity to teddy bears. Guidance, quantified as first-fixated objects during search, was strongest for targets, followed by target-similar, medium-similarity, and target-dissimilar distractors. False positive errors to first-fixated distractors also decreased with increasing dissimilarity to the target category. To model guidance, nine teddy bear detectors, using features ranging in biological plausibility, were trained on unblurred bears then tested on blurred versions of the same objects appearing in each search display. Guidance estimates were based on target probabilities obtained from these detectors. To model recognition, nine bear/nonbear classifiers, trained and tested on unblurred objects, were used to classify the object that would be fixated first (based on the detector estimates) as a teddy bear or a distractor. Patterns of categorical guidance and recognition accuracy were modeled almost perfectly by an HMAX model in combination with a color histogram feature. We conclude that guidance and recognition in the context of search are not separate processes mediated by different features, and that what the literature knows as guidance is really recognition performed on blurred objects viewed in the visual periphery.
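The guidance and recognition models above were SVMs trained on image features. As a self-contained stand-in, the sketch below trains a perceptron (a linear classifier without the SVM margin objective) on invented 2-D "target-similarity" features; none of the feature names or numbers come from the paper:

```python
# Toy stand-in for the study's SVM detectors/classifiers: a perceptron
# learning a linear boundary between "teddy bear" (+1) and distractor (-1)
# examples. Features and data are illustrative assumptions only.
def train_perceptron(data, epochs=100, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        updated = False
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1] + b) <= 0:  # misclassified
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                updated = True
        if not updated:    # converged: every training example is correct
            break
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else -1

# +1 = "teddy bear", -1 = distractor; features ~ (shape score, colour score)
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.2, 0.3], -1), ([0.3, 0.1], -1)]
w, b = train_perceptron(data)
print(predict(w, b, [0.85, 0.75]))  # a bear-like object classifies as +1
```

The paper's key manipulation, training the "guidance" models on blurred inputs and the "recognition" models on unblurred ones, would correspond here to building the two feature sets from blurred versus sharp versions of the same objects.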
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
Lucas, Nadia; Vuilleumier, Patrik
2008-04-01
In normal observers, visual search is facilitated for targets with salient attributes. We compared how two different types of cue (expression and colour) may influence search for face targets, in healthy subjects (n=27) and right brain-damaged patients with left spatial neglect (n=13). The target faces were defined by their identity (singleton among a crowd of neutral faces) but could either be neutral (like other faces), or have a different emotional expression (fearful or happy), or a different colour (red-tinted). Healthy subjects were the fastest for detecting the colour-cued targets, but also showed a significant facilitation for emotionally cued targets, relative to neutral faces differing from other distracter faces by identity only. Healthy subjects were also faster overall for target faces located on the left, as compared to the right side of the display. In contrast, neglect patients were slower to detect targets on the left (contralesional) relative to the right (ipsilesional) side. However, they showed the same pattern of cueing effects as healthy subjects on both sides of space; while their best performance was also found for faces cued by colour, they showed a significant advantage for faces cued by expression, relative to the neutral condition. These results indicate that despite impaired attention towards the left hemispace, neglect patients may still show an intact influence of both low-level colour cues and emotional expression cues on attention, suggesting that neural mechanisms responsible for these effects are partly separate from fronto-parietal brain systems controlling spatial attention during search.
Kumar, Parameet; Nath, Kapili; Rath, Bimba; Sen, Manas K; Vishalakshi, Potharuju; Chauhan, Devender S; Katoch, Vishwa M; Singh, Sarman; Tyagi, Sanjay; Sreenivas, Vishnubhatla; Prasad, Hanumanthappa K
2009-09-01
A real-time polymerase chain reaction (PCR) assay for the direct identification of Mycobacterium tuberculosis and M. bovis using molecular beacons was developed. The assay was modified for use in regular thermal cyclers. Molecular beacons that were specific for M. tuberculosis (Tb-B) and M. bovis (Bo-B) were designed. The fluorescence of the target PCR product-molecular beacon probe complex was detected visually using a transilluminator. The results were then compared with those of conventional multiplex PCR (CM-PCR) assays and biochemical identification. The detection limit of Tb-B and Bo-B beacons was 500 fg and 50 fg by the visual format and real-time PCR assay, respectively, compared with 5 pg by CM-PCR assay. Pulmonary and extrapulmonary samples were examined. The agreement between culture and the two assays was very good in sputum samples and fair in extrapulmonary samples. The agreement between clinical diagnoses with the two assays was moderate in extrapulmonary samples. There was very good agreement between CM-PCR and visual format assays for all samples used in the study. Concordance in the identification of isolates by the visual, CM-PCR assay, and biochemical identification was seen. Hence, the use of molecular beacon detection of M. tuberculosis and M. bovis in clinical samples is feasible by setting up two asymmetric PCRs concurrently. The assay is sensitive, specific, simple to interpret, and takes less than 3 hours to complete.
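The graded agreement language above ("very good", "fair", "moderate") is the wording conventionally attached to Cohen's kappa, though the abstract does not name the statistic. A minimal sketch of kappa for a 2x2 agreement table; the counts below are illustrative only, not the study's data:

```python
# Cohen's kappa for agreement between two binary assays on the same samples.
# a: both positive, b: assay1+/assay2-, c: assay1-/assay2+, d: both negative.
def cohens_kappa(a, b, c, d):
    n = a + b + c + d
    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical table: 100 samples tested by culture and by the beacon assay
print(round(cohens_kappa(40, 3, 2, 55), 2))
```

On conventional scales, kappa above about 0.8 is read as "very good" agreement, 0.4-0.6 as "moderate", and 0.2-0.4 as "fair", matching the abstract's phrasing.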
Akiva-Kabiri, Lilach; Linkovski, Omer; Gertner, Limor; Henik, Avishai
2014-08-01
In musical-space synesthesia, musical pitches are perceived as having a spatially defined array. Previous studies showed that symbolic inducers (e.g., numbers, months) can modulate response according to the inducer's relative position on the synesthetic spatial form. In the current study we tested two musical-space synesthetes and a group of matched controls on three different tasks: musical-space mapping, spatial cue detection, and a spatial Stroop-like task. In the free mapping task, both synesthetes exhibited a diagonal organization of musical pitch tones rising from the bottom left to the top right. This organization was found to be consistent over time. In the subsequent tasks, synesthetes were asked to ignore an auditory or visually presented musical pitch (irrelevant information) and respond to a visual target (i.e., an asterisk) on the screen (relevant information). Compatibility between musical pitch and the target's spatial location was manipulated to be compatible or incompatible with the synesthetes' spatial representations. In the spatial cue detection task participants had to press the space key immediately upon detecting the target. In the Stroop-like task, they had to reach the target by using a mouse cursor. In both tasks, synesthetes' performance was modulated by the compatibility between irrelevant and relevant spatial information. Specifically, the target's spatial location conflicted with the spatial information triggered by the irrelevant musical stimulus. These results reveal that for musical-space synesthetes, musical information automatically orients attention according to their specific spatial musical forms. The present study demonstrates the genuineness of musical-space synesthesia by revealing its two hallmarks: automaticity and consistency. In addition, our results challenge previous findings regarding an implicit vertical representation for pitch tones in non-synesthete musicians.
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. 
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Object detection in natural scenes: Independent effects of spatial and category-based attention.
Stein, Timo; Peelen, Marius V
2017-04-01
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.
Objective Methods to Test Visual Dysfunction in the Presence of Cognitive Impairment
2015-12-01
the eye and 3) purposeful eye movements to track targets that are resolved. Major Findings: Three major objective tests of vision were successfully developed and optimized to detect disease. These were 1) the pupil light reflex (either comparing the two eyes or independently evaluating each eye separately) for retina or optic nerve damage, 2) eye-movement-based analysis of target acquisition, fixation, and eccentric viewing as a means of
Comparison of cap lamp and laser illumination for detecting visual escape cues in smoke
Lutz, T.J.; Sammarco, J.J.; Srednicki, J.R.; Gallagher, S.
2015-01-01
The Illuminating Engineering Society of North America reports that an underground mine is the most difficult environment to illuminate (Rea, 2000). Researchers at the U.S. National Institute for Occupational Safety and Health (NIOSH) Office of Mine Safety and Health Research (OMSHR) are conducting ongoing studies designed to explore different lighting technologies for improving mine safety. Underground miners use different visual cues to escape from a smoke-filled environment. Primary and secondary escapeways are marked with reflective ceiling tags of various colors. Miners also look for mine rail tracks. The main objective of this paper is to compare different lighting types and ceiling tag colors to differentiate what works best in a smoke-filled environment. Various cap lamps (LED and incandescent) and lasers (red, blue, green) were compared to see which options resulted in the longest detection distances for red, green and blue reflective markers and a section of mine rail track. All targets advanced toward the human subject inside of a smoke-filled room to simulate the subject walking in a mine environment. Detection distances were recorded and analyzed to find the best cap lamp, laser color and target color in a smoke environment. Results show that cap lamp, laser color and target color do make a difference in detection distances and are perceived differently based on subject age. Cap lamps were superior to lasers in all circumstances of ceiling tag detection, with the exception of the green laser. The incandescent cap lamp worked best in the simulated smoke compared to the LED cap lamps. The green laser was the best color for detecting the tags and track compared to the red and blue lasers. The green tags were the easiest color to detect on the ceiling. On average, the track was easier for the subjects to detect than the ceiling tags. PMID:26236146
Identifying a "default" visual search mode with operant conditioning.
Kawahara, Jun-ichiro
2010-09-01
The presence of a singleton in a task-irrelevant domain can impair visual search. This impairment, known as attentional capture, depends on the attentional set of participants. When narrowly searching for a specific feature (the feature search mode), only matching stimuli capture attention. When searching broadly (the singleton detection mode), any oddball captures attention. The present study examined which strategy represents the "default" mode using an operant conditioning approach in which participants were trained, in the absence of explicit instructions, to search for a target in an ambiguous context in which one of the two modes was available. The results revealed that participants behaviorally adopted singleton detection as the default mode but reported using the feature search mode. Conscious strategies did not eliminate capture. These results challenge the view that a conscious set always modulates capture, suggesting that the visual system tends to rely on stimulus salience to deploy attention.
Joint Attention Enhances Visual Working Memory
ERIC Educational Resources Information Center
Gregory, Samantha E. A.; Jackson, Margaret C.
2017-01-01
Joint attention--the mutual focus of 2 individuals on an item--speeds detection and discrimination of target information. However, what happens to that information beyond the initial perceptual episode? To fully comprehend and engage with our immediate environment also requires working memory (WM), which integrates information from second to…
DOSE RESPONSE DETERMINATION OF NMDA ANTAGONISTS AND GABA AGONIST ON SUSTAINED ATTENTION.
We have shown that acute inhalation of toluene impairs sustained attention as assessed with a visual signal detection task (SDT). In vitro studies indicate that the NMDA and GABA systems are primary targets of anesthetic agents and organic solvents such as toluene. Pharmacologica...
Mechanism for Visual Detection of Small Targets in Insects
2013-06-14
Modeling Human Visual Perception for Target Detection in Military Simulations
2009-06-01
incorrectly, is a subject for future research. Possibly, one could exploit the Recognition-by-Components theory of Biederman (1987) and decompose the...
Supèr, Hans; van der Togt, Chris; Spekreijse, Henk; Lamme, Victor A. F.
2004-01-01
We continuously scan the visual world via rapid or saccadic eye movements. Such eye movements are guided by visual information, and thus the oculomotor structures that determine when and where to look need visual information to control the eye movements. To know whether visual areas contain activity that may contribute to the control of eye movements, we recorded neural responses in the visual cortex of monkeys engaged in a delayed figure-ground detection task and analyzed the activity during the period of oculomotor preparation. We show that ≈100 ms before the onset of visually and memory-guided saccades neural activity in V1 becomes stronger, with the strongest presaccadic responses found at the location of the saccade target. In addition, in memory-guided saccades the strength of presaccadic activity shows a correlation with the onset of the saccade. These findings indicate that the primary visual cortex contains saccade-related responses and participates in visually guided oculomotor behavior. PMID:14970334
Altered prefrontal function with aging: insights into age-associated performance decline.
Solbakk, Anne-Kristin; Fuhrmann Alpert, Galit; Furst, Ansgar J; Hale, Laura A; Oga, Tatsuhide; Chetty, Sundari; Pickard, Natasha; Knight, Robert T
2008-09-26
We examined the effects of aging on visuo-spatial attention. Participants performed a bi-field visual selective attention task consisting of infrequent target and task-irrelevant novel stimuli randomly embedded among repeated standards in either attended or unattended visual fields. Blood oxygenation level dependent (BOLD) responses to the different classes of stimuli were measured using functional magnetic resonance imaging. The older group had slower reaction times to targets, and committed more false alarms but had comparable detection accuracy to young controls. Attended target and novel stimuli activated comparable widely distributed attention networks, including anterior and posterior association cortex, in both groups. The older group had reduced spatial extent of activation in several regions, including prefrontal, basal ganglia, and visual processing areas. In particular, the anterior cingulate and superior frontal gyrus showed more restricted activation in older compared with young adults across all attentional conditions and stimulus categories. The spatial extent of activations correlated with task performance in both age groups, but the regional pattern of association between hemodynamic responses and behavior differed between the groups. Whereas the young subjects relied on posterior regions, the older subjects engaged frontal areas. The results indicate that aging alters the functioning of neural networks subserving visual attention, and that these changes are related to cognitive performance.
Non-isotopic Method for In Situ LncRNA Visualization and Quantitation.
Maqsodi, Botoul; Nikoloff, Corina
2016-01-01
In mammals and other eukaryotes, most of the genome is transcribed in a developmentally regulated manner to produce large numbers of long noncoding RNAs (lncRNAs). Genome-wide studies have identified thousands of lncRNAs lacking protein-coding capacity. RNA in situ hybridization is especially beneficial for visualizing RNA (mRNA and lncRNA) expression in a heterogeneous population of cells or tissues; however, its utility has been hampered by complicated procedures typically developed and optimized for the detection of a specific gene and therefore not amenable to a wide variety of genes and tissues. Recently, branched DNA (bDNA) technology has revolutionized RNA in situ detection with fully optimized, robust assays for the detection of any mRNA and lncRNA target in formalin-fixed paraffin-embedded (FFPE) and fresh frozen tissue sections using manual processing.
Liu, Zhanmin; Yao, Chenhui; Yang, Cuiyun; Wang, Yanming; Wan, Sibao; Huang, Junyi
2018-05-16
Listeria monocytogenes is an important foodborne pathogen that can cause severe disease. Rapid detection of L. monocytogenes is crucial to controlling this pathogen. A simple and robust strategy based on a cascade of PCR and a G-quadruplex DNAzyme-catalyzed reaction was used to detect L. monocytogenes. In the presence of hemin and the aptamer formed during PCR, the catalytic horseradish peroxidase-mimicking G-quadruplex DNAzymes produce colorimetric responses to target DNA from L. monocytogenes. The assay can specifically detect genomic DNA of L. monocytogenes at levels as low as 50 pg/reaction with the naked eye. In an assay of 20 pork samples, the visual detection assay gave the same results as conventional detection methods and performed well. This is a powerful demonstration of the ability of G-quadruplex DNAzymes to be used in a PCR-based assay, with significant advantages over existing approaches in sensitivity, cost, and simplicity of manipulation, and it offers the opportunity for application in pathogen detection. Copyright © 2018 Elsevier Inc. All rights reserved.
Connexin 43-targeted T1 contrast agent for MRI diagnosis of glioma.
Abakumova, Tatiana; Abakumov, Maxim; Shein, Sergey; Chelushkin, Pavel; Bychkov, Dmitry; Mukhin, Vladimir; Yusubalieva, Gaukhar; Grinenko, Nadezhda; Kabanov, Alexander; Nukolova, Natalia; Chekhonin, Vladimir
2016-01-01
Glioblastoma multiforme is the most aggressive form of brain tumor. Early and accurate diagnosis of glioma and its borders is an important step for its successful treatment. One of the promising targets for selective visualization of glioma and its margins is connexin 43 (Cx43), which is highly expressed in reactive astrocytes and migrating glioma cells. The purpose of this study was to synthesize a Gd-based contrast agent conjugated with specific antibodies to Cx43 for efficient visualization of glioma C6 in vivo. We have prepared stable nontoxic conjugates of monoclonal antibody to Cx43 and polylysine-DTPA ligands complexed with Gd(III), which are characterized by higher T1 relaxivity (6.5 mM(-1) s(-1) at 7 T) than the commercial agent Magnevist® (3.4 mM(-1) s(-1)). Cellular uptake of Cx43-specific T1 contrast agent in glioma C6 cells was more than four times higher than the nonspecific IgG-contrast agent, as detected by flow cytometry and confocal analysis. MRI experiments showed that the obtained agents could markedly enhance visualization of glioma C6 in vivo after their intravenous administration. Significant accumulation of Cx43-targeted contrast agents in glioma and the peritumoral zone led not only to enhanced contrast but also to improved detection of the tumor periphery. Fluorescence imaging confirmed notable accumulation of Cx43-specific conjugates in the peritumoral zone compared with nonspecific IgG conjugates at 24 h after intravenous injection. All these features of Cx43-targeted contrast agents might be useful for more precise diagnosis of glioma and its borders by MRI. Copyright © 2015 John Wiley & Sons, Ltd.
Visual Attention Measures Predict Pedestrian Detection in Central Field Loss: A Pilot Study
Alberti, Concetta F.; Horowitz, Todd; Bronstad, P. Matthew; Bowers, Alex R.
2014-01-01
Purpose The ability of visually impaired people to deploy attention effectively to maximize use of their residual vision in dynamic situations is fundamental to safe mobility. We conducted a pilot study to evaluate whether tests of dynamic attention (multiple object tracking; MOT) and static attention (Useful Field of View; UFOV) were predictive of the ability of people with central field loss (CFL) to detect pedestrian hazards in simulated driving. Methods 11 people with bilateral CFL (visual acuity 20/30-20/200) and 11 age-similar normally-sighted drivers participated. Dynamic and static attention were evaluated with brief, computer-based MOT and UFOV tasks, respectively. Dependent variables were the log speed threshold for 60% correct identification of targets (MOT) and the increase in the presentation duration for 75% correct identification of a central target when a concurrent peripheral task was added (UFOV divided and selective attention subtests). Participants drove in a simulator and pressed the horn whenever they detected pedestrians that walked or ran toward the road. The dependent variable was the proportion of timely reactions (could have stopped in time to avoid a collision). Results UFOV and MOT performance of CFL participants was poorer than that of controls, and the proportion of timely reactions was also lower (worse) (84% and 97%, respectively; p = 0.001). For CFL participants, higher proportions of timely reactions correlated significantly with higher (better) MOT speed thresholds (r = 0.73, p = 0.01), with better performance on the UFOV divided and selective attention subtests (r = −0.66 and −0.62, respectively, p<0.04), with better contrast sensitivity scores (r = 0.54, p = 0.08) and smaller scotomas (r = −0.60, p = 0.05). Conclusions Our results suggest that brief laboratory-based tests of visual attention may provide useful measures of functional visual ability of individuals with CFL relevant to more complex mobility tasks. PMID:24558495
Constrained sampling experiments reveal principles of detection in natural scenes.
Sebastian, Stephen; Abrams, Jared; Geisler, Wilson S
2017-07-11
A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.
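The normalized matched-template detector named above can be made concrete with a small numerical sketch. The study's normalizing gain is derived from the statistics of natural backgrounds; here the specific form of the normalizer (inverse of background luminance times RMS contrast) and all variable names are illustrative assumptions, not the authors' fitted model:

```python
import numpy as np

def detector_response(patch, template, bg_mean, bg_contrast, eps=1e-8):
    """Normalized matched-template response (illustrative sketch).

    The mean-subtracted patch is correlated with the target template,
    and the raw response is divided by a normalizing gain that grows
    with background luminance and RMS contrast, mimicking the dynamic
    normalization described in the abstract.
    """
    raw = float((patch.ravel() - bg_mean) @ template.ravel())
    gain = 1.0 / (bg_mean * bg_contrast + eps)
    return gain * raw

# Toy demo: a 2x2 checkerboard template embedded in a textured background.
template = np.array([[1.0, -1.0], [-1.0, 1.0]])
background = 50.0 + np.array([[2.0, -2.0], [-1.0, 1.0]])
bg_mean = background.mean()               # background luminance
bg_contrast = background.std() / bg_mean  # RMS contrast
present = background + 2.0 * template     # target added to background

# The response to target-plus-background exceeds the background-only
# response, so a criterion placed between the two detects the target.
assert detector_response(present, template, bg_mean, bg_contrast) > \
       detector_response(background, template, bg_mean, bg_contrast)
```

The linear threshold increases along the luminance, contrast, and similarity dimensions reported in the abstract fall out of this structure: raising any of the three raises the normalizing denominator, so a proportionally larger target amplitude is needed to reach the same criterion.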
Hirai, Masahiro; Muramatsu, Yukako; Mizuno, Seiji; Kurahashi, Naoko; Kurahashi, Hirokazu; Nakamura, Miho
2017-01-01
Individuals with Williams syndrome (WS) exhibit an atypical social phenotype termed hypersociability. One theory accounting for hypersociability presumes an atypical function of the amygdala, which processes fear-related information. However, evidence is lacking regarding the detection mechanisms of fearful faces in individuals with WS. Here, we introduce a visual search paradigm to elucidate the mechanisms for detecting fearful faces by evaluating search asymmetry, that is, whether reaction times differ when the roles of target and distractors are swapped. Eye movements reflect subtle atypical attentional properties, whereas manual responses are unable to capture atypical attentional profiles toward faces in individuals with WS. Therefore, we measured both eye movements and manual responses of individuals with WS and typically developed children and adults in visual searching for a fearful face among neutral faces or a neutral face among fearful faces. Two task measures, namely reaction time and performance accuracy, were analyzed for each stimulus, as well as gaze behavior and the initial fixation onset latency. Overall, reaction times in the WS group and the mentally age-matched control group were significantly longer than those in the chronologically age-matched group. We observed a search asymmetry effect in all groups: when a neutral target facial expression was presented among fearful faces, reaction times were significantly prolonged in comparison with when a fearful target facial expression was displayed among neutral distractor faces. Furthermore, the first fixation onset latency of eye movements toward a target facial expression showed a similar tendency to the manual responses. Although overall responses in detecting fearful faces in individuals with WS are slower than those of control groups, search asymmetry was observed. Therefore, the cognitive mechanisms underlying the detection of fearful faces seem to be typical in individuals with WS. 
This finding is discussed with reference to the amygdala account explaining hypersociability in individuals with WS.
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
2016-11-21
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
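The race-model comparison invoked above (Miller's inequality) is straightforward to compute from reaction-time samples. A minimal sketch with toy data; the function and variable names are my own, not the study's:

```python
import numpy as np

def ecdf(sample, times):
    """Empirical CDF of reaction times evaluated at each time point."""
    sample = np.asarray(sample, dtype=float)
    return np.array([(sample <= t).mean() for t in times])

def race_model_violation(rt_a, rt_v, rt_av, times):
    """Miller's race-model inequality check (sketch).

    Under a race between separate auditory and visual detection
    processes, P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t) at all t.
    Returns the signed exceedance of the redundant-target CDF over the
    bound; positive values indicate a violation, the usual evidence
    for genuine multisensory integration.
    """
    bound = np.minimum(ecdf(rt_a, times) + ecdf(rt_v, times), 1.0)
    return ecdf(rt_av, times) - bound

# Toy RTs (ms): redundant targets faster than either unimodal condition.
rt_a = [300, 320, 340, 360]
rt_v = [310, 330, 350, 370]
rt_av = [250, 265, 280, 295]
times = np.arange(240, 380, 10)
violation = race_model_violation(rt_a, rt_v, rt_av, times)
assert violation.max() > 0  # bound exceeded at early time points
```

Finding positive exceedance at some quantiles, as the cataract-reversal patients and controls both did, rules out a pure race between independent unimodal detectors.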
Lawton, Teri
2016-01-01
There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263
Optoporation of impermeable molecules and genes for visualization and activation of cells
NASA Astrophysics Data System (ADS)
Dhakal, Kamal; Batbyal, Subrata; Kim, Young-Tae; Mohanty, Samarendra
2015-03-01
Visualization, activation, and detection of cells and their electrical activity require delivery of exogenous impermeable molecules and targeted expression of genes encoding labeling proteins, ion channels and voltage indicators. While genes can be delivered to cells by viral vector, delivery of other impermeable molecules into the cytoplasm of targeted cells requires microinjection by mechanical needle or microelectrodes, which poses a significant challenge to the viability of the cells. Further, it will be useful to localize the expression of the targeted molecules not only in specific cell types, but in specific cells in restricted spatial regions. Here, we report the use of a focused near-infrared (NIR) femtosecond laser beam to transiently perforate the membrane of targeted cells to insert genes encoding blue-light-activatable channelrhodopsin-2 (ChR2) and red-shifted opsin (ReachR). Optoporation of nanomolar concentrations of rhodamine phalloidin (an impermeable dye molecule for staining filamentous actin) into targeted living mammalian cells (both HEK and primary cortical neurons) is also achieved, allowing imaging of dynamics and intact morphology of cellular structures without requiring fixation.
Peripheral prism glasses: effects of moving and stationary backgrounds.
Shen, Jieming; Peli, Eli; Bowers, Alex R
2015-04-01
Unilateral peripheral prisms for homonymous hemianopia (HH) expand the visual field through peripheral binocular visual confusion, a stimulus for binocular rivalry that could lead to reduced predominance and partial suppression of the prism image, thereby limiting device functionality. Using natural-scene images and motion videos, we evaluated whether detection was reduced in binocular compared with monocular viewing. Detection rates of nine participants with HH or quadranopia and normal binocularity wearing peripheral prisms were determined for static checkerboard perimetry targets briefly presented in the prism expansion area and the seeing hemifield. Perimetry was conducted under monocular and binocular viewing with targets presented over videos of real-world driving scenes and still frame images derived from those videos. With unilateral prisms, detection rates in the prism expansion area were significantly lower in binocular than in monocular (prism eye) viewing on the motion background (medians, 13 and 58%, respectively, p = 0.008) but not the still frame background (medians, 63 and 68%, p = 0.123). When the stimulus for binocular rivalry was reduced by fitting prisms bilaterally in one HH and one normally sighted subject with simulated HH, prism-area detection rates on the motion background were not significantly different (p > 0.6) in binocular and monocular viewing. Conflicting binocular motion appears to be a stimulus for reduced predominance of the prism image in binocular viewing when using unilateral peripheral prisms. However, the effect was only found for relatively small targets. Further testing is needed to determine the extent to which this phenomenon might affect the functionality of unilateral peripheral prisms in more real-world situations.
Pailian, Hrag; Halberda, Justin
2015-04-01
We investigated the psychometric properties of the one-shot change detection task for estimating visual working memory (VWM) storage capacity, and also introduced and tested an alternative flicker change detection task for estimating these limits. In three experiments, we found that the one-shot whole-display task returns estimates of VWM storage capacity (K) that are unreliable across set sizes, suggesting that the whole-display task measures different things at different set sizes. In two additional experiments, we found that the one-shot single-probe variant shows improvements in the reliability and consistency of K estimates. In another experiment, we found that a one-shot whole-display-with-click task (requiring target localization) also showed improvements in reliability and consistency. The latter results suggest that the one-shot task can return reliable and consistent estimates of VWM storage capacity (K), and they highlight the possibility that the requirement to localize the changed target is what engenders this enhancement. Through a final series of four experiments, we introduced and tested an alternative flicker change detection method that also requires the observer to localize the changing target and that generates, from response times, an estimate of VWM storage capacity (K). We found that estimates of K from the flicker task correlated with estimates from the traditional one-shot task and also had high reliability and consistency. We highlight the flicker method's ability to estimate executive functions as well as VWM storage capacity, and discuss the potential for measuring multiple abilities with the one-shot and flicker tasks.
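The storage-capacity estimates (K) discussed above are conventionally derived from hit and false-alarm rates at each set size. As an illustration only, here is a minimal sketch assuming the standard Cowan and Pashler correction formulas (the abstract does not state which correction the study used):

```python
# Standard capacity estimates from change-detection hit (H) and
# false-alarm (FA) rates at set size N. Cowan's K is typically used
# for single-probe displays, Pashler's K for whole-display tasks.

def cowan_k(hit, fa, n):
    """Cowan's K = N * (H - FA): single-probe change detection."""
    return n * (hit - fa)

def pashler_k(hit, fa, n):
    """Pashler's K = N * (H - FA) / (1 - FA): whole-display change detection."""
    return n * (hit - fa) / (1.0 - fa)

# Hypothetical data: set size 6, 85% hits, 15% false alarms
print(round(cowan_k(0.85, 0.15, 6), 2))    # 4.2
print(round(pashler_k(0.85, 0.15, 6), 2))  # 4.94
```

Unreliability across set sizes, as reported above, would show up here as K values that change systematically with N rather than staying flat.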
Binocular Vision-Based Position and Pose of Hand Detection and Tracking in Space
NASA Astrophysics Data System (ADS)
Jun, Chen; Wenjun, Hou; Qing, Sheng
Building on studies of image segmentation, the CamShift target tracking algorithm, and stereo vision models of space, an improved algorithm based on frame differencing and a new spatial point-positioning model are proposed, and a binocular visual motion tracking system was constructed to verify them. The system solves the problem of detecting and tracking the position and pose of the hand in space.
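As a rough illustration of the frame-difference step (the paper's improved algorithm and its stereo positioning model are not detailed in the abstract), a minimal pure-Python sketch over hypothetical grayscale frames might look like:

```python
def frame_difference(prev, curr, threshold=25):
    """Binary motion mask from two grayscale frames (lists of rows):
    a pixel is 'moving' if its absolute intensity change exceeds threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def bounding_box(mask):
    """Smallest (row0, col0, row1, col1) box enclosing all moving pixels,
    e.g. to seed a CamShift-style tracker; None if nothing moved."""
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

prev = [[10] * 5 for _ in range(5)]
curr = [row[:] for row in prev]
curr[2][3] = 200           # simulated moving pixel
mask = frame_difference(prev, curr)
print(bounding_box(mask))  # (2, 3, 2, 3)
```

In a real system the box from each camera would be passed to the stereo model to triangulate the hand's 3-D position; that step is omitted here.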
Exciton-controlled fluorescence: application to hybridization-sensitive fluorescent DNA probe.
Okamoto, Akimitsu; Ikeda, Shuji; Kubota, Takeshi; Yuki, Mizue; Yanagisawa, Hiroyuki
2009-01-01
A hybridization-sensitive fluorescent probe has been designed for nucleic acid detection, using the concept of fluorescence quenching caused by the intramolecular excitonic interaction of fluorescence dyes. We synthesized a doubly thiazole orange-labeled nucleotide showing high fluorescence intensity for a hybrid with the target nucleic acid and effective quenching for the single-stranded state. This exciton-controlled fluorescent probe was applied to living HeLa cells using microinjection to visualize intracellular mRNA localization. Immediately after injection of the probe into the cell, fluorescence was observed from the probe hybridizing with the target RNA. This fluorescence rapidly decreased upon addition of a competitor DNA. Multicoloring of this probe resulted in the simple simultaneous detection of plural target nucleic acid sequences. This probe realized a large, rapid, reversible change in fluorescence intensity in sensitive response to the amount of target nucleic acid, and facilitated spatiotemporal monitoring of the behavior of intracellular RNA.
Koslucher, Frank; Wade, Michael G; Nelson, Brent; Lim, Kelvin; Chen, Fu-Chen; Stoffregen, Thomas A
2012-07-01
Research has shown that the Nintendo Wii Balance Board (WBB) can reliably detect the quantitative kinematics of the center of pressure in stance. Previous studies used relatively coarse manipulations (1- vs. 2-leg stance, and eyes open vs. closed). We sought to determine whether the WBB could reliably detect postural changes associated with subtle variations in visual tasks. Healthy elderly adults stood on a WBB while performing one of two visual tasks. In the Inspection task, they maintained their gaze within the boundaries of a featureless target. In the Search task, they counted the occurrence of designated target letters within a block of text. Consistent with previous studies using traditional force plates, the positional variability of the center of pressure was reduced during performance of the Search task, relative to movement during performance of the Inspection task. Using detrended fluctuation analysis, a measure of movement dynamics, we found that COP trajectories were more predictable during performance of the Search task than during performance of the Inspection task. The results indicate that the WBB is sensitive to subtle variations in both the magnitude and dynamics of body sway that are related to variations in visual tasks engaged in during stance. The WBB is an inexpensive, reliable technology that can be used to evaluate subtle characteristics of body sway in large or widely dispersed samples. Copyright © 2012 Elsevier B.V. All rights reserved.
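Detrended fluctuation analysis, used above to quantify the dynamics of the center-of-pressure (COP) trajectories, can be sketched as follows. This is a simplified pure-Python version with assumed box sizes; the study's actual parameters are not given in the abstract:

```python
import math, random

def _linfit(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def dfa_alpha(signal, box_sizes=(4, 8, 16, 32)):
    """Detrended fluctuation analysis scaling exponent.
    alpha ~ 0.5 for white noise; larger values indicate more
    persistent (predictable) fluctuations."""
    mean = sum(signal) / len(signal)
    # integrated (cumulative-sum) profile of the mean-centred signal
    profile, s = [], 0.0
    for v in signal:
        s += v - mean
        profile.append(s)
    log_n, log_f = [], []
    for n in box_sizes:
        sq = []
        for start in range(0, len(profile) - n + 1, n):
            box = profile[start:start + n]
            xs = list(range(n))
            a, b = _linfit(xs, box)          # local linear trend
            sq += [(y - (a * x + b)) ** 2 for x, y in zip(xs, box)]
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(sum(sq) / len(sq)))  # log RMS fluctuation
    slope, _ = _linfit(log_n, log_f)
    return slope

random.seed(1)
white = [random.gauss(0, 1) for _ in range(1024)]
print(round(dfa_alpha(white), 2))  # close to 0.5 for white noise
```

A COP trace whose alpha shifts between task conditions, as in the Search versus Inspection comparison above, differs in dynamics even if its overall variability is similar.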
Gersch, Timothy M.; Schnitzer, Brian S.; Dosher, Barbara A.; Kowler, Eileen
2012-01-01
Saccadic eye movements and perceptual attention work in a coordinated fashion to allow selection of the objects, features or regions with the greatest momentary need for limited visual processing resources. This study investigates perceptual characteristics of pre-saccadic shifts of attention during a sequence of saccades using the visual manipulations employed to study mechanisms of attention during maintained fixation. The first part of this paper reviews studies of the connections between saccades and attention, and their significance for both saccadic control and perception. The second part presents three experiments that examine the effects of pre-saccadic shifts of attention on vision during sequences of saccades. Perceptual enhancements at the saccadic goal location relative to non-goal locations were found across a range of stimulus contrasts, with either perceptual discrimination or detection tasks, with either single or multiple perceptual targets, and regardless of the presence of external noise. The results show that the preparation of saccades can evoke a variety of attentional effects, including attentionally-mediated changes in the strength of perceptual representations, selection of targets for encoding in visual memory, exclusion of external noise, or changes in the levels of internal visual noise. The visual changes evoked by saccadic planning make it possible for the visual system to effectively use saccadic eye movements to explore the visual environment. PMID:22809798
Wood, Joanne M; Owsley, Cynthia
2014-01-01
The useful field of view test was developed to reflect the visual difficulties that older adults experience with everyday tasks. Importantly, the useful field of view test (UFOV) is one of the most extensively researched and promising predictor tests for a range of driving outcomes measures, including driving ability and crash risk as well as other everyday tasks. Currently available commercial versions of the test can be administered using personal computers; these measure the speed of visual processing for rapid detection and localization of targets under conditions of divided visual attention and in the presence and absence of visual clutter. The test is believed to assess higher-order cognitive abilities, but performance also relies on visual sensory function because in order for targets to be attended to, they must be visible. The format of the UFOV has been modified over the years; the original version estimated the spatial extent of useful field of view, while the latest version measures visual processing speed. While deficits in the useful field of view are associated with functional impairments in everyday activities in older adults, there is also emerging evidence from several research groups that improvements in visual processing speed can be achieved through training. These improvements have been shown to reduce crash risk, and can have a positive impact on health and functional well-being, with the potential to increase the mobility and hence the independence of older adults. © 2014 S. Karger AG, Basel
Hansen, Adam G.; Beauchamp, David A.
2014-01-01
Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar-sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.
Cordray, Michael S; Richards-Kortum, Rebecca R
2015-11-26
Isothermal amplification techniques are emerging as a promising method for malaria diagnosis, since they are capable of detecting extremely low concentrations of parasite target while mitigating the need for the infrastructure and training required by other nucleic acid based tests. Recombinase polymerase amplification (RPA) is promising for further development since it operates in a short time frame (<30 min) and produces a product that can be visually detected on a lateral flow dipstick. A self-sealing paper and plastic system that performs both the amplification and detection of a malaria DNA sequence is presented. Primers were designed using the NCBI nBLAST tools and screened using gel electrophoresis. Paper and plastic devices were prototyped using commercial design software; parts were cut using a laser cutter and assembled by hand. Synthetic copies of the Plasmodium 18S gene were spiked into solution and used as targets for the RPA reaction. To test the performance of the device, the same samples spiked with synthetic target were run in parallel, both in the paper and plastic devices and using conventional bench-top methods. Novel RPA primers were developed that bind to sequences present in the four species of Plasmodium which infect humans. The paper and plastic devices were found to be capable of detecting as few as 5 copies/µL of synthetic Plasmodium DNA (50 copies total), comparable to the same reaction run on the bench top. The devices produce visual results in an hour, cost approximately $1, and are self-contained once sealed. The device was capable of carrying out the RPA reaction and detecting meaningful amounts of synthetic Plasmodium DNA in a self-sealing, self-contained device. This device may be a step towards making nucleic acid tests more accessible for malaria detection.
Incorporation of operator knowledge for improved HMDS GPR classification
NASA Astrophysics Data System (ADS)
Kennedy, Levi; McClelland, Jessee R.; Walters, Joshua R.
2012-06-01
The Husky Mine Detection System (HMDS) detects and alerts operators to potential threats observed in ground-penetrating radar (GPR) data. In the current system architecture, the classifiers have been trained using available data from multiple training sites. Changes in target types, clutter types, and operational conditions may result in statistical differences between the training data and the testing data for the underlying features used by the classifier, potentially resulting in an increased false alarm rate or a lower probability of detection for the system. In the current mode of operation, the automated detection system alerts the human operator when a target-like object is detected. The operator then uses data visualization software, contextual information, and human intuition to decide whether the alarm presented is an actual target or a false alarm. When the statistics of the training data and the testing data are mismatched, the automated detection system can overwhelm the analyst with an excessive number of false alarms. This is evident in the performance of, and the data collected from, deployed systems. This work demonstrates that analyst feedback can be successfully used to re-train a classifier to account for variable testing data statistics not originally captured in the initial training data.
NASA Astrophysics Data System (ADS)
Shokri, Ehsan; Hosseini, Morteza; Davari, Mehdi D.; Ganjali, Mohammad R.; Peppelenbosch, Maikel P.; Rezaee, Farhad
2017-04-01
A modified non-cross-linking gold-nanoparticle (Au-NP) aggregation strategy has been developed for the label-free colorimetric detection of DNAs/RNAs, based on the self-assembly of target species in the presence of thiolated probes. Two complementary thiol-modified probes, each of which specifically binds one half of the target, introduce SH groups at both ends of the dsDNA. Continuous disulfide bond formation at the 3′ and 5′ termini of targets leads to the self-assembly of dsDNAs into sulfur-rich, flexible products of different lengths. These products have a high affinity for the surface of Au-NPs and efficiently protect the surface from salt-induced aggregation. To evaluate the assay efficacy, a small part of the citrus tristeza virus (CTV) genome was targeted, leading to a detection limit of about 5 × 10⁻⁹ mol·L⁻¹ over a linear range from 20 × 10⁻⁹ to 10 × 10⁻⁷ mol·L⁻¹. This approach also exhibits good reproducibility and recovery in the presence of plant total RNA or human plasma total circulating RNA extracts. Self-assembled targets can then be sensitively distinguished from non-assembled or mismatched targets after gel electrophoresis. The disulfide reaction method, together with self-assembled DNA/RNA targets and bare AuNPs as a sensitive indicator, provides a powerful and simple visual detection tool for a wide range of applications.
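For context on how a detection limit like the one reported above is commonly derived from a colorimetric calibration curve, one widespread convention is the 3σ/slope rule. The sketch below uses entirely hypothetical absorbance data; the abstract does not state how this study's limit was actually computed:

```python
def linear_fit(conc, signal):
    """Least-squares slope and intercept of a calibration line."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, signal)) / \
            sum((x - mx) ** 2 for x in conc)
    return slope, my - slope * mx

def lod_3sigma(blank_signals, slope):
    """Detection limit as 3 * (sample std dev of blank replicates) / slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    sd = (sum((s - mean) ** 2 for s in blank_signals) / (n - 1)) ** 0.5
    return 3 * sd / slope

# Hypothetical calibration: absorbance vs target concentration (nM)
conc = [20, 40, 80, 200, 400, 1000]
signal = [0.021, 0.042, 0.079, 0.201, 0.405, 0.998]
slope, intercept = linear_fit(conc, signal)
blanks = [0.0020, 0.0015, 0.0026, 0.0018, 0.0022]
lod = lod_3sigma(blanks, slope)
print(round(lod, 2), "nM")
```

The linear range quoted in the abstract corresponds to the span of standards over which this fit remains linear.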
Zehetleitner, Michael; Proulx, Michael J; Müller, Hermann J
2009-11-01
In efficient search for feature singleton targets, additional singletons (ASs) defined in a nontarget dimension are frequently found to interfere with performance. All search tasks that are processed via a spatial saliency map of the display would be predicted to be subject to such AS interference. In contrast, dual-route models, such as feature integration theory, assume that singletons are detected not via a saliency map, but via a nonspatial route that is immune to interference from cross-dimensional ASs. Consistent with this, a number of studies have reported absent interference effects in detection tasks. However, recent work suggests that the failure to find such effects may be due to the particular frequencies at which ASs were presented, as well as to their relative saliency. These two factors were examined in the present study. In contrast to previous reports, cross-dimensional ASs were found to slow detection (target-present and target-absent) responses, modulated by both their frequency of occurrence and saliency (relative to the target). These findings challenge dual-route models and support single-route models, such as dimension weighting and guided search.
Testing Saliency Parameters for Automatic Target Recognition
NASA Technical Reports Server (NTRS)
Pandya, Sagar
2012-01-01
A bottom-up visual attention model (the saliency model) is tested to enhance the performance of Automated Target Recognition (ATR). JPL has developed an ATR system that identifies regions of interest (ROI) using a trained OT-MACH filter, and then classifies potential targets as true- or false-positives using machine-learning techniques. In this project, saliency is used as a pre-processing step to reduce the space for performing OT-MACH filtering. Saliency parameters, such as output level and orientation weight, are tuned to detect known target features. Preliminary results are promising, and future work entails a rigorous, parameter-based search to gain maximum insight into this method.
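A crude center-surround saliency map of the kind used as such a pre-processing step can be sketched as follows. This is a simplified illustration only; JPL's actual saliency model and the OT-MACH filter are not reproduced here:

```python
def box_mean(img, r, row, col):
    """Mean intensity in a (2r+1)-square window clipped to the image."""
    h, w = len(img), len(img[0])
    vals = [img[i][j]
            for i in range(max(0, row - r), min(h, row + r + 1))
            for j in range(max(0, col - r), min(w, col + r + 1))]
    return sum(vals) / len(vals)

def center_surround_saliency(img, surround_r=3):
    """Crude bottom-up saliency: |pixel - surround mean| per pixel.
    Pixels that differ from their neighbourhood score highest."""
    return [[abs(img[i][j] - box_mean(img, surround_r, i, j))
             for j in range(len(img[0]))]
            for i in range(len(img))]

def top_roi(sal):
    """Location of the most salient pixel, e.g. as a candidate ROI
    to hand to a downstream classifier."""
    best = max((v, i, j) for i, row in enumerate(sal) for j, v in enumerate(row))
    return best[1], best[2]

img = [[0] * 9 for _ in range(9)]
img[4][4] = 255            # small bright "target"
print(top_roi(center_surround_saliency(img)))  # (4, 4)
```

Restricting expensive filtering to the few highest-saliency locations is what shrinks the search space, as described above.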
Loughman, James; Davison, Peter; Flitcroft, Ian
2007-11-01
Preattentive visual search (PAVS) describes rapid and efficient retinal and neural processing capable of immediate target detection in the visual field. Damage to the nerve fibre layer or visual pathway might reduce the efficiency with which the visual system performs such analysis. The purpose of this study was to test the hypothesis that patients with glaucoma are impaired on parallel search tasks, and that this would serve to distinguish glaucoma in early cases. Three groups of observers (glaucoma patients, suspect and normal individuals) were examined, using computer-generated flicker, orientation, and vertical motion displacement targets to assess PAVS efficiency. The task required rapid and accurate localisation of a singularity embedded in a field of 119 homogeneous distractors on either the left or right-hand side of a computer monitor. All subjects also completed a choice reaction time (CRT) task. Independent-sample t tests revealed PAVS efficiency to be significantly impaired in the glaucoma group compared with both normal and suspect individuals. Performance was impaired in all types of glaucoma tested. Analysis between normal and suspect individuals revealed a significant difference only for motion displacement response times. Similar analysis using a PAVS/CRT index confirmed the glaucoma findings but also showed statistically significant differences between suspect and normal individuals across all target types. A test of PAVS efficiency appears capable of differentiating early glaucoma from both normal and suspect cases. Analysis incorporating a PAVS/CRT index enhances the diagnostic capacity to differentiate normal from suspect cases.
Casual Video Games as Training Tools for Attentional Processes in Everyday Life.
Stroud, Michael J; Whitbourne, Susan Krauss
2015-11-01
Three experiments examined the attentional components of the popular match-3 casual video game, Bejeweled Blitz (BJB). Attentionally demanding, BJB is highly popular among adults, particularly those in middle and later adulthood. In experiment 1, 54 older adults (mean age = 70.57 years) and 33 younger adults (mean age = 19.82 years) played 20 rounds of BJB, and completed online tasks measuring reaction time, simple visual search, and conjunction visual search. Prior experience significantly predicted BJB scores for younger adults, but for older adults, both prior experience and simple visual search task scores predicted BJB performance. Experiment 2 tested whether BJB practice alone would result in a carryover benefit to a visual search task in a sample of 58 young adults (mean age = 19.57 years) who completed 0, 10, or 30 rounds of BJB followed by a BJB-like visual search task with targets present or absent. Reaction times were significantly faster for participants who completed 30 but not 10 rounds of BJB compared with the search task only. This benefit was evident when targets were both present and absent, suggesting that playing BJB improves not only target detection, but also the ability to quit search effectively. Experiment 3 tested whether the attentional benefit in experiment 2 would apply to non-BJB stimuli. The results revealed a similar numerical but not significant trend. Taken together, the findings suggest there are benefits of casual video game playing to attention and relevant everyday skills, and that these games may have potential value as training tools.
Liang, Linlin; Lan, Feifei; Yin, Xuemei; Ge, Shenguang; Yu, Jinghua; Yan, Mei
2017-09-15
A convenient biosensor for simultaneous multi-analyte detection is increasingly required in biological analysis. A novel flower-like silver (FLS)-enhanced fluorescence/visual bimodal platform for the ultrasensitive detection of multiple miRNAs was successfully constructed for the first time, based on the principle of multi-channel microfluidic paper-based analytical devices (µPADs). Fluorophore-functionalized DNA1 (DNA1-N-CDs) was combined with FLS and hybridized with a quencher-carrying strand (DNA2-CeO2) to form the FLS-enhanced fluorescence biosensor. Upon addition of the target miRNA, the fluorescence intensity of DNA1-N-CDs within the proximity of the FLS was strengthened. The disengaged DNA/CeO2 complex produced a color change after adding H2O2, enabling real-time visual detection of miRNA. If necessary, the fluorescence method could then be applied for an accurate determination. In this strategy, the growth of FLS in µPADs not only reduced the background fluorescence but also provided an enrichment of "hot spots" for surface-enhanced fluorescence detection of miRNAs. Results also showed the versatility of the FLS in enhancing the sensitivity and selectivity of the miRNA biosensor. Remarkably, this biosensor could detect as little as 0.03 fM miRNA210 and 0.06 fM miRNA21. Interestingly, the proposed biosensor could also be recycled over three cycles upon replenishment of the DNA2-CeO2 supplement and substitution of the visual device. This method opens new opportunities for further studies of miRNA-related bioprocesses and will provide a new instrument for the simultaneous detection of multiple low-level biomarkers. Copyright © 2017 Elsevier B.V. All rights reserved.
Wang, Lin; Liu, Zhanmin; Xia, Xueying; Yang, Cuiyun; Huang, Junyi; Wan, Sibao
2017-05-01
Cucumber green mottle mosaic virus (CGMMV) causes a severe mosaic symptom in watermelon and cucumber, and can be transmitted via infected cucumber seeds, leaves and soil. It remains a challenge to detect this virus to prevent its introduction, infection and spread in fields. For this purpose, a simple and sensitive label-free colorimetric detection method for CGMMV has been developed with unmodified gold nanoparticles (AuNPs) as colorimetric probes. The method is based on the finding that the presence of RT-PCR target products of CGMMV and species-specific probes results in a color change of AuNPs from red to blue after NaCl induction. Normally, species-specific probes attach to the surface of AuNPs, thereby increasing their resistance to NaCl-induced aggregation. In our study, the concentrations of sodium and probes in the reaction system were optimized, and the specificity and sensitivity of the assay were evaluated using simply prepared samples. With this assay, as little as 30 pg/μL of CGMMV RNA was detected visually, by the naked eye, without the need for any sophisticated, expensive instrumentation or biochemical reagents. The specificity was 100% and the assay exhibited good reproducibility. The results indicate that this assay is highly species-specific, simple, low-cost, and suitable for easy visual detection of CGMMV in plant tissues. Therefore, the visual assay is a potentially useful tool for small and medium-scale companies and entry-exit inspection and quarantine bureaus to detect CGMMV in cucumber seeds or plant tissues. Copyright © 2017 Elsevier B.V. All rights reserved.
Nkere, Chukwuemeka K; Oyekanmi, Joshua O; Silva, Gonçalo; Bömer, Moritz; Atiri, Gabriel I; Onyeka, Joseph; Maroya, Norbert G; Seal, Susan E; Kumar, P Lava
2018-04-01
A closed-tube reverse transcription loop-mediated isothermal amplification (CT-RT-LAMP) assay was developed for the detection of yam mosaic virus (YMV, genus Potyvirus) infecting yam (Dioscorea spp.). The assay uses a set of six oligonucleotide primers targeting the YMV coat protein region, and the amplification products in YMV-positive samples are visualized by chromogenic detection with SYBR Green I dye. The CT-RT-LAMP assay detected YMV in leaf and tuber tissues of infected plants. The assay is 100 times more sensitive in detecting YMV than standard RT-PCR, while maintaining the same specificity.
Evidence for unlimited capacity processing of simple features in visual cortex
White, Alex L.; Runeson, Erik; Palmer, John; Ernst, Zachary R.; Boynton, Geoffrey M.
2017-01-01
Performance in many visual tasks is impaired when observers attempt to divide spatial attention across multiple visual field locations. Correspondingly, neuronal response magnitudes in visual cortex are often reduced during divided compared with focused spatial attention. This suggests that early visual cortex is the site of capacity limits, where finite processing resources must be divided among attended stimuli. However, behavioral research demonstrates that not all visual tasks suffer such capacity limits: The costs of divided attention are minimal when the task and stimulus are simple, such as when searching for a target defined by orientation or contrast. To date, however, every neuroimaging study of divided attention has used more complex tasks and found large reductions in response magnitude. We bridged that gap by using functional magnetic resonance imaging to measure responses in the human visual cortex during simple feature detection. The first experiment used a visual search task: Observers detected a low-contrast Gabor patch within one or four potentially relevant locations. The second experiment used a dual-task design, in which observers made independent judgments of Gabor presence in patches of dynamic noise at two locations. In both experiments, blood-oxygen level–dependent (BOLD) signals in the retinotopic cortex were significantly lower for ignored than attended stimuli. However, when observers divided attention between multiple stimuli, BOLD signals were not reliably reduced and behavioral performance was unimpaired. These results suggest that processing of simple features in early visual cortex has unlimited capacity. PMID:28654964
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Advancing Water Science through Data Visualization
NASA Astrophysics Data System (ADS)
Li, X.; Troy, T.
2014-12-01
As water scientists, we are increasingly handling larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economy, policy and education. It can enable analysis within research and further data scientists' understanding of behavior and processes and can potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner when a more formal methodology or understanding could potentially significantly improve both research within the academy and outreach to the public. Firstly to broaden and deepen scientific understanding, data visualization can allow for more analyzed targets to be processed simultaneously and can represent the variables effectively, finding patterns, trends and relationships; thus it can even explores the new research direction or branch of water science. Depending on visualization, we can detect and separate the pivotal and trivial influential factors more clearly to assume and abstract the original complex target system. Providing direct visual perception of the differences between observation data and prediction results of models, data visualization allows researchers to quickly examine the quality of models in water science. Secondly data visualization can also improve public awareness and perhaps influence behavior. Offering decision makers clearer perspectives of potential profits of water, data visualization can amplify the economic value of water science and also increase relevant employment rates. Providing policymakers compelling visuals of the role of water for social and natural systems, data visualization can advance the water management and legislation of water conservation. 
By enabling the public to build their own data visualizations through apps and games about water science, we can help them absorb knowledge about water indirectly and raise their awareness of water problems.
Reaching back: the relative strength of the retroactive emotional attentional blink
Ní Choisdealbha, Áine; Piech, Richard M.; Fuller, John K.; Zald, David H.
2017-01-01
Visual stimuli with emotional content appearing in close temporal proximity either before or after a target stimulus can hinder conscious perceptual processing of the target via an emotional attentional blink (EAB). This occurs for targets that appear after the emotional stimulus (forward EAB) and for those appearing before the emotional stimulus (retroactive EAB). Additionally, the traditional attentional blink (AB) occurs because detection of any target hinders detection of a subsequent target. The present study investigated the relations between these different attentional processes. Rapid sequences of landscape images were presented to thirty-one male participants with occasional landscape targets (rotated images). For the forward EAB, emotional or neutral distractor images of people were presented before the target; for the retroactive EAB, such images were also targets and presented after the landscape target. In the latter case, this design allowed investigation of the AB as well. Erotic and gory images caused more EABs than neutral images, but there were no differential effects on the AB. This pattern is striking because while using different target categories (rotated landscapes, people) appears to have eliminated the AB, the retroactive EAB still occurred, offering additional evidence for the power of emotional stimuli over conscious attention. PMID:28255172
Drew, Trafton; Cunningham, Corbin; Wolfe, Jeremy
2012-01-01
Rationale and Objectives: Computer Aided Detection (CAD) systems are intended to improve performance. This study investigates how CAD might actually interfere with a visual search task. This is a laboratory study with implications for clinical use of CAD. Methods: 47 naïve observers in two studies were asked to search for a target embedded in 1/f^2.4 noise while we monitored their eye movements. For some observers, a CAD system marked 75% of targets and 10% of distractors, while other observers completed the study without CAD. In Experiment 1, the CAD system's primary function was to tell observers where the target might be. In Experiment 2, CAD provided information about target identity. Results: In Experiment 1, there was a significant enhancement of observer sensitivity in the presence of CAD (t(22)=4.74, p<.001), but there was also a substantial cost: targets that were not marked by the CAD system were missed more frequently than equivalent targets in No CAD blocks of the experiment (t(22)=7.02, p<.001). Experiment 2 showed no behavioral benefit from CAD, but also no significant cost in sensitivity to unmarked targets (t(22)=0.6, p=n.s.). Finally, in both experiments, CAD produced reliable changes in eye movements: CAD observers examined a lower total percentage of the search area than the No CAD observers (Ex 1: t(48)=3.05, p<.005; Ex 2: t(50)=7.31, p<.001). Conclusions: CAD signals do not combine with observers' unaided performance in a straightforward manner. CAD can engender a sense of certainty that can lead to incomplete search and elevated chances of missing unmarked stimuli. PMID:22958720
Effect of Age and Glaucoma on the Detection of Darks and Lights
Zhao, Linxi; Sendek, Caroline; Davoodnia, Vandad; Lashgari, Reza; Dul, Mitchell W.; Zaidi, Qasim; Alonso, Jose-Manuel
2015-01-01
Purpose We have shown previously that normal observers detect dark targets faster and more accurately than light targets, when presented in noisy backgrounds. We investigated how these differences in detection time and accuracy are affected by age and ganglion cell pathology associated with glaucoma. Methods We asked 21 glaucoma patients, 21 age-similar controls, and 5 young control observers to report as fast as possible the number of 1 to 3 light or dark targets. The targets were positioned at random in a binary noise background, within the central 30° of the visual field. Results We replicate previous findings that darks are detected faster and more accurately than lights. We extend these findings by demonstrating that differences in detection of darks and lights are found reliably across different ages and in observers with glaucoma. We show that differences in detection time increase at a rate of approximately 55 msec/dB at early stages of glaucoma and then remain constant at later stages at approximately 800 msec. In normal subjects, differences in detection time increase with age at a rate of approximately 8 msec/y. We also demonstrate that the accuracy to detect lights and darks is significantly correlated with the severity of glaucoma and that the mean detection time is significantly longer for subjects with glaucoma than age-similar controls. Conclusions We conclude that differences in detection of darks and lights can be demonstrated over a wide range of ages, and asymmetries in dark/light detection increase with age and early stages of glaucoma. PMID:26513506
Effect of Age and Glaucoma on the Detection of Darks and Lights.
Zhao, Linxi; Sendek, Caroline; Davoodnia, Vandad; Lashgari, Reza; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2015-10-01
We have shown previously that normal observers detect dark targets faster and more accurately than light targets, when presented in noisy backgrounds. We investigated how these differences in detection time and accuracy are affected by age and ganglion cell pathology associated with glaucoma. We asked 21 glaucoma patients, 21 age-similar controls, and 5 young control observers to report as fast as possible the number of 1 to 3 light or dark targets. The targets were positioned at random in a binary noise background, within the central 30° of the visual field. We replicate previous findings that darks are detected faster and more accurately than lights. We extend these findings by demonstrating that differences in detection of darks and lights are found reliably across different ages and in observers with glaucoma. We show that differences in detection time increase at a rate of approximately 55 msec/dB at early stages of glaucoma and then remain constant at later stages at approximately 800 msec. In normal subjects, differences in detection time increase with age at a rate of approximately 8 msec/y. We also demonstrate that the accuracy to detect lights and darks is significantly correlated with the severity of glaucoma and that the mean detection time is significantly longer for subjects with glaucoma than age-similar controls. We conclude that differences in detection of darks and lights can be demonstrated over a wide range of ages, and asymmetries in dark/light detection increase with age and early stages of glaucoma.
A new approach for SSVEP detection using PARAFAC and canonical correlation analysis.
Tello, Richard; Pouryazdian, Saeed; Ferreira, Andre; Beheshti, Soosan; Krishnan, Sridhar; Bastos, Teodiano
2015-01-01
This paper presents a new method for automatic detection of SSVEPs through correlation analysis between tensor models. A 3-way EEG tensor of channel × frequency × time is decomposed into its constituent factor matrices using the PARAFAC model. PARAFAC analysis of the EEG tensor decomposes multichannel EEG into constituent temporal, spectral, and spatial signatures. SSVEPs, characterized by localized spectral and spatial signatures, are then detected by exploiting a correlation analysis between the extracted signatures of the EEG tensor and the corresponding simulated signatures of all target SSVEP signals. The SSVEP with the highest correlation is selected as the intended target. Two flickers blinking at 8 and 13 Hz were used as visual stimuli, and detection was performed on 1-second data packets without overlapping. Five subjects participated in the experiments, and the highest classification rate of 83.34% was achieved, corresponding to an Information Transfer Rate (ITR) of 21.01 bits/min.
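A minimal sketch of the final correlation step described above, assuming the spectral signature has already been extracted (e.g., by PARAFAC). The Gaussian template and its width are illustrative assumptions, not the paper's simulated signatures:

```python
import numpy as np

def detect_ssvep(spectral_signature, freqs, targets=(8.0, 13.0)):
    """Pick the SSVEP target whose simulated spectral signature
    correlates best with the signature extracted from the EEG tensor.
    `spectral_signature`: 1-D array over `freqs` (e.g., a PARAFAC
    frequency-mode factor); names here are illustrative."""
    best, best_r = None, -np.inf
    for f in targets:
        # Simulated signature: a narrow Gaussian bump at the
        # stimulation frequency (a real system might add harmonics).
        template = np.exp(-0.5 * ((freqs - f) / 0.5) ** 2)
        r = np.corrcoef(spectral_signature, template)[0, 1]
        if r > best_r:
            best, best_r = f, r
    return best, best_r
```

In use, the spectral factor corresponding to the SSVEP component would be passed in, and the returned frequency is taken as the intended target.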
Probe Scanning Support System by a Parallel Mechanism for Robotic Echography
NASA Astrophysics Data System (ADS)
Aoki, Yusuke; Kaneko, Kenta; Oyamada, Masami; Takachi, Yuuki; Masuda, Kohji
We propose a probe scanning support system based on force/visual servoing control for robotic echography. First, we designed the mechanism and formulated its inverse kinematics. Next, we developed a method for scanning the ultrasound probe over the body surface, constructing a visual servo system based on the acquired echogram so that the standalone medical robot can move the probe over the patient's abdomen in three dimensions. The visual servo system detects local changes of brightness in the time-series echogram, while a conventional force servo system in the robot stabilizes the probe position, compensating not only for periodic respiratory motion but also for body motion. We then integrated the visual servo with the force servo as a hybrid controller of both position and force. To confirm applicability to an actual abdomen, we tested the total system by having it follow the gallbladder as a moving target, keeping its position in the echogram while minimizing variation of the reaction force on the abdomen. The results show that the system has the potential to be applied to automatic detection of human internal organs.
The wisdom of crowds for visual search
Juni, Mordechai Z.; Eckstein, Miguel P.
2017-01-01
Decision-making accuracy typically increases through collective integration of people’s judgments into group decisions, a phenomenon known as the wisdom of crowds. For simple perceptual laboratory tasks, classic signal detection theory specifies the upper limit for collective integration benefits obtained by weighted averaging of people’s confidences, and simple majority voting can often approximate that limit. Life-critical perceptual decisions often involve searching large image data (e.g., medical, security, and aerial imagery), but the expected benefits and merits of using different pooling algorithms are unknown for such tasks. Here, we show that expected pooling benefits are significantly greater for visual search than for single-location perceptual tasks and the prediction given by classic signal detection theory. In addition, we show that simple majority voting obtains inferior accuracy benefits for visual search relative to averaging and weighted averaging of observers’ confidences. Analysis of gaze behavior across observers suggests that the greater collective integration benefits for visual search arise from an interaction between the foveated properties of the human visual system (high foveal acuity and low peripheral acuity) and observers’ nonexhaustive search patterns, and can be predicted by an extended signal detection theory framework with trial to trial sampling from a varying mixture of high and low target detectabilities across observers (SDT-MIX). These findings advance our theoretical understanding of how to predict and enhance the wisdom of crowds for real world search tasks and could apply more generally to any decision-making task for which the minority of group members with high expertise varies from decision to decision. PMID:28490500
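The pooling rules compared above (majority voting versus averaging of continuous responses) can be illustrated with a toy equal-variance SDT simulation. This is a generic single-location sketch, not the paper's SDT-MIX model; d', the number of observers, and the criterion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pooled_decisions(d_prime=1.0, n_observers=7, n_trials=4000):
    """Compare majority voting with averaging of observers' internal
    responses for a yes/no detection task under equal-variance
    Gaussian SDT. Averaging retains graded confidence information
    that binary votes discard."""
    signal = rng.integers(0, 2, n_trials)  # 1 = target present
    # Each observer's internal response: N(d', 1) on signal trials,
    # N(0, 1) on noise trials.
    x = rng.normal(0, 1, (n_observers, n_trials)) + d_prime * signal
    votes = (x > d_prime / 2).astype(int)          # unbiased criterion
    majority = (votes.sum(axis=0) > n_observers / 2).astype(int)
    averaged = (x.mean(axis=0) > d_prime / 2).astype(int)
    return (majority == signal).mean(), (averaged == signal).mean()

maj_acc, avg_acc = pooled_decisions()
```

Under these assumptions averaging outperforms majority voting, consistent with the paper's finding for single-location tasks; the paper's contribution is that the gap behaves differently for search over large images.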
Synthesis and bio-applications of targeted magnetic-fluorescent composite nanoparticles
NASA Astrophysics Data System (ADS)
Xia, Hui; Tong, Ruijie; Song, Yanling; Xiong, Fang; Li, Jiman; Wang, Shichao; Fu, Huihui; Wen, Jirui; Li, Dongze; Zeng, Ye; Zhao, Zhiwei; Wu, Jiang
2017-04-01
Magnetic-fluorescent nanoparticles have a tremendous potential in biology. As the benefits of these materials gained recognition, increasing attention has been given to the conjugation of magnetic-fluorescent nanoparticles with targeting ligands. The magnetic and fluorescent properties of nanoparticles offer several functionalities, including imaging, separation, and visualization, while the presence of a targeting ligand allows for selective cell and tissue targeting. In this review, methods for the synthesis of targeted magnetic-fluorescent nanoparticles are explored, and recent applications of these nanocomposites to the detection and separation of biomolecules, fluorescent and magnetic resonance imaging, and cancer diagnosis and treatment will be summarized. As these materials are further optimized, targeted magnetic-fluorescent nanoparticles hold great promise for the diagnosis and treatment of some diseases.
Individual differences in working memory capacity and workload capacity.
Yu, Ju-Chi; Chang, Ting-Yun; Yang, Cheng-Ta
2014-01-01
We investigated the relationship between working memory capacity (WMC) and workload capacity (WLC). Each participant performed an operation span (OSPAN) task to measure his/her WMC and three redundant-target detection tasks to measure his/her WLC. WLC was computed non-parametrically (Experiments 1 and 2) and parametrically (Experiment 2). Both levels of analyses showed that participants high in WMC had larger WLC than those low in WMC only when redundant information came from visual and auditory modalities, suggesting that high-WMC participants had superior processing capacity in dealing with redundant visual and auditory information. This difference was eliminated when multiple processes required processing for only a single working memory subsystem in a color-shape detection task and a double-dot detection task. These results highlighted the role of executive control in integrating and binding information from the two working memory subsystems for perceptual decision making.
Chromatic Perceptual Learning but No Category Effects without Linguistic Input.
Grandison, Alexandra; Sowden, Paul T; Drivonikou, Vicky G; Notman, Leslie A; Alexander, Iona; Davies, Ian R L
2016-01-01
Perceptual learning involves an improvement in perceptual judgment with practice, which is often specific to stimulus or task factors. Perceptual learning has been shown on a range of visual tasks but very little research has explored chromatic perceptual learning. Here, we use two low level perceptual threshold tasks and a supra-threshold target detection task to assess chromatic perceptual learning and category effects. Experiment 1 investigates whether chromatic thresholds reduce as a result of training and at what level of analysis learning effects occur. Experiment 2 explores the effect of category training on chromatic thresholds, whether training of this nature is category specific and whether it can induce categorical responding. Experiment 3 investigates the effect of category training on a higher level, lateralized target detection task, previously found to be sensitive to category effects. The findings indicate that performance on a perceptual threshold task improves following training but improvements do not transfer across retinal location or hue. Therefore, chromatic perceptual learning is category specific and can occur at relatively early stages of visual analysis. Additionally, category training does not induce category effects on a low level perceptual threshold task, as indicated by comparable discrimination thresholds at the newly learned hue boundary and adjacent test points. However, category training does induce emerging category effects on a supra-threshold target detection task. Whilst chromatic perceptual learning is possible, learnt category effects appear to be a product of left hemisphere processing, and may require the input of higher level linguistic coding processes in order to manifest.
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
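The track-before-detect idea described here can be sketched as velocity-matched frame integration ("shift-and-add"): sum the frame stack along each candidate velocity so that a matched target accumulates while noise averages out. The function and parameters below are an illustration, not the report's actual algorithm:

```python
import numpy as np

def track_before_detect(frames, velocities):
    """Velocity-matched integration: for each hypothesized velocity,
    shift every frame back along that motion and sum. A target moving
    at a matched velocity accumulates over the stack, so its SNR grows
    with the number of frames and detection can succeed below
    single-frame thresholds. `frames`: (n_frames, height, width)."""
    n, h, w = frames.shape
    best_score, best_v = -np.inf, None
    for vy, vx in velocities:
        acc = np.zeros((h, w))
        for t, frame in enumerate(frames):
            # Shift frame t back along the hypothesized motion
            # (pixels per frame), aligning the target across frames.
            shift = (-int(round(vy * t)), -int(round(vx * t)))
            acc += np.roll(frame, shift, axis=(0, 1))
        score = acc.max() / n
        if score > best_score:
            best_score, best_v = score, (vy, vx)
    return best_v, best_score
```

Detection and track initiation then amount to thresholding the best score and reading off the winning velocity hypothesis.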
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Adzemovic, Milena Z; Zeitelhofer, Manuel; Leisser, Marianne; Köck, Ulricke; Kury, Angela; Olsson, Tomas
2016-11-14
Immunohistochemistry (IHC) provides highly specific, reliable and attractive protein visualization. Correct performance and interpretation of an IHC-based multicolor labeling is challenging, especially when utilized for assessing interrelations between target proteins in the tissue with a high fat content such as the central nervous system (CNS). Our protocol represents a refinement of the standard immunolabeling technique particularly adjusted for detection of both structural and soluble proteins in the rat CNS and peripheral lymph nodes (LN) affected by neuroinflammation. Nonetheless, with or without further modifications, our protocol could likely be used for detection of other related protein targets, even in other organs and species than here presented.
Analysis of EEG Related Saccadic Eye Movement
NASA Astrophysics Data System (ADS)
Funase, Arao; Kuno, Yoshiaki; Okuma, Shigeru; Yagi, Tohru
Our final goal is to establish a model of saccadic eye movement that connects the saccade and the electroencephalogram (EEG). As a first step toward this goal, we recorded and analyzed saccade-related EEG. In the study reported in this paper, we attempted to detect EEG activity peculiar to eye movement. In these experiments, each subject was instructed to point their eyes toward visual targets (LEDs) or the direction of sound sources (buzzers). In the control cases, the EEG was recorded with no eye movements. As a result, in the visual experiments, we found that the EEG potential changed sharply over the occipital lobe just before eye movement. Similar results were observed in the auditory experiments. In the visual and auditory experiments without eye movement, no such sharp change in the EEG was observed. Moreover, when the subject moved his/her eyes toward a right-side target, a change in EEG potential was found over the right occipital lobe; on the contrary, when the subject moved his/her eyes toward a left-side target, a sharp change in EEG potential was found over the left occipital lobe.
Maravall, Darío; de Lope, Javier; Fuentes, Juan P
2017-01-01
We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.
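The entropy criterion described above (low entropy suggesting a single object, high entropy a cluttered scene) can be sketched as follows; the 4-bit threshold is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's intensity
    histogram. Low entropy suggests a single dominant object
    (candidate landmark); high entropy suggests several different
    objects."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def classify_region(gray, threshold=4.0):
    """Illustrative decision rule: low-entropy regions are treated as
    landmark candidates, high-entropy regions as obstacles."""
    return "landmark-candidate" if image_entropy(gray) < threshold else "obstacle"
```

In the visual bug algorithm, a landmark candidate would then be matched against the visual topological map; otherwise the robot starts a collision-avoidance maneuver.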
Cameron, E Leslie; Tai, Joanna C; Eckstein, Miguel P; Carrasco, Marisa
2004-01-01
Adding distracters to a display impairs performance on visual tasks (i.e. the set-size effect). While keeping the display characteristics constant, we investigated this effect in three tasks: 2 target identification, yes-no detection with 2 targets, and 8-alternative localization. A Signal Detection Theory (SDT) model, tailored for each task, accounts for the set-size effects observed in identification and localization tasks, and slightly under-predicts the set-size effect in a detection task. Given that sensitivity varies as a function of spatial frequency (SF), we measured performance in each of these three tasks in neutral and peripheral precue conditions for each of six spatial frequencies (0.5-12 cpd). For all spatial frequencies tested, performance on the three tasks decreased as set size increased in the neutral precue condition, and the peripheral precue reduced the effect. Larger set-size effects were observed at low SFs in the identification and localization tasks. This effect can be described using the SDT model, but was not predicted by it. For each of these tasks we also established the extent to which covert attention modulates performance across a range of set sizes. A peripheral precue substantially diminished the set-size effect and improved performance, even at set size 1. These results provide support for distracter exclusion, and suggest that signal enhancement may also be a mechanism by which covert attention can impose its effect.
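The set-size effect that the SDT model accounts for can be illustrated with a Monte Carlo max-rule simulation: with more displayed locations, more noise draws can exceed the criterion, so accuracy falls. This is a generic sketch, not the task-specific models fitted in the paper; d', the criterion, and trial counts are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_accuracy(set_size, d_prime=1.5, n_trials=20000):
    """Monte Carlo SDT max rule for yes/no detection with at most one
    target among `set_size` locations: respond 'present' if the
    maximum internal response across locations exceeds a criterion."""
    present = rng.integers(0, 2, n_trials)           # 1 = target present
    responses = rng.normal(0, 1, (n_trials, set_size))
    responses[present == 1, 0] += d_prime            # target in one location
    decision = (responses.max(axis=1) > d_prime / 2).astype(int)
    return (decision == present).mean()
```

Comparing `detection_accuracy(1)` with `detection_accuracy(8)` reproduces the qualitative set-size effect: accuracy drops as distracter locations are added even though d' per location is unchanged.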
Javier, David J.; Castellanos-Gonzalez, Alejandro; Weigum, Shannon E.; White, A. Clinton; Richards-Kortum, Rebecca
2009-01-01
We report on a novel strategy for the detection of mRNA targets derived from Cryptosporidium parvum oocysts by the use of oligonucleotide-gold nanoparticles. Gold nanoparticles are functionalized with oligonucleotides which are complementary to unique sequences present on the heat shock protein 70 (HSP70) DNA/RNA target. The results indicate that the presence of HPS70 targets of increasing complexity causes the formation of oligonucleotide-gold nanoparticle networks which can be visually monitored via a simple colorimetric readout measured by a total internal reflection imaging setup. Furthermore, the induced expression of HSP70 mRNA in Cryptosporidium parvum oocysts via a simple heat shock process provides nonenzymatic amplification such that the HSP70 mRNA derived from as few as 5 × 103 purified C. parvum oocysts was successfully detected. Taken together, these results support the use of oligonucleotide-gold nanoparticles for the molecular diagnosis of cryptosporidiosis, offering new opportunities for the further development of point-of-care diagnostic assays with low-cost, robust reagents and simple colorimetric detection. PMID:19828740
Study of target and non-target interplay in spatial attention task.
Sweeti; Joshi, Deepak; Panigrahi, B K; Anand, Sneh; Santhosh, Jayasree
2018-02-01
Selective visual attention is the ability to selectively pay attention to targets while inhibiting distractors. This paper studies the interplay of targets and non-targets in a spatial attention task in which the subject attends to a target object present in one visual hemifield and ignores a distractor present in the other visual hemifield. We perform averaged event-related potential (ERP) analysis and time-frequency analysis. The ERP analysis supports left-hemisphere superiority in late potentials for targets present in the right visual hemifield. The time-frequency analysis yields two parameters: event-related spectral perturbation (ERSP) and inter-trial coherence (ITC). These parameters show the same properties for targets present in either visual hemifield but differ when comparing activity corresponding to targets and non-targets. In this way, the study helps to visualise the differences between targets present in the left and right visual hemifields, and also between targets and non-targets present in the left and right visual hemifields. These results could be used to monitor subjects' performance in brain-computer interfaces (BCI) and neurorehabilitation.
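Of the two time-frequency parameters mentioned, ITC has a particularly compact definition: the magnitude of the across-trials mean of unit phase vectors at a given frequency. A minimal single-frequency sketch (ERSP omitted; the demodulation approach here is one common way to obtain the phase):

```python
import numpy as np

def inter_trial_coherence(trials, fs, freq):
    """Inter-trial coherence (ITC) at one frequency: 1 means the phase
    at that frequency is identical across trials, values near 0 mean
    random phase. `trials`: (n_trials, n_samples) array, `fs` the
    sampling rate in Hz."""
    n_trials, n_samples = trials.shape
    t = np.arange(n_samples) / fs
    # Single-frequency DFT (complex demodulation) of each trial.
    coeff = trials @ np.exp(-2j * np.pi * freq * t)
    phases = coeff / np.abs(coeff)   # unit phase vector per trial
    return float(np.abs(phases.mean()))
```

Phase-locked trials give ITC near 1; trials with random phase give ITC near 0, which is how target-locked and non-target activity can be contrasted.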
The mechanisms of collinear integration.
Cass, John; Alais, David
2006-08-11
Low-contrast visual contour fragments are easier to detect when presented in the context of nearby collinear contour elements (U. Polat & D. Sagi, 1993). The spatial and temporal determinants of this collinear facilitation have been studied extensively (J. R. Cass & B. Spehar, 2005; Y. Tanaka & D. Sagi, 1998; C. B. Williams & R. F. Hess, 1998), although considerable debate surrounds the neural mechanisms underlying it. Our study examines this question using a novel stimulus, whereby the flanking "contour" elements are rotated around their own axis. By measuring contrast detection thresholds to a brief foveal target presented at various phases of flanker rotation, we find peak facilitation after flankers have rotated beyond their collinear phase. This optimal facilitative delay increases monotonically as a function of target-flanker separation, yielding estimates of cortical propagation of 0.1 m/s, a value highly consistent with the dynamics of long-range horizontal interactions observed within primary visual cortex (V1). A curious new finding is also observed: Facilitative peaks also occur when the target flash precedes flanker collinearity by 20-80 ms, a range consistent with contrast-dependent cortical onset latencies. Together, these data suggest that collinear facilitation involves two separate mechanisms, each possessing distinct dynamics: (i) slowly propagating horizontal interactions within V1 and (ii) a faster integrative mechanism, possibly driven by synchronous collinear cortical onset.
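A propagation-speed estimate like the 0.1 m/s quoted above can be derived from the delay-versus-separation relationship: convert the target-flanker separation from degrees of visual angle to cortical millimetres via a magnification factor, then divide by the optimal facilitative delay. All numbers below are illustrative assumptions, not values reported in the abstract:

```python
# Hypothetical worked example of the cortical-speed calculation.
separation_deg = 2.0             # target-flanker separation (assumed)
magnification_mm_per_deg = 3.0   # parafoveal V1 magnification (assumed)
delay_s = 0.060                  # optimal facilitative delay (assumed)

cortical_distance_m = separation_deg * magnification_mm_per_deg * 1e-3
speed = cortical_distance_m / delay_s   # metres per second
```

With these assumed values the estimate comes out at 0.1 m/s, the order of magnitude the authors report for long-range horizontal interactions in V1; the monotonic growth of delay with separation is what licenses reading the slope as a propagation speed.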
Effect of display size on visual attention.
Chen, I-Ping; Liao, Chia-Ning; Yeh, Shih-Hao
2011-06-01
Attention plays an important role in the design of human-machine interfaces. However, current knowledge about attention is largely based on data obtained when using devices of moderate display size. With advancement in display technology comes the need for understanding attention behavior over a wider range of viewing sizes. The effect of display size on test participants' visual search performance was studied. The participants (N = 12) performed two types of visual search tasks, that is, parallel and serial search, under three display-size conditions (16 degrees, 32 degrees, and 60 degrees). Serial, but not parallel, search was affected by display size. In the serial task, mean reaction time for detecting a target increased with the display size.
2017-12-01
… values designating each stimulus as a target (true) or nontarget (false). Both stim_time and stim_label should have length equal to the number of … depend strongly on the true values of hit rate and false-alarm rate. Based on its better estimation of hit rate and false-alarm rate, the regression …
Thinking of God Moves Attention
ERIC Educational Resources Information Center
Chasteen, Alison L.; Burdzy, Donna C.; Pratt, Jay
2010-01-01
The concepts of God and Devil are well known across many cultures and religions, and often involve spatial metaphors, but it is not well known if our mental representations of these concepts affect visual cognition. To examine if exposure to divine concepts produces shifts of attention, participants completed a target detection task in which they…
Robust visual tracking using a contextual boosting approach
NASA Astrophysics Data System (ADS)
Jiang, Wanyue; Wang, Yin; Wang, Daobo
2018-03-01
In recent years, detection-based image trackers have been gaining ground rapidly, thanks to their capacity to incorporate a variety of image features. Nevertheless, tracking performance might be compromised if background regions are mislabeled as foreground in the training process. To resolve this problem, we propose an online visual tracking algorithm designed to improve training-label accuracy in the learning phase. In the proposed method, superpixels are used as samples, and their ambiguous labels are reassigned in accordance with both prior estimation and contextual information. The location and scale of the target are usually determined by a confidence map, which tends to shrink because background regions are inevitably incorporated into the bounding box. To address this dilemma, we propose a cross-projection scheme that projects the confidence map for target detection. Moreover, the performance of the proposed tracker can be further improved by adding rigid-structure information. The proposed method is evaluated on the OTB benchmark and the VOT2016 benchmark. Compared with other trackers, the results are competitive.
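One simple way to localize a target from a 2-D confidence map is to project the map onto each axis and keep the spans where the projections stay above a fraction of their peaks. This is an illustration of projection-based localization under stated assumptions, not the authors' exact cross-projection scheme:

```python
import numpy as np

def box_from_confidence(conf, frac=0.5):
    """Estimate a target bounding box (x0, y0, x1, y1) from a 2-D
    confidence map by projecting it onto the rows and columns and
    keeping the span where each projection exceeds `frac` of its
    peak. The 0.5 fraction is an illustrative choice."""
    rows = conf.sum(axis=1)   # projection onto the vertical axis
    cols = conf.sum(axis=0)   # projection onto the horizontal axis

    def span(p):
        idx = np.flatnonzero(p >= frac * p.max())
        return int(idx[0]), int(idx[-1])

    (y0, y1), (x0, x1) = span(rows), span(cols)
    return x0, y0, x1, y1
```

Deriving the box from axis projections rather than from the raw thresholded map is one way to keep the estimated scale from collapsing toward the confidence peak.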
Developing and evaluating a target-background similarity metric for camouflage detection.
Lin, Chiuhsiang Joe; Chang, Chi-Chan; Liu, Bor-Shong
2014-01-01
Measurement of camouflage performance is of fundamental importance for military stealth applications. The goal of camouflage assessment algorithms is to automatically assess the effect of camouflage in agreement with human detection responses. In a previous study, we found that the Universal Image Quality Index (UIQI) correlated well with psychophysical measures and could be a potential camouflage assessment tool. In this study, we quantify the relationship between the camouflage similarity index and the psychophysical results. We compare several image quality indexes for computational evaluation of camouflage effectiveness, present the results of an extensive human visual experiment conducted to evaluate the performance of several camouflage assessment algorithms, and analyze the strengths and weaknesses of these algorithms. The experimental data demonstrate the effectiveness of the approach: the correlation coefficient of the UIQI was higher than those of the other methods, and this approach was highly correlated with the human target-searching results. The method is an objective and effective camouflage performance evaluation because it considers the human visual system and image structure, making it consistent with the subjective evaluation results.
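The UIQI itself is a well-defined published quantity (Wang and Bovik's universal index, combining correlation, luminance, and contrast terms), so a minimal single-window version can be sketched. The patch data below are synthetic, and the function assumes non-flat patches (nonzero variance and mean):

```python
import numpy as np

def uiqi(x, y):
    """Universal Image Quality Index between two equally sized
    grayscale patches; 1.0 means identical structure, luminance,
    and contrast.  Global (single-window) variant; assumes the
    patches are not perfectly flat."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 4 * cov * mx * my / ((vx + vy) * (mx**2 + my**2))

rng = np.random.default_rng(0)
target = rng.uniform(50, 200, size=(16, 16))
print(round(uiqi(target, target), 6))  # identical patches -> 1.0
print(uiqi(target, target + rng.normal(0, 30, target.shape)) < 1.0)  # True
```

For camouflage assessment, `x` and `y` would be a target patch and its surrounding background: values near 1 indicate the target closely resembles the background, predicting longer human search times.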
Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms
Calzolari, Elena; Albini, Federica; Bolognini, Nadia; Vallar, Giuseppe
2017-01-01
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets, while wearing optical prisms displacing the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory – audio-visual – targets in the adaptation phase were used, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produces proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as the typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), is sufficient to bring about AEs, as compared to the standard 92-pointings procedure. Finally, pointings to auditory targets cause AEs, although PA with a reduced number of pointings (24) to auditory targets brings about smaller AEs, as compared to the 92-pointings procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the building up of the AEs, than the sensorimotor pointing activity per se. 
These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits. PMID:29213233
Qing, Zhihe; Mao, Zhengui; Qing, Taiping; He, Xiaoxiao; Zou, Zhen; He, Dinggeng; Shi, Hui; Huang, Jin; Liu, Jianbo; Wang, Kemin
2014-11-18
Motivated by the importance of developing strategies for copper(II) (Cu(2+)) detection, we report a visual and portable method based on a strip-like hydrogel. The hydrogel is functionalized by caging poly(thymine) probes, which effectively template the formation of fluorescent copper nanoparticles (CuNPs) in the presence of a reductant (ascorbate) and Cu(2+). Uniform microliter-volume wells (microwells) are printed on the hydrogel surface for sample injection. When an injected sample contains Cu(2+), fluorescent CuNPs are templated in situ by the poly(thymine) in the hydrogel. Under ultraviolet (UV) irradiation, the red fluorescence of the CuNPs can be observed by the naked eye and recorded with an ordinary camera, without complicated instruments. The strategy thus integrates sample injection, reaction, and indication with fast signal response, providing an add-and-read format for visual and portable Cu(2+) detection. A minimum detectable concentration of 20 μM and practically useful properties, such as resistance to environmental interference and good stability, have been demonstrated, indicating that the strategy holds great potential for widespread Cu(2+) detection, especially in remote regions. We believe the strip-like hydrogel methodology is also applicable to other targets by altering the probes.
Carlisle, Nancy B.; Woodman, Geoffrey F.
2014-01-01
Biased competition theory proposes that representations in working memory drive visual attention to select similar inputs. However, behavioral tests of this hypothesis have led to mixed results. These inconsistent findings could be due to the inability of behavioral measures to reliably detect the early, automatic effects on attentional deployment that the memory representations exert. Alternatively, executive mechanisms may govern how working memory representations influence attention based on higher-level goals. In the present study, we tested these hypotheses using the N2pc component of participants' event-related potentials (ERPs) to directly measure the early deployments of covert attention. Participants searched for a target in an array that sometimes contained a memory-matching distractor. In Experiments 1-3, we manipulated the difficulty of the target discrimination and the proximity of distractors, but consistently observed that covert attention was deployed to the search targets and not the memory-matching distractors. In Experiment 4, we showed that when participants' goal involved attending to memory-matching items, these items elicited a large and early N2pc. Our findings demonstrate that working memory representations alone are not sufficient to guide early deployments of visual attention to matching inputs and that goal-dependent executive control mediates the interactions between working memory representations and visual attention. PMID:21254796
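The N2pc is conventionally computed as the contralateral-minus-ipsilateral voltage difference at posterior electrodes in roughly the 200-300 ms window. A minimal sketch with invented toy waveforms (the window and amplitudes are illustrative, not the study's values):

```python
import numpy as np

def n2pc_amplitude(contra, ipsi, times, window=(0.2, 0.3)):
    """Mean contralateral-minus-ipsilateral difference (µV) over the
    N2pc window (seconds).  `contra`/`ipsi` are ERP waveforms from
    posterior electrodes; `times` is the matching time axis."""
    diff = np.asarray(contra) - np.asarray(ipsi)
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()

# toy ERPs: a -1.5 µV contralateral negativity from 200-300 ms
times = np.arange(0.0, 0.5, 0.002)
ipsi = np.zeros_like(times)
contra = np.where((times >= 0.2) & (times <= 0.3), -1.5, 0.0)
print(n2pc_amplitude(contra, ipsi, times))  # -> -1.5
```

A reliably negative value for memory-matching distractors would indicate covert attention shifted toward them; near-zero values, as the study reports for Experiments 1-3, indicate it did not.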
NASA Astrophysics Data System (ADS)
Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.
2017-08-01
Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from 'small target motion detector' neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments: low-contrast targets, heavily cluttered environments, and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
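The facilitation stage described above, a slow build-up of response to targets on long continuous trajectories, can be caricatured with a leaky trace that multiplies the raw detector output. The gain and decay values here are illustrative assumptions, not the model's parameters:

```python
def facilitate(responses, gain=1.0, decay=0.7):
    """Slow build-up ('facilitation') of a detector response: each
    frame's raw response is boosted by a leaky trace of past activity
    at that location, so long continuous trajectories win out over
    transient clutter."""
    trace = 0.0
    out = []
    for r in responses:
        out.append(r * (1.0 + gain * trace))
        trace = decay * trace + r  # leaky accumulation of activity
    return out

# a target present on every frame grows; a one-frame flash does not
sustained = facilitate([1.0] * 5)
flash = facilitate([0.0, 0.0, 1.0, 0.0, 0.0])
print(sustained[-1] > sustained[0])  # True: response builds up
print(max(flash) == 1.0)             # True: no build-up for a flash
```

The slow decay is also what helps with platform vibration: brief trajectory interruptions do not reset the trace, so a genuine target keeps its advantage over background texture.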
Detection and Delineation of Oral Cancer With a PARP1-Targeted Optical Imaging Agent.
Kossatz, Susanne; Weber, Wolfgang; Reiner, Thomas
2017-01-01
More sensitive and specific methods for early detection are imperative to improve survival rates in oral cancer. However, oral cancer detection is still largely based on visual examination and histopathology of biopsy material, offering no molecular selectivity or spatial resolution. Intuitively, the addition of optical contrast could improve oral cancer detection and delineation, but so far no molecularly targeted approach has been translated. Our fluorescently labeled small-molecule inhibitor PARPi-FL binds to the DNA repair enzyme poly(ADP-ribose)polymerase 1 (PARP1) and is a potential diagnostic aid for oral cancer delineation. Based on our preclinical work, a clinical phase I/II trial opened in March 2017 to evaluate PARPi-FL as a contrast agent for oral cancer imaging. In this commentary, we discuss why we chose PARP1 as a biomarker for tumor detection and which particular characteristics make PARPi-FL an excellent candidate to image PARP1 in optically guided applications. We also comment on the potential benefits of our molecularly targeted PARPi-FL-guided imaging approach in comparison to existing oral cancer screening adjuncts and mention the adaptability of PARPi-FL imaging to other environments and tumor types.
Steinkohl, F; Luger, A; Bektic, J; Aigner, F
2017-08-01
Prostate cancer is the most frequent cancer in men. The diagnosis is normally achieved by systematic prostate biopsy; however, this is a random sampling approach by which a substantial number of significant carcinomas go undetected. For this reason, imaging techniques that enable visualization and therefore targeted biopsies have been continuously developed in recent years. Systematic biopsy is the standard procedure for the detection of prostate cancer. The quality of biopsies can be increased if the prostate is examined for suspected cancerous alterations during the biopsy, which can be carried out using multiparametric transrectal ultrasound. Multiparametric ultrasound within the framework of a targeted biopsy increases the detection rate of significant prostate carcinomas with a simultaneous decrease in the detection of insignificant carcinomas; however, the diagnostic reliability and the evidence level of multiparametric transrectal ultrasound are not yet sufficiently high to replace systematic biopsy. In the hands of a well-trained examiner, multiparametric transrectal ultrasound is a good method for the detection of prostate carcinomas, and with further technical development of ultrasound technology the detection rate will presumably increase further.
Zimmer, Ulrike; Höfler, Margit; Koschutnig, Karl; Ischebeck, Anja
2016-07-01
For survival, it is necessary to attend quickly towards dangerous objects, but to turn away from something that is disgusting. We tested whether fear and disgust sounds direct spatial attention differently. Using fMRI, a sound cue (disgust, fear or neutral) was presented to the left or right ear. The cue was followed by a visual target (a small arrow) located on the same (valid) or opposite (invalid) side as the cue. Participants were required to decide whether the arrow pointed up- or downwards while ignoring the sound cue. Behaviorally, responses were faster for invalid compared to valid targets when cued by disgust, whereas the opposite pattern was observed for targets after fearful and neutral sound cues. During target presentation, activity in the visual cortex and IPL increased for targets invalidly cued with disgust, but for targets validly cued with fear, indicating a general modulation of activation due to attention. For the TPJ, an interaction in the opposite direction was observed, consistent with its role in detecting targets at unattended positions and in relocating attention. As a whole, our results indicate that a disgusting sound directs spatial attention away from its location, in contrast to fearful and neutral sounds.
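The behavioral pattern reduces to a standard cue-validity effect: mean invalid reaction time minus mean valid reaction time. A minimal sketch with invented reaction times (`statistics.fmean` is the standard-library float mean, Python 3.8+):

```python
import statistics

def validity_effect(rt_valid, rt_invalid):
    """Cue-validity effect in ms: mean invalid RT minus mean valid RT.
    Positive -> attention drawn toward the cued side (the fear/neutral
    pattern); negative -> attention directed away (the disgust pattern)."""
    return statistics.fmean(rt_invalid) - statistics.fmean(rt_valid)

# illustrative RTs in ms, not the study's data
print(validity_effect([420, 430, 425], [450, 455, 460]))  # -> 30.0 (toward cue)
print(validity_effect([455, 450, 460], [430, 425, 420]))  # -> -30.0 (away)
```

Computing the effect separately per cue emotion, as the study does, turns the sign flip between disgust and fear/neutral cues into a directly testable interaction.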
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Corcobado, Guadalupe; Trillo, Alejandro
2017-01-01
Our understanding of how floral visitors integrate visual and olfactory cues when seeking food, and how background complexity affects flower detection is limited. Here, we aimed to understand the use of visual and olfactory information for bumblebees (Bombus terrestris terrestris L.) when seeking flowers in a visually complex background. To explore this issue, we first evaluated the effect of flower colour (red and blue), size (8, 16 and 32 mm), scent (presence or absence) and the amount of training on the foraging strategy of bumblebees (accuracy, search time and flight behaviour), considering the visual complexity of our background, to later explore whether experienced bumblebees, previously trained in the presence of scent, can recall and make use of odour information when foraging in the presence of novel visual stimuli carrying a familiar scent. Of all the variables analysed, flower colour had the strongest effect on the foraging strategy. Bumblebees searching for blue flowers were more accurate, flew faster, followed more direct paths between flowers and needed less time to find them, than bumblebees searching for red flowers. In turn, training and the presence of odour helped bees to find inconspicuous (red) flowers. When bees foraged on red flowers, search time increased with flower size; but search time was independent of flower size when bees foraged on blue flowers. Previous experience with floral scent enhances the capacity of detection of a novel colour carrying a familiar scent, probably by elemental association influencing attention. PMID:28898287
Does the perception of moving eyes trigger reflexive visual orienting in autism?
Swettenham, John; Condie, Samantha; Campbell, Ruth; Milne, Elizabeth; Coleman, Mike
2003-01-01
Does movement of the eyes in one or another direction function as an automatic attentional cue to a location of interest? Two experiments explored the directional movement of the eyes in a full face for speed of detection of an aftercoming location target in young people with autism and in control participants. Our aim was to investigate whether a low-level perceptual impairment underlies the delay in gaze following characteristic of autism. The participants' task was to detect a target appearing on the left or right of the screen either 100 ms or 800 ms after a face cue appeared with eyes averting to the left or right. Despite instructions to ignore eye-movement in the face cue, people with autism and control adolescents were quicker to detect targets that had been preceded by an eye movement cue congruent with target location compared with targets preceded by an incongruent eye movement cue. The attention shifts are thought to be reflexive because the cue was to be ignored, and because the effect was found even when cue-target duration was short (100 ms). Because (experiment two) the effect persisted even when the face was inverted, it would seem that the direction of movement of eyes can provide a powerful (involuntary) cue to a location. PMID:12639330
No attentional capture from invisible flicker
Alais, David; Locke, Shannon M.; Leung, Johahn; Van der Burg, Erik
2016-01-01
We tested whether fast flicker can capture attention using eight flicker frequencies from 20–96 Hz, including several too high to be perceived (>50 Hz). Using a 480 Hz visual display rate, we presented smoothly sampled sinusoidal temporal modulations at: 20, 30, 40, 48, 60, 69, 80, and 96 Hz. We first established flicker detection rates for each frequency. Performance was at or near ceiling until 48 Hz and dropped sharply to chance level at 60 Hz and above. We then presented the same flickering stimuli as pre-cues in a visual search task containing five elements. Flicker location varied randomly and was therefore congruent with target location on 20% of trials. Comparing congruent and incongruent trials revealed a very strong congruency effect (faster search for cued targets) for all detectable frequencies (20–48 Hz) but no effect for faster flicker rates that were detected at chance. This pattern of results (obtained with brief flicker cues: 58 ms) was replicated for long flicker cues (1000 ms) intended to allow for entrainment to the flicker frequency. These results indicate that only visible flicker serves as an exogenous attentional cue and that flicker rates too high to be perceived are completely ineffective. PMID:27377759
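Rendering a smooth sinusoidal flicker on a 480 Hz display amounts to sampling the sinusoid once per frame. A minimal sketch (the function name and luminance defaults are assumptions; the 480 Hz display rate and the 20-96 Hz range are from the study):

```python
import math

def flicker_frames(freq_hz, n_frames, display_hz=480, mean=0.5, amp=0.5):
    """Per-frame luminance (0..1) for a sinusoidal flicker sampled at
    the display refresh rate, as used to render smooth 20-96 Hz
    modulations on a 480 Hz display."""
    return [mean + amp * math.sin(2 * math.pi * freq_hz * i / display_hz)
            for i in range(n_frames)]

# a 48 Hz flicker completes one cycle every 480/48 = 10 frames
frames = flicker_frames(48, 10)
print(round(frames[0], 6))  # -> 0.5 (mean luminance at phase zero)
print(all(0.0 <= f <= 1.0 for f in frames))  # True: valid luminances
```

Sampling at 480 Hz keeps even the 96 Hz condition well below the Nyquist limit (240 Hz), so each cycle is still represented by five smoothly spaced samples rather than a square-wave approximation.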
NASA Astrophysics Data System (ADS)
Kim, Hyonchol; Terazono, Hideyuki; Hayashi, Masahito; Takei, Hiroyuki; Yasuda, Kenji
2012-06-01
A method of gold nanoparticle (Au NP) labeling with backscattered electron (BE) imaging of field emission scanning electron microscopy (FE-SEM) was applied for specific detection of target biomolecules on a cell surface. A single-stranded DNA aptamer, which specifically binds to the target molecule on a human acute lymphoblastic leukemia cell, was conjugated with a 20 nm Au NP and used as a probe to label its target molecule on the cell. The Au NP probe was incubated with the cell, and the interaction was confirmed using BE imaging of FE-SEM through direct counting of the number of Au NPs attached on the target cell surface. Specific Au NP-aptamer probes were observed on a single cell surface and their spatial distributions including submicron-order localizations were also clearly visualized, whereas the nonspecific aptamer probes were not observed on it. The aptamer probe can be potentially dislodged from the cell surface with treatment of nucleases, indicating that Au NP-conjugated aptamer probes can be used as sensitive and reversible probes to label target biomolecules on cells.
Li, Meng; Wang, Thomas D
2011-01-01
Endoscopy has undergone explosive technological growth in recent years, and with the emergence of targeted imaging, its truly transformative power and impact in medicine lie just over the horizon. Today, our ability to see inside the digestive tract with medical endoscopy is headed toward exciting crossroads. The existing paradigm of making diagnostic decisions based on observing structural changes and identifying anatomical landmarks may soon be replaced by visualizing functional properties and imaging molecular expression. In this novel approach, the presence of intracellular and cell surface targets unique to disease is identified and used to predict the likelihood of mucosal transformation and response to therapy. This strategy can result in the development of new methods for early cancer detection, personalized therapy, and chemoprevention. This targeted approach will require further development of molecular probes and endoscopic instruments, and will need support from the FDA for streamlined regulatory oversight. Overall, this molecular imaging modality promises to significantly broaden the capabilities of the gastroenterologist by providing a new approach to visualize the mucosa of the digestive tract in a manner that has never been seen before. PMID:19423025
Saiki, Jun
2002-01-01
Research on change blindness and transsaccadic memory has revealed that a limited amount of information is retained across visual disruptions in visual working memory. It has been proposed that visual working memory can hold four to five coherent object representations. To investigate their maintenance and transformation in dynamic situations, I devised an experimental paradigm called multiple-object permanence tracking (MOPT) that measures memory for multiple feature-location bindings in dynamic situations. Observers were asked to detect any color switch in the middle of a regular rotation of a pattern with multiple colored disks behind an occluder. Color-switch detection performance declined dramatically as the pattern rotation velocity increased, and this effect of object motion was independent of the number of targets. A version of the MOPT task with various shapes and colors showed that color-shape conjunctions are not available. These results suggest that even completely predictable motion severely reduces the capacity of object representations, from four to only one or two.
Visual attentional bias for food in adolescents with binge-eating disorder.
Schmidt, Ricarda; Lüthold, Patrick; Kittel, Rebekka; Tetzlaff, Anne; Hilbert, Anja
2016-09-01
Evidence suggests that adults with binge-eating disorder (BED) are prone to having their attention interfered with by food cues, and that food-related attentional biases are associated with calorie intake and eating disorder psychopathology. For adolescents with BED, experimental evidence on attentional processing of food cues is lacking. Using eye tracking and a visual search task, the present study examined visual orienting and disengagement processes for food in youth with BED. Eye-movement data and reaction times were recorded in 25 adolescents (12-20 years) with BED and 25 controls (CG) individually matched for sex, age, body mass index, and socio-economic status. During a free exploration paradigm, the BED group showed a greater gaze duration bias for food images than the CG. Groups did not differ in gaze direction biases. In a visual search task, the BED group showed a greater detection bias for food targets than the CG. Group differences were more pronounced for personally attractive than unattractive food images. Regarding clinical associations, only in the BED group was the gaze duration bias for food associated with increased hunger and lower body mass index, and the detection bias for food targets with greater reward sensitivity. The study provides first evidence of an attentional bias to food in adolescents with BED. However, more research is needed to further specify disengagement and orienting processes in adolescent BED, including overt and covert attention, and their prospective associations with binge-eating behaviors and associated psychopathology.
Tactile cueing effects on performance in simulated aerial combat with high acceleration.
van Erp, Jan B F; Eriksson, Lars; Levin, Britta; Carlander, Otto; Veltman, J A; Vos, Wouter K
2007-12-01
Recent evidence indicates that vibrotactile displays can potentially reduce the risk of sensory and cognitive overload. Before these displays can be introduced in super agile aircraft, it must be ascertained that vibratory stimuli can be sensed and interpreted by pilots subjected to high G loads. Each of 9 pilots intercepted 32 targets in the Swedish Dynamic Flight Simulator. Targets were indicated on simulated standard Gripen visual displays. In addition, in half of the trials target direction was also displayed on a 60-element tactile torso display. Performance measures and subjective ratings were recorded. Each pilot pulled G peaks above +8 Gz. With tactile cueing present, mean reaction time was reduced from 1458 ms (SE = 54) to 1245 ms (SE = 88). Mean total chase time for targets that popped up behind the pilot's aircraft was reduced from 13 s (SE = 0.45) to 12 s (SE = 0.41). Pilots rated the tactile display favorably over the visual displays at target pop-up on the easiness of detecting a threat presence and on the clarity of initial position of the threats. This study is the first to show that tactile display information is perceivable and useful in hypergravity (up to +9 Gz). The results show that the tactile display can capture attention at threat pop-up and improve threat awareness for threats in the back, even in the presence of high-end visual displays. It is expected that the added value of tactile displays may further increase after formal training and in situations of unexpected target pop-up.
Fuggetta, Giorgio; Duke, Philip A
2017-05-01
The operation of attention on visible objects involves a sequence of cognitive processes. The current study firstly aimed to elucidate the effects of practice on neural mechanisms underlying attentional processes as measured with both behavioural and electrophysiological measures. Secondly, it aimed to identify any pattern in the relationship between Event-Related Potential (ERP) components which play a role in the operation of attention in vision. Twenty-seven participants took part in two recording sessions one week apart, performing an experimental paradigm which combined a match-to-sample task with a memory-guided efficient visual-search task within one trial sequence. Overall, practice decreased behavioural response times, increased accuracy, and modulated several ERP components that represent cognitive and neural processing stages. This neuromodulation through practice was also associated with an enhanced link between behavioural measures and ERP components and with an enhanced cortico-cortical interaction of functionally interconnected ERP components. Principal component analysis (PCA) of the ERP amplitude data revealed three components, having different rostro-caudal topographic representations. The first component included both the centro-parietal and parieto-occipital mismatch triggered negativity - involved in integration of visual representations of the target with current task-relevant representations stored in visual working memory - loaded with second negative posterior-bilateral (N2pb) component, involved in categorising specific pop-out target features. The second component comprised the amplitude of bilateral anterior P2 - related to detection of a specific pop-out feature - loaded with bilateral anterior N2, related to detection of conflicting features, and fronto-central mismatch triggered negativity. 
The third component included the parieto-occipital N1 - related to early neural responses to the stimulus array - which loaded with the second negative posterior-contralateral (N2pc) component, mediating the process of orienting and focusing covert attention on peripheral target features. We discuss these three components as representing different neurocognitive systems, modulated with practice, within which the input selection process operates.
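The PCA step can be sketched with a plain SVD over a participants-by-ERP-measures amplitude matrix. The data below are random stand-ins (27 rows only echoes the study's sample size; the six amplitude measures are hypothetical):

```python
import numpy as np

def pca_components(amplitudes, n_components=3):
    """PCA over an (observations x ERP-measure) amplitude matrix,
    sketching how component loadings can be extracted; returns the
    leading loadings and their explained-variance ratios."""
    X = amplitudes - amplitudes.mean(axis=0)   # center each measure
    # SVD of the centered data: rows of Vt are the principal axes
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / np.sum(s**2)
    return Vt[:n_components], var[:n_components]

rng = np.random.default_rng(1)
# 27 participants x 6 hypothetical ERP amplitude measures
X = rng.normal(size=(27, 6))
loadings, explained = pca_components(X, n_components=3)
print(loadings.shape)               # -> (3, 6)
print(0.0 < explained.sum() < 1.0)  # leading 3 PCs explain part of the variance
```

Inspecting which ERP measures load together on each axis is what supports interpretations like the paper's grouping of mismatch negativities with the N2pb.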
The Efficiency of a Visual Skills Training Program on Visual Search Performance
Krzepota, Justyna; Zwierko, Teresa; Puchalska-Niedbał, Lidia; Markiewicz, Mikołaj; Florkiewicz, Beata; Lubiński, Wojciech
2015-01-01
In this study, we conducted an experiment in which we analyzed the possibility of developing visual skills through specifically targeted training of visual search. The aim of our study was to investigate whether, for how long and to what extent a training program for visual functions could improve visual search. The study involved 24 healthy students from Szczecin University, divided into an experimental group (12) and a control group (12). In addition to the regular sports and recreational activities of the curriculum, the subjects of the experimental group also participated in 8 weeks of visual function training, 3 times a week for 45 min. The Signal Test of the Vienna Test System was performed four times: before entering the study, after the first 4 weeks of the experiment, immediately after its completion, and 4 weeks after the study terminated. The results of this experiment showed that the 8-week perceptual training program significantly changed the time course of visual detection times. For the visual detection time changes, the first factor, Group, was significant as a main effect (F(1,22)=6.49, p<0.05), as was the second factor, Training (F(3,66)=5.06, p<0.01). The interaction between the two factors (Group vs. Training) was F(3,66)=6.82 (p<0.001). Similarly, for the number of correct reactions, there was a main effect of Group (F(1,22)=23.40, p<0.001), a main effect of Training (F(3,66)=11.60, p<0.001) and a significant interaction between factors (Group vs. Training) (F(3,66)=10.33, p<0.001). Our study suggests that 8-week training of visual functions can improve visual search performance. PMID:26240666
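From reported F ratios and their degrees of freedom, a standard effect-size estimate can be recovered: partial eta-squared = F*df1 / (F*df1 + df2). A small helper, applied here to the Group main effect reported above (the effect-size values are derived, not reported in the abstract):

```python
def partial_eta_squared(f_ratio, df_effect, df_error):
    """Partial eta-squared recovered from a reported F ratio and its
    degrees of freedom: F*df1 / (F*df1 + df2)."""
    return f_ratio * df_effect / (f_ratio * df_effect + df_error)

# Group main effect for detection time: F(1,22) = 6.49
print(round(partial_eta_squared(6.49, 1, 22), 3))    # -> 0.228
# Group main effect for correct reactions: F(1,22) = 23.40
print(partial_eta_squared(23.40, 1, 22) > 0.5)       # True: a large effect
```

This conversion is useful when comparing the training effects across the two dependent measures, since raw F values with different error terms are not directly comparable.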
Towner, Rheal A; Smith, Nataliya; Tesiram, Yasvir A; Abbott, Andrew; Saunders, Debbie; Blindauer, Rebecca; Herlea, Oana; Silasi-Mansat, Robert; Lupu, Florea
2007-01-01
The multifunctional growth factor scatter factor/hepatocyte growth factor and its tyrosine kinase receptor, c-MET, have been implicated in the genesis and malignant progression of numerous human malignancies, including hepatocellular carcinomas. The incidence of hepatocellular carcinomas in the United States has increased noticeably over the past two decades and is listed as the fifth major cancer in men worldwide. In this study, we used a choline-deficient l-amino acid (CDAA)-defined rat hepatocarcinogenesis model to visualize increased in vivo expression of the c-MET antigen in neoplastic lesion formation with the use of a superparamagnetic iron oxide (SPIO)-anti-c-MET molecularly targeted magnetic resonance imaging (MRI) contrast agent. SPIO-anti-c-MET was used for the first time to detect overexpression of c-MET in neoplastic nodules and tumors within the livers of CDAA-treated rats, as determined by a decrease in MRI signal intensity and a decrease in regional T(2) values. Specificity for the binding of the molecularly targeted anti-c-MET contrast agent was determined using rat hepatoma (H4-II-E-C3) cell cultures and immunofluorescence microscopic imaging of the targeting agents within neoplastic liver tissue 1 to 2 hours following intravenous administration of SPIO-anti-c-MET and MRI investigation. This method has the ability to visualize in vivo the overexpression of c-MET at early developmental stages of tumor formation.
NASA Astrophysics Data System (ADS)
Glickman, Randolph D.; Harrison, Joseph M.; Zwick, Harry; Longbotham, Harold G.; Ballentine, Charles S.; Pierce, Bennie
1996-04-01
Although visual function following retinal laser injuries has traditionally been assessed by measuring visual acuity, this measure only indicates the highest spatial frequency resolvable under high-contrast viewing conditions. Another visual psychophysical parameter is contrast sensitivity (CS), which measures the minimum contrast required for detection of targets over a range of spatial frequencies, and may evaluate visual mechanisms that do not directly subserve acuity. We used the visual evoked potential (VEP) to measure CS in a population of normal subjects and in patients with ophthalmic conditions affecting retinal function, including one patient with a laser injury in the macula. In this patient, the acuity had recovered from
NASA Astrophysics Data System (ADS)
Joshi, Bishnu P.; Miller, Sharon J.; Lee, Cameron; Gustad, Adam; Seibel, Eric J.; Wang, Thomas D.
2012-02-01
We demonstrate a multi-spectral scanning fiber endoscope (SFE) that collects fluorescence images in vivo from three target peptides that bind specifically to murine colonic adenomas. This ultrathin endoscope was demonstrated in a genetically engineered mouse model of spontaneous colorectal adenomas based on somatic Apc (adenomatous polyposis coli) gene inactivation. The SFE delivers excitation at 440, 532, and 635 nm with <2 mW per channel. The target 7-mer peptides were conjugated to visible organic dyes, including 7-diethylaminocoumarin-3-carboxylic acid (DEAC) (λex=432 nm, λem=472 nm), 5-carboxytetramethylrhodamine (5-TAMRA) (λex=535 nm, λem=568 nm), and CF-633 (λex=633 nm, λem=650 nm). Target peptides were first validated using pfu counting, flow cytometry, and previously established methods of fluorescence endoscopy. Peptides were applied individually or in combination and detected with fluorescence imaging. Concurrent imaging of multiple fluorescence channels was successful for all three channels in vitro, while two channels were resolved simultaneously in vivo. The peptides bound selectively to adenomas and not to adjacent normal-appearing mucosa. Multispectral wide-field fluorescence detection using the SFE is achievable, and this technology has the potential to advance early cancer detection and image-guided therapy in human patients by simultaneously visualizing multiple overexpressed molecular targets unique to dysplasia.
Barua, Animesh; Yellapa, Aparna; Bahr, Janice M; Adur, Malavika K; Utterback, Chet W; Bitterman, Pincas; Basu, Sanjib; Sharma, Sameer; Abramowicz, Jacques S
2015-01-01
Limited resolution of transvaginal ultrasound (TVUS) scanning is a significant barrier to early detection of ovarian cancer (OVCA). Contrast agents have been suggested to improve the resolution of TVUS scanning. Emerging evidence suggests that expression of interleukin 16 (IL-16) by the tumor epithelium and microvessels increases in association with OVCA development and offers a potential target for early OVCA detection. The goal of this study was to examine the feasibility of IL-16-targeted contrast agents in enhancing the intensity of ultrasound imaging from ovarian tumors in hens, a model of spontaneous OVCA. Contrast agents were developed by conjugating biotinylated anti-IL-16 antibodies with streptavidin coated microbubbles. Enhancement of ultrasound signal intensity was determined before and after injection of contrast agents. Following scanning, ovarian tissues were processed for the detection of IL-16 expressing cells and microvessels. Compared with precontrast, contrast imaging enhanced ultrasound signal intensity significantly in OVCA hens at early (P < 0.05) and late stages (P < 0.001). Higher intensities of ultrasound signals in OVCA hens were associated with increased frequencies of IL-16 expressing cells and microvessels. These results suggest that IL-16-targeted contrast agents improve the visualization of ovarian tumors. The laying hen may be a suitable model to test new imaging agents and develop targeted anti-OVCA therapeutics.
Lidar detection algorithm for time and range anomalies.
Ben-David, Avishai; Davidson, Charles E; Vanderbeek, Richard G
2007-10-10
A new detection algorithm for lidar applications has been developed. The detection is based on hyperspectral anomaly detection that is implemented for time anomaly, where the question "is a target (aerosol cloud) present at range R within time t1 to t2" is addressed, and for range anomaly, where the question "is a target present at time t within ranges R1 and R2" is addressed. A detection score significantly different in magnitude from the detection scores for background measurements suggests that an anomaly (interpreted as the presence of a target signal in space/time) exists. The algorithm employs an option for a preprocessing stage where undesired oscillations and artifacts are filtered out with a low-rank orthogonal projection technique. The filtering technique adaptively removes the 1/R² dependence of the background contribution of the lidar signal and also aids visualization of features in the data when the signal-to-noise ratio is low. A Gaussian-mixture probability model for two hypotheses (anomaly present or absent) is computed with an expectation-maximization algorithm to produce a detection threshold and probabilities of detection and false alarm. Results of the algorithm for CO2 lidar measurements of bioaerosol clouds Bacillus atrophaeus (formerly known as Bacillus subtilis var. niger, BG) and Pantoea agglomerans, Pa (formerly known as Erwinia herbicola, Eh) are shown and discussed.
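The two-hypothesis Gaussian-mixture step can be sketched with a plain expectation-maximization loop on one-dimensional detection scores. This is a generic illustration of the technique, not the authors' implementation; the initialization, iteration count, and decision rule are our assumptions:

```python
import numpy as np

def fit_two_gaussians(scores, n_iter=100):
    """EM fit of a 2-component 1-D Gaussian mixture (background vs. anomaly)."""
    x = np.asarray(scores, float)
    mu = np.array([x.min(), x.max()])           # crude spread-apart initialization
    var = np.full(2, x.var() + 1e-9)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each score
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances from responsibilities
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
    return w, mu, var

def is_anomaly(score, w, mu, var):
    """Flag a score when the high-mean component is the more probable one.
    (Component 1 starts at the larger mean, so it plays the anomaly role here.)"""
    pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(score - mu) ** 2 / (2 * var))
    return pdf[1] > pdf[0]
```

The crossing point where the two weighted densities are equal plays the role of the detection threshold, and the component tail areas on either side of it give the corresponding detection and false-alarm probabilities.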
Fiber-optic microarray for simultaneous detection of multiple harmful algal bloom species.
Ahn, Soohyoun; Kulis, David M; Erdner, Deana L; Anderson, Donald M; Walt, David R
2006-09-01
Harmful algal blooms (HABs) are a serious threat to coastal resources, causing a variety of impacts on public health, regional economies, and ecosystems. Plankton analysis is a valuable component of many HAB monitoring and research programs, but the diversity of plankton poses a problem in discriminating toxic from nontoxic species using conventional detection methods. Here we describe a sensitive and specific sandwich hybridization assay that combines fiber-optic microarrays with oligonucleotide probes to detect and enumerate the HAB species Alexandrium fundyense, Alexandrium ostenfeldii, and Pseudo-nitzschia australis. Microarrays were prepared by loading oligonucleotide probe-coupled microspheres (diameter, 3 μm) onto the distal ends of chemically etched imaging fiber bundles. Hybridization of target rRNA from HAB cells to immobilized probes on the microspheres was visualized using Cy3-labeled secondary probes in a sandwich-type assay format. We applied these microarrays to the detection and enumeration of HAB cells in both cultured and field samples. Our study demonstrated a detection limit of approximately 5 cells for all three target organisms within 45 min, without a separate amplification step, in both sample types. We also developed a multiplexed microarray to detect the three HAB species simultaneously, which successfully detected the target organisms, alone and in combination, without cross-reactivity. Our study suggests that fiber-optic microarrays can be used for rapid and sensitive detection and potential enumeration of HAB species in the environment.
A neural model of the temporal dynamics of figure-ground segregation in motion perception.
Raudies, Florian; Neumann, Heiko
2010-03-01
How does the visual system manage to segment a visual scene into surfaces and objects and attend to a target object? Based on psychological and physiological investigations, it has been proposed that the perceptual organization and segmentation of a scene is achieved by processing at different levels of the visual cortical hierarchy. According to this view, motion onset detection, motion-defined shape segregation, and target selection are accomplished by processes which bind together simple features into fragments of increasingly complex configurations at different levels in the processing hierarchy. As an alternative to this hierarchical processing hypothesis, it has been proposed that the processing stages for feature detection and segregation are reflected in different temporal episodes in the response patterns of individual neurons. Such temporal episodes have been observed in the activation pattern of neurons as early as area V1. Here, we present a neural network model of motion detection, figure-ground segregation, and attentive selection which explains these response patterns in a unifying framework. Based on known principles of the functional architecture of the visual cortex, we propose that initial motion and motion boundaries are detected at different and hierarchically organized stages in the dorsal pathway. Visual shapes that are defined by boundaries generated from juxtaposed opponent motions are represented at different stages in the ventral pathway. Model areas in the different pathways interact through feedforward and modulating feedback, while mutual interactions enable communication between motion and form representations. Selective attention is devoted to shape representations by sending modulating feedback signals from higher levels (working memory) to intermediate levels to enhance their responses. Areas in the motion and form pathways are coupled through top-down feedback with V1 cells at the bottom end of the hierarchy. 
We propose that the different temporal episodes in the response pattern of V1 cells, as recorded in recent experiments, reflect the strength of modulating feedback signals. This feedback results from the consolidated shape representations derived from coherent motion patterns and the attentive modulation of responses along the cortical hierarchy. The model makes testable predictions concerning the duration and delay of the temporal episodes of V1 cell responses, as well as the response variations caused by modulating feedback signals.
Directed area search using socio-biological vision algorithms and cognitive Bayesian reasoning
NASA Astrophysics Data System (ADS)
Medasani, S.; Owechko, Y.; Allen, D.; Lu, T. C.; Khosla, D.
2010-04-01
Volitional search systems that assist the analyst by searching for specific targets or objects such as vehicles, factories, and airports in wide-area overhead imagery need to overcome multiple problems present in current manual and automatic approaches. These problems include finding targets hidden in terabytes of information, relatively few pixels on targets, long intervals between interesting regions, time-consuming analysis requiring many analysts, no a priori representative examples or templates of interest, detecting multiple classes of objects, and the need for very high detection rates and very low false alarm rates. This paper describes a conceptual analyst-centric framework that utilizes existing technology modules to search for and locate occurrences of targets of interest (e.g., buildings, mobile targets of military significance, factories, nuclear plants) in video imagery of large areas. Our framework takes simple queries from the analyst and finds the queried targets with minimal analyst interaction. It uses a hybrid approach that combines biologically inspired bottom-up attention, socio-biologically inspired object recognition for volitionally recognizing targets, and hierarchical Bayesian networks for modeling and representing domain knowledge. This approach has the benefits of high accuracy and a low false alarm rate, and it can handle both low-level visual information and high-level domain knowledge in a single framework. Such a system would be of immense help for search and rescue efforts, intelligence gathering, change detection systems, and other surveillance systems.
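The bottom-up attention stage in frameworks of this kind typically rests on center-surround contrast. A toy numpy sketch, with box filters standing in for the Gaussian pyramids used in the biologically inspired saliency literature and all parameter values our own assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter; a cheap stand-in for a Gaussian blur."""
    ker = np.ones(k) / k
    out = np.apply_along_axis(lambda v: np.convolve(v, ker, mode="same"), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, ker, mode="same"), 0, out)

def saliency_map(img, center=3, surround=15):
    """Rectified center-surround difference, normalized to roughly [0, 1].
    Pixels that stand out from their local neighborhood score highest."""
    s = np.abs(box_blur(img, center) - box_blur(img, surround))
    return s / (s.max() + 1e-12)
```

The most salient locations found this way would then seed the downstream object-recognition and Bayesian-reasoning stages, so the analyst never inspects the full image pixel by pixel.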
Dew inspired breathing-based detection of genetic point mutation visualized by naked eye
Xie, Liping; Wang, Tongzhou; Huang, Tianqi; Hou, Wei; Huang, Guoliang; Du, Yanan
2014-01-01
A novel label-free method based on breathing-induced vapor condensation was developed for detection of genetic point mutation. The dew-inspired detection was realized by integration of target-induced DNA ligation with rolling circle amplification (RCA). The vapor condensation induced by breathing transduced the RCA-amplified variances in DNA contents into visible contrast. The image could be recorded by a cell phone for further or even remote analysis. This green assay offers a naked-eye-reading method potentially applied for point-of-care liver cancer diagnosis in resource-limited regions. PMID:25199907
NASA Astrophysics Data System (ADS)
Guo, Longhua; Xu, Shaohua; Ma, Xiaoming; Qiu, Bin; Lin, Zhenyu; Chen, Guonan
2016-09-01
Colorimetric enzyme-linked immunosorbent assay (ELISA) utilizing 3,3′,5,5′-tetramethylbenzidine (TMB) as the chromogenic substrate has been widely used in hospitals for the detection of all kinds of disease biomarkers. Herein, we demonstrate a strategy to change this single-color display into dual-color responses to improve the accuracy of visual inspection. Our investigation first reveals that the oxidation state of TMB (TMB2+) can quantitatively etch gold nanoparticles. Therefore, incorporating gold nanoparticles into a commercial TMB-based ELISA kit generates dual-color responses: the solution color varies gradually from wine red (absorption peak at ~530 nm) to colorless, and then from colorless to yellow (absorption peak at ~450 nm) with increasing target concentration. These dual-color responses effectively improve the sensitivity as well as the accuracy of visual inspection. For example, the proposed dual-color plasmonic ELISA is demonstrated for the detection of prostate-specific antigen (PSA) in human serum with a visual limit of detection (LOD) as low as 0.0093 ng/mL.
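A limit of detection like the one quoted here is conventionally estimated from the blank signal; a minimal sketch of the common "mean blank + 3 SD" rule mapped through a linear calibration (the numbers, slope, and intercept below are illustrative placeholders, not the paper's data):

```python
import statistics

def lod_concentration(blank_readings, slope, intercept):
    """LOD as the concentration whose expected signal equals mean(blank) + 3*SD(blank),
    assuming a linear calibration: signal = slope * concentration + intercept."""
    mean_blank = statistics.mean(blank_readings)
    sd_blank = statistics.stdev(blank_readings)   # sample standard deviation
    return (mean_blank + 3 * sd_blank - intercept) / slope
```

For a visual readout like the dual-color scheme above, the "signal" could be any scalar extracted from the image, e.g. the red-channel intensity recorded by the cell phone.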
Attentional Capture by Salient Color Singleton Distractors Is Modulated by Top-Down Dimensional Set
ERIC Educational Resources Information Center
Muller, Hermann J.; Geyer, Thomas; Zehetleitner, Michael; Krummenacher, Joseph
2009-01-01
Three experiments examined whether salient color singleton distractors automatically interfere with the detection of singleton form targets in visual search (e.g., J. Theeuwes, 1992), or whether the degree of interference is top-down modulable. In Experiments 1 and 2, observers started with a pure block of trials, which contained either never a…
[Imaging Mass Spectrometry in Histopathologic Analysis].
Yamazaki, Fumiyoshi; Seto, Mitsutoshi
2015-04-01
Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) enables visualization of the distribution of a range of biomolecules by integrating biochemical information from mass spectrometry with positional information from microscopy. IMS can identify target molecules. In addition, IMS enables global analysis of biomolecules, including unknown ones, by detecting the mass-to-charge ratio without any predefined target, which makes it possible to identify novel molecules. IMS generates data on the distribution of lipids and small molecules in tissues, which is difficult to visualize with either conventional counter-staining or immunohistochemistry. In this review, we first introduce the principle of imaging mass spectrometry and recent advances in sample preparation methods. Second, we present findings regarding biological samples, especially pathological ones. Finally, we discuss the limitations and problems of the IMS technique and its clinical application, such as in drug development.
Is countershading camouflage robust to lighting change due to weather?
Penacchio, Olivier; Lovell, P George; Harris, Julie M
2018-02-01
Countershading is a pattern of coloration thought to have evolved in order to implement camouflage. By adopting a pattern of coloration that makes the surface facing towards the sun darker and the surface facing away from the sun lighter, the overall amount of light reflected off an animal can be made more uniformly bright. Countershading could hence contribute to visual camouflage by increasing background matching or reducing cues to shape. However, the usefulness of countershading is constrained by a particular pattern delivering 'optimal' camouflage only for very specific lighting conditions. In this study, we test the robustness of countershading camouflage to lighting change due to weather, using human participants as a 'generic' predator. In a simulated three-dimensional environment, we constructed an array of simple leaf-shaped items and a single ellipsoidal target 'prey'. We set these items in two light environments: strongly directional 'sunny' and more diffuse 'cloudy'. The target object was given the optimal pattern of countershading for one of these two environment types or displayed a uniform pattern. By measuring detection time and accuracy, we explored whether and how target detection depended on the match between the pattern of coloration on the target object and scene lighting. Detection times were longest when the countershading was appropriate to the illumination; incorrectly camouflaged targets were detected with a similar pattern of speed and accuracy to uniformly coloured targets. We conclude that structural changes in light environment, such as caused by differences in weather, do change the effectiveness of countershading camouflage.
Heenehan, Heather L; Tyne, Julian A; Bejder, Lars; Van Parijs, Sofie M; Johnston, David W
2016-07-01
Effective decision making to protect coastally associated dolphins relies on monitoring the presence of animals in areas that are critical to their survival. Hawaiian spinner dolphins forage at night and rest during the day in shallow bays. Due to their predictable presence, they are targeted by dolphin-tourism. In this study, comparisons of presence were made between passive acoustic monitoring (PAM) and vessel-based visual surveys in Hawaiian spinner dolphin resting bays. DSG-Ocean passive acoustic recording devices were deployed in four bays along the Kona Coast of Hawai'i Island between January 8, 2011 and August 30, 2012. The devices sampled at 80 kHz, making 30-s recordings every four minutes. Overall, dolphins were acoustically detected on 37.1% to 89.6% of recording days depending on the bay. Vessel-based visual surveys overlapped with the PAM surveys on 202 days across the four bays. No significant differences were found between visual and acoustic detections suggesting acoustic surveys can be used as a proxy for visual surveys. Given the need to monitor dolphin presence across sites, PAM is the most suitable and efficient tool for monitoring long-term presence/absence. Concomitant photo-identification surveys are necessary to address changes in abundance over time.
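The abstract does not name the paired test used to compare daily visual and acoustic detections; a McNemar-style comparison on the discordant days is one plausible sketch for this kind of paired presence/absence data (the counts passed in below are invented, not the study's):

```python
import math

def mcnemar(b, c):
    """Continuity-corrected McNemar test on the two discordant cell counts:
    b = days with a visual detection only, c = days with an acoustic detection only.
    Returns (chi-square statistic, two-sided p-value) with 1 degree of freedom."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(stat / 2))   # chi-square(1 df) upper-tail probability
    return stat, p
```

For example, `mcnemar(10, 20)` asks whether 10 visual-only days against 20 acoustic-only days is more imbalance than chance; a large p-value, as reported in this study, supports using acoustic surveys as a proxy for visual ones.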