Sample records for object location detection

  1. Point pattern match-based change detection in a constellation of previously detected objects

    DOEpatents

    Paglieroni, David W.

    2016-06-07

    A method and system are provided that apply attribute- and topology-based change detection to objects that were detected on previous scans of a medium. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, detection strength, size, elongation, orientation, etc. The locations define a three-dimensional network topology forming a constellation of previously detected objects. The change detection system stores attributes of the previously detected objects in a constellation database. The change detection system detects changes by comparing the attributes and topological consistency of newly detected objects encountered during a new scan of the medium to previously detected objects in the constellation database. The change detection system may receive the attributes of the newly detected objects as the objects are detected by an object detection system in real time.
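
    To make the comparison step concrete, here is a minimal, hypothetical sketch of matching newly detected objects against a stored "constellation" of previous detections by location and one attribute. The field names, distance gate, and size tolerance are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: compare new detections against a constellation database
# of previously detected objects (location + attributes). Fields and thresholds
# are illustrative assumptions only, not the patented algorithm.
import numpy as np
from scipy.spatial import cKDTree

previous = [  # constellation database entries
    {"loc": (2.0, 1.5, 0.3), "size": 0.4, "strength": 0.9},
    {"loc": (8.1, 4.2, 0.2), "size": 0.7, "strength": 0.6},
]
tree = cKDTree([p["loc"] for p in previous])

def classify(new_obj, gate=0.5, size_tol=0.2):
    """Label a newly detected object as 'existing', 'changed', or 'new'."""
    dist, idx = tree.query(new_obj["loc"])
    if dist > gate:                       # nothing in the constellation nearby
        return "new"
    if abs(new_obj["size"] - previous[idx]["size"]) > size_tol:
        return "changed"                  # nearby object, but attributes differ
    return "existing"

print(classify({"loc": (2.1, 1.4, 0.3), "size": 0.45}))  # -> existing
```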

  2. Guidance of attention to objects and locations by long-term memory of natural scenes.

    PubMed

    Becker, Mark W; Rasmussen, Ian P

    2008-11-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.

  3. Salient object detection method based on multiple semantic features

    NASA Astrophysics Data System (ADS)

    Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei

    2018-04-01

    Existing salient object detection models can only detect the approximate location of a salient object, or they highlight the background. To resolve this problem, a salient object detection method based on image semantic features was proposed. First, three novel saliency features were presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, the multiple saliency features were trained with random detection windows. Third, a naive Bayes model was used to combine these features for saliency detection. Results on public datasets showed that the method performed well: the location of the salient object can be fixed, and the salient object can be accurately detected and marked by a specific window.
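
    As a rough illustration of the feature-combination step, the sketch below trains a Gaussian naive Bayes classifier on three per-window scores (standing in for EF, CF, and LF) and picks the highest-scoring window. The features, labels, and classifier settings are synthetic placeholders, not the paper's trained model.

```python
# Minimal sketch: combine three per-window saliency features with naive Bayes.
# Feature values and training labels here are synthetic placeholders.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# columns: edge density (EF), convex-hull semantic (CF), lightness contrast (LF)
X_train = rng.random((200, 3))
y_train = (X_train.sum(axis=1) > 1.5).astype(int)   # 1 = window contains a salient object

model = GaussianNB().fit(X_train, y_train)

windows = rng.random((50, 3))                        # features of candidate windows
scores = model.predict_proba(windows)[:, 1]          # salient-object probability
best = int(np.argmax(scores))
print("most salient window:", best, "score:", round(float(scores[best]), 3))
```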

  4. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    ERIC Educational Resources Information Center

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  5. Assisting People with Disabilities in Actively Performing Designated Occupational Activities with Battery-Free Wireless Mice to Control Environmental Stimulation

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2013-01-01

    Recent studies use software technology (object location detection programs, OLDP) to turn a commercial high-technology product, i.e. a battery-free wireless mouse, into a high-performance, precise object location detector that senses whether or not an object has been placed in the designated location. The preferred environmental stimulation is…

  6. The effects of changes in object location on object identity detection: A simultaneous EEG-fMRI study.

    PubMed

    Yang, Ping; Fan, Chenggui; Wang, Min; Fogelson, Noa; Li, Ling

    2017-08-15

    Object identity and location are bound together to form a unique integration that is maintained and processed in visual working memory (VWM). Changes in task-irrelevant object location have been shown to impair the retrieval of memorial representations and the detection of object identity changes. However, the neural correlates of this cognitive process remain largely unknown. In the present study, we aim to investigate the underlying brain activation during object color change detection and the modulatory effects of changes in object location and VWM load. To this end we used simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings, which can reveal the neural activity with both high temporal and high spatial resolution. Subjects responded faster and with greater accuracy in the repeated compared to the changed object location condition, when a higher VWM load was utilized. These results support the spatial congruency advantage theory and suggest that it is more pronounced with higher VWM load. Furthermore, the spatial congruency effect was associated with larger posterior N1 activity, greater activation of the right inferior frontal gyrus (IFG) and less suppression of the right supramarginal gyrus (SMG), when object location was repeated compared to when it was changed. The ERP-fMRI integrative analysis demonstrated that the object location discrimination-related N1 component is generated in the right SMG. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Sex Differences in Object Location Memory: The Female Advantage of Immediate Detection of Changes

    ERIC Educational Resources Information Center

    Honda, Akio; Nihei, Yoshiaki

    2009-01-01

    Object location memory has been considered the only spatial ability in which females display an advantage over males. We examined sex differences in long-term object location memory. After participants studied an array of objects, they were asked to recall the locations of these objects three minutes later or one week later. Results showed a…

  8. Determining root correspondence between previously and newly detected objects

    DOEpatents

    Paglieroni, David W.; Beer, N Reginald

    2014-06-17

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  9. Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location

    PubMed Central

    Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene

    2017-01-01

    Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005

  10. Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location

    PubMed Central

    Fiebelkorn, Ian C.; Saalmann, Yuri B.; Kastner, Sabine

    2013-01-01

    The brain directs its limited processing resources through various selection mechanisms, broadly referred to as attention. The present study investigated the temporal dynamics of two such selection mechanisms: space- and object-based selection. Previous evidence has demonstrated that preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations, if those locations are part of the same object (i.e., resulting in object-based selection). But little is known about the relationship between these fundamental selection mechanisms. Here, we used human behavioral data to determine how space- and object-based selection simultaneously evolve under conditions that promote sustained attention at a cued location, varying the cue-to-target interval from 300 to 1100 ms. We tracked visual-target detection at a cued location (i.e., space-based selection), at an uncued location that was part of the same object (i.e., object-based selection), and at an uncued location that was part of a different object (i.e., in the absence of space- and object-based selection). The data demonstrate that even under static conditions, there is a moment-to-moment reweighting of attentional priorities based on object properties. This reweighting is revealed through rhythmic patterns of visual-target detection both within (at 8 Hz) and between (at 4 Hz) objects. PMID:24316204

  11. [The role of sustained attention in shift-contingent change blindness].

    PubMed

    Nakashima, Ryoichi; Yokosawa, Kazuhiko

    2015-02-01

    Previous studies of change blindness have examined the effect of temporal factors (e.g., blank duration) on attention in change detection. This study examined the effect of spatial factors (i.e., whether the locations of original and changed objects are the same or different) on attention in change detection, using a shift-contingent change blindness task. We used a flicker paradigm in which the location of a to-be-judged target image was manipulated (shift, no-shift). In shift conditions, the image of an array of objects was spatially shifted so that all objects appeared in new locations; in no-shift conditions, all object images of an array appeared at the same location. The presence of visual stimuli (dots) in the blank display between the two images was manipulated (dot, no-dot) under the assumption that abrupt onsets of these stimuli would capture attention. Results indicated that change detection performance was improved by exogenous attentional capture in the shift condition. Thus, we suggest that attention can play an important role in change detection during shift-contingent change blindness.

  12. Attribute and topology based change detection in a constellation of previously detected objects

    DOEpatents

    Paglieroni, David W.; Beer, Reginald N.

    2016-01-19

    A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.

  13. Small target detection using objectness and saliency

    NASA Astrophysics Data System (ADS)

    Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao

    2017-10-01

    We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm with high localization quality and acceptable computational cost. First, we obtain the objectness map as in BING[1] and use NMS to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and the center points of the K classes are set as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations are proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
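
    A small sketch of the seeding step this abstract describes: take the strongest objectness responses, cluster them by location with k-means, and use the cluster centres as seeds for object-potential regions. The objectness map is faked here, NMS is replaced by a simple top-N selection, and the window size is arbitrary.

```python
# Sketch of objectness-based seeding: top-N points -> k-means -> seed regions.
# The objectness map and all parameters are illustrative stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
objectness = rng.random((1000, 1000))            # placeholder for a BING-style map

N, K, half = 200, 5, 32                          # top points, clusters, half window size
flat = np.argsort(objectness, axis=None)[-N:]    # indices of the N strongest responses
ys, xs = np.unravel_index(flat, objectness.shape)
points = np.stack([xs, ys], axis=1)

seeds = KMeans(n_clusters=K, n_init=10, random_state=0).fit(points).cluster_centers_
for cx, cy in seeds.astype(int):
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    region = objectness[y0:cy + half, x0:cx + half]   # object potential region
    print("seed:", (cx, cy), "region shape:", region.shape)
```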

  14. Reconciling change blindness with long-term memory for objects.

    PubMed

    Wood, Katherine; Simons, Daniel J

    2017-02-01

    How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost to using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.

  15. High accuracy position method based on computer vision and error analysis

    NASA Astrophysics Data System (ADS)

    Chen, Shihao; Shi, Zhongke

    2003-09-01

    High-accuracy positioning systems are becoming a focus of research in automatic control, and positioning is one of the most studied tasks in vision systems, so we address object locating with an image-processing method. This paper describes a new high-accuracy positioning method based on a vision system. In the proposed method, an edge-detection filter is designed for a given operating condition. The filter contains two main parts: an image-processing module, which performs edge detection and consists of multi-level self-adapting threshold segmentation, edge detection, and edge filtering; and an object-locating module, which reports the location of each object with high accuracy and is made up of median filtering and curve fitting. The paper also provides an error analysis of the method to establish the feasibility of vision-based position detection. Finally, to verify the applicability of the method, an example of positioning a worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify its attitude.
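
    As a rough illustration of the median-filter plus curve-fit locating idea, the sketch below localizes a step edge in a synthetic 1-D intensity profile (standing in for an image row) with sub-pixel accuracy. The profile, filter size, and parabolic refinement are illustrative assumptions, not the paper's filter design.

```python
# Toy sketch: median filtering, gradient-based edge detection, and a parabolic
# curve fit for sub-pixel localization of an edge in a synthetic profile.
import numpy as np
from scipy.ndimage import median_filter

x = np.arange(200, dtype=float)
profile = 1.0 / (1.0 + np.exp(-(x - 87.3)))            # smooth step edge near x = 87.3
profile += np.random.default_rng(2).normal(0, 0.01, x.size)

smooth = median_filter(profile, size=5)                # suppress impulse noise
grad = np.gradient(smooth)                             # edge strength
k = int(np.argmax(grad))                               # coarse (pixel-level) edge

# parabolic fit through the gradient peak for sub-pixel refinement
y0, y1, y2 = grad[k - 1], grad[k], grad[k + 1]
offset = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
print("edge located at x =", round(k + offset, 2))
```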

  16. Testing visual short-term memory of pigeons (Columba livia) and a rhesus monkey (Macaca mulatta) with a location change detection task.

    PubMed

    Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A

    2013-09-01

    Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.

  17. Identification and location of catenary insulator in complex background based on machine vision

    NASA Astrophysics Data System (ADS)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precisely locating the insulator is an important prerequisite for fault detection. Because current insulator-location algorithms for catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, since the insulator sits in a complex environment, SURF features are used to achieve coarse positioning of the recognized target; then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization; finally, the 3D coordinate of the object's center of mass is preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has better recognition efficiency and accuracy, can successfully identify the target, and has clear application value.

  18. Ferromagnetic Objects Magnetovision Detection System.

    PubMed

    Nowicki, Michał; Szewczyk, Roman

    2013-12-02

    This paper presents the application of a weak-magnetic-field magnetovision scanning system for the detection of dangerous ferromagnetic objects. A measurement system was developed and built to study the magnetic field vector distributions. The measurements of the Earth's field distortions caused by various ferromagnetic objects were carried out. The ability for passive detection of hidden or buried dangerous objects and the determination of their location was demonstrated.

  19. Microwave Technique for Detecting and Locating Concealed Weapons

    DOT National Transportation Integrated Search

    1971-12-01

    The subject of this report is the evaluation of a microwave technique for detecting and locating weapons concealed under clothing. The principal features of this technique are: persons subjected to search are not exposed to 'objectional' microwave ra...

  20. Method and apparatus for determining the coordinates of an object

    DOEpatents

    Pedersen, Paul S.

    2002-01-01

    A simplified method and related apparatus are described for determining the location of points on the surface of an object by varying, in accordance with a unique sequence, the intensity of each illuminated pixel directed to the object surface, and detecting at known detector pixel locations the intensity sequence of reflected illumination from the surface of the object whereby the identity and location of the originating illuminated pixel can be determined. The coordinates of points on the surface of the object are then determined by conventional triangulation methods.
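
    The sketch below is only an illustration of the general idea (not the patented apparatus): each projector column flashes a unique binary intensity sequence, the sequence observed at a camera pixel identifies the originating column, and depth follows from a simple pinhole triangulation. The code length, geometry, and calibration numbers are made-up assumptions.

```python
# Illustrative structured-light sketch: unique intensity sequences identify the
# originating projector column, then depth comes from triangulation.
import numpy as np

n_cols, n_bits = 16, 4
codes = (np.arange(n_cols)[:, None] >> np.arange(n_bits)) & 1   # unique code per column

def triangulate(cam_x, proj_col, f=500.0, baseline=0.2, width=16):
    """Depth from a camera/projector correspondence (simplified pinhole model)."""
    disparity = cam_x - proj_col * (640.0 / width)   # map column index to pixels
    return f * baseline / max(disparity, 1e-6)

observed = np.array([1, 0, 1, 0])                    # intensity sequence seen at one camera pixel
col = int(np.where((codes == observed).all(axis=1))[0][0])   # identify originating column
print("projector column:", col,
      "depth ≈", round(triangulate(cam_x=400.0, proj_col=col), 3), "m")
```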

  1. Systematic evaluation of deep learning based detection frameworks for aerial imagery

    NASA Astrophysics Data System (ADS)

    Sommer, Lars; Steinmann, Lucas; Schumann, Arne; Beyerer, Jürgen

    2018-04-01

    Object detection in aerial imagery is crucial for many applications in the civil and military domains. In recent years, deep learning based object detection frameworks significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which considerably differ from aerial imagery, especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks, including Faster R-CNN, R-FCN, and Single Shot MultiBox Detector (SSD), to aerial imagery. We discuss in detail adaptations that mainly improve the detection accuracy of all frameworks. As the output of deeper convolutional layers comprises more semantic information, these layers are generally used in detection frameworks as feature maps to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in an inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location. Therefore, so-called anchor or default boxes are used as reference. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Furthermore, we evaluate the impact of the performed adaptations on two publicly available datasets to account for various ground sampling distances or differing backgrounds. The presented adaptations can be used as a guideline for further datasets or detection frameworks.
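
    To illustrate the anchor-sizing point, the sketch below generates anchor boxes over a feature-map grid with two sets of scales: benchmark-style defaults versus scales matched to small aerial objects. The grid size, stride, scales, and ratios are illustrative, not the values used in the paper.

```python
# Sketch: generating anchor boxes whose scales suit small objects in aerial
# imagery. All numeric choices below are illustrative assumptions.
import numpy as np

def make_anchors(feature_size, stride, scales, ratios):
    """Return (N, 4) anchors as (cx, cy, w, h) on the input-image grid."""
    anchors = []
    for y in range(feature_size):
        for x in range(feature_size):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)
                    anchors.append((cx, cy, w, h))
    return np.array(anchors)

# default-like scales vs. scales adapted to small aerial objects
coarse = make_anchors(38, 16, scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0))
fine = make_anchors(38, 16, scales=(8, 16, 32), ratios=(0.5, 1.0, 2.0))
print(coarse.shape, fine.shape)   # same anchor count, different coverage of object sizes
```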

  2. Boundary and object detection in real world images. [by means of algorithms

    NASA Technical Reports Server (NTRS)

    Yakimovsky, Y.

    1974-01-01

    A solution to the problem of automatic location of objects in digital pictures by computer is presented. A self-scaling local edge detector which can be applied in parallel on a picture is described. Clustering algorithms and boundary following algorithms which are sequential in nature process the edge data to locate images of objects.

  3. Sexual orientation and spatial position effects on selective forms of object location memory.

    PubMed

    Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary

    2011-04-01

    Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object exchanges, object shifts, and novel objects) relative to veridical center (left compared to right side of the arrays) in a sample of 35 heterosexual men, 35 heterosexual women, and 35 homosexual men. Relative to heterosexual men, heterosexual women showed better location recovery in the right side of the array during object exchanges and homosexual men performed better in the right side during novel objects. However, the difference between heterosexual and homosexual men disappeared after controlling for IQ. Heterosexual women and homosexual men did not differ significantly from each other in location change detection with respect to task or side of array. These data suggest that visual space biases in processing categorical spatial positions may enhance aspects of object location memory in heterosexual women. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory

    PubMed Central

    Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.

    2013-01-01

    Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773

  5. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed - the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, mean and median tests and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
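
    A minimal sketch of the frame-differencing stage mentioned above, using synthetic frames; the threshold and frame contents are arbitrary assumptions, and the maximum-likelihood and mean/median variants are not shown.

```python
# Minimal frame-differencing motion detector on synthetic frames.
import numpy as np

rng = np.random.default_rng(3)
prev = rng.integers(0, 50, (240, 320)).astype(np.int16)    # background-only frame
curr = prev.copy()
curr[100:130, 200:220] += 120                              # something moved into view

diff = np.abs(curr - prev)
mask = diff > 40                                           # motion mask (arbitrary threshold)
if mask.any():
    ys, xs = np.nonzero(mask)
    print("motion at rows", ys.min(), "-", ys.max(), "cols", xs.min(), "-", xs.max())
```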

  6. A Vision-Based System for Object Identification and Information Retrieval in a Smart Home

    NASA Astrophysics Data System (ADS)

    Grech, Raphael; Monekosso, Dorothy; de Jager, Deon; Remagnino, Paolo

    This paper describes a hand held device developed to assist people to locate and retrieve information about objects in a home. The system developed is a standalone device to assist persons with memory impairments such as people suffering from Alzheimer's disease. A second application is object detection and localization for a mobile robot operating in an ambient assisted living environment. The device relies on computer vision techniques to locate a tagged object situated in the environment. The tag is a 2D color printed pattern with a detection range and a field of view such that the user may point from a distance of over 1 meter.

  7. Acoustic detection and monitoring for transportation infrastructure security.

    DOT National Transportation Integrated Search

    2009-09-01

    Acoustical methods have been extensively used to locate, identify, and track objects underwater. Some of these applications include detecting and tracking submarines, marine mammal detection and identification, detection of mines and ship wrecks and ...

  8. Location perception: the X-Files parable.

    PubMed

    Prinzmetal, William

    2005-01-01

    Three aspects of visual object location were investigated: (1) how the visual system integrates information for locating objects, (2) how attention operates to affect location perception, and (3) how the visual system deals with locating an object when multiple objects are present. The theories were described in terms of a parable (the X-Files parable). Then, computer simulations were developed. Finally, predictions derived from the simulations were tested. In the scenario described in the parable, we ask how a system of detectors might locate an alien spaceship, how attention might be implemented in such a spaceship detection system, and how the presence of one spaceship might influence the location perception of another alien spaceship. Experiment 1 demonstrated that location information is integrated with a spatial average rule. In Experiment 2, this rule was applied to a more-samples theory of attention. Experiment 3 demonstrated how the integration rule could account for various visual illusions.

  9. Vision-based obstacle avoidance

    DOEpatents

    Galbraith, John [Los Alamos, NM

    2006-07-18

    A method for allowing a robot to avoid objects along a programmed path: first, a field of view for an electronic imager of the robot is established along a path where the electronic imager obtains the object location information within the field of view; second, a population coded control signal is then derived from the object location information and is transmitted to the robot; finally, the robot then responds to the control signal and avoids the detected object.
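
    The following is a hedged illustration of what a population-coded signal can look like, not the patented controller: an array of direction-tuned units responds to a detected obstacle bearing, and a steering command turns the robot away from the decoded population vector. The number of units, tuning width, and avoidance rule are all assumptions.

```python
# Illustrative population code: direction-tuned units respond to an obstacle
# bearing; the robot steers away from the decoded population vector.
import numpy as np

units = np.linspace(-np.pi, np.pi, 36, endpoint=False)      # preferred bearings [rad]

def population_code(obstacle_bearing, width=0.4):
    """Gaussian tuning-curve activity of each unit for one obstacle."""
    d = np.angle(np.exp(1j * (units - obstacle_bearing)))    # wrapped angular difference
    return np.exp(-0.5 * (d / width) ** 2)

activity = population_code(np.deg2rad(15.0))                 # obstacle at +15 degrees
decoded = np.angle(np.sum(activity * np.exp(1j * units)))    # population-vector bearing
steer = -np.sign(decoded) * np.deg2rad(20.0)                 # turn away from the obstacle
print("decoded obstacle bearing:", round(np.degrees(decoded), 1),
      "deg; steering command:", round(np.degrees(steer), 1), "deg")
```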

  10. Automatic Railway Traffic Object Detection System Using Feature Fusion Refine Neural Network under Shunting Mode.

    PubMed

    Ye, Tao; Wang, Baocheng; Song, Ping; Li, Juan

    2018-06-12

    Many accidents happen in shunting mode, when the speed of a train is below 45 km/h. In this mode, train attendants observe the railway condition ahead using the traditional manual method and report the observations to the driver in order to avoid danger. To address this problem, an automatic object detection system based on a convolutional neural network (CNN), called the Feature Fusion Refine neural network (FR-Net), is proposed to detect objects ahead in shunting mode. It consists of three connected modules, i.e., the depthwise-pointwise convolution, the coarse detection module, and the object detection module. Depthwise-pointwise convolutions are used to keep detection real time. The coarse detection module coarsely refines the locations and sizes of prior anchors to provide better initialization for the subsequent module and also reduces the search space for classification, whereas the object detection module regresses accurate object locations and predicts the class labels for the prior anchors. Experimental results on the railway traffic dataset show that FR-Net achieves 0.8953 mAP at 72.3 FPS on a machine with a GeForce GTX1080Ti with an input size of 320 × 320 pixels. The results imply that FR-Net achieves a good tradeoff between effectiveness and real-time performance. The proposed method can meet the needs of practical application in shunting mode.
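
    For reference, a depthwise followed by pointwise (separable) convolution, the building element the abstract credits with keeping detection real time, can be sketched as below. The layer sizes are arbitrary and this is not the published FR-Net definition.

```python
# Sketch of a depthwise + pointwise convolution block (PyTorch); sizes arbitrary.
import torch
import torch.nn as nn

class DepthwisePointwise(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)   # per-channel filter
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)  # channel mixing
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 80, 80)                          # e.g. feature map from a 320x320 input
print(DepthwisePointwise(32, 64, stride=2)(x).shape)    # torch.Size([1, 64, 40, 40])
```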

  11. Shifting attention in viewer- and object-based reference frames after unilateral brain injury.

    PubMed

    List, Alexandra; Landau, Ayelet N; Brooks, Joseph L; Flevaris, Anastasia V; Fortenbaugh, Francesca C; Esterman, Michael; Van Vleet, Thomas M; Albrecht, Alice R; Alvarez, Bryan D; Robertson, Lynn C; Schendel, Krista

    2011-06-01

    The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display comprised of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection. Published by Elsevier Ltd.

  12. Enclosure Transform for Interest Point Detection From Speckle Imagery.

    PubMed

    Yongjian Yu; Jue Wang

    2017-03-01

    We present a fast enclosure transform (ET) to localize complex objects of interest from speckle imagery. This approach explores the spatial confinement on regional features from a sparse image feature representation. Unrelated, broken ridge features surrounding an object are organized collaboratively, giving rise to the enclosureness of the object. Three enclosure likelihood measures are constructed, consisting of the enclosure force, potential energy, and encloser count. In the transform domain, the local maxima manifest the locations of objects of interest, for which only the intrinsic dimension is known a priori. The discrete ET algorithm is computationally efficient, being on the order of O(MN) using N measuring distances across an image of M ridge pixels. It involves easy and few parameter settings. We demonstrate and assess the performance of ET on the automatic detection of the prostate locations from supra-pubic ultrasound images. ET yields superior results in terms of positive detection rate, accuracy and coverage.

  13. What's the object of object working memory in infancy? Unraveling 'what' and 'how many'.

    PubMed

    Kibbe, Melissa M; Leslie, Alan M

    2013-06-01

    Infants have a bandwidth-limited object working memory (WM) that can both individuate and identify objects in a scene, (answering 'how many?' or 'what?', respectively). Studies of infants' WM for objects have typically looked for limits on either 'how many' or 'what', yielding different estimates of infant capacity. Infants can keep track of about three individuals (regardless of identity), but appear to be much more limited in the number of specific identities they can recall. Why are the limits on 'how many' and 'what' different? Are the limits entirely separate, do they interact, or are they simply two different aspects of the same underlying limit? We sought to unravel these limits in a series of experiments which tested 9- and 12-month-olds' WM for object identities under varying degrees of difficulty. In a violation-of-expectation looking-time task, we hid objects one at a time behind separate screens, and then probed infants' WM for the shape identity of the penultimate object in the sequence. We manipulated the difficulty of the task by varying both the number of objects in hiding locations and the number of means by which infants could detect a shape change to the probed object. We found that 9-month-olds' WM for identities was limited by the number of hiding locations: when the probed object was one of two objects hidden (one in each of two locations), 9-month-olds succeeded, and they did so even though they were given only one means to detect the change. However, when the probed object was one of three objects hidden (one in each of three locations), they failed, even when they were given two means to detect the shape change. Twelve-month-olds, by contrast, succeeded at the most difficult task level. Results show that WM for 'how many' and for 'what' are not entirely separate. Individuated objects are tracked relatively cheaply. Maintaining bindings between indexed objects and identifying featural information incurs a greater attentional/memory cost. This cost reduces with development. We conclude that infant WM supports a small number of featureless object representations that index the current locations of objects. These can have featural information bound to them, but only at substantial cost. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Resonant frequency method for bearing ball inspection

    DOEpatents

    Khuri-Yakub, B. T.; Hsieh, Chung-Kao

    1993-01-01

    The present invention provides for an inspection system and method for detecting defects in test objects which includes means for generating expansion inducing energy focused upon the test object at a first location, such expansion being allowed to contract, thereby causing pressure waves within and on the surface of the test object. Such expansion inducing energy may be provided by, for example, a laser beam or ultrasonic energy. At a second location, the amplitudes and phases of the acoustic waves are detected and the resonant frequencies' quality factors are calculated and compared to predetermined quality factor data, such comparison providing information of whether the test object contains a defect. The inspection system and method also includes means for mounting the bearing ball for inspection.
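
    A toy sketch of the quality-factor comparison idea: take the spectrum of a ringing response, measure the resonance's centre frequency and -3 dB bandwidth, and compare Q = f0 / bandwidth with a reference value. The signal, sampling rate, and pass/fail threshold below are made up, not the patent's procedure.

```python
# Toy Q-factor estimate from a synthetic ringing signal, compared to a reference.
import numpy as np

fs, f0, tau = 1.0e6, 50.0e3, 2.0e-3                      # sample rate, resonance, decay time
t = np.arange(0, 0.02, 1 / fs)
signal = np.exp(-t / tau) * np.sin(2 * np.pi * f0 * t)   # ringing response of the ball

spec = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, 1 / fs)
peak = int(np.argmax(spec))
half = spec[peak] / np.sqrt(2)                            # -3 dB level
band = freqs[spec >= half]
q_measured = freqs[peak] / (band.max() - band.min())

q_reference = 300.0                                       # assumed defect-free value
print("Q =", round(q_measured, 1), "flagged:", q_measured < 0.8 * q_reference)
```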

  15. Resonant frequency method for bearing ball inspection

    DOEpatents

    Khuri-Yakub, B.T.; Chungkao Hsieh.

    1993-11-02

    The present invention provides for an inspection system and method for detecting defects in test objects which includes means for generating expansion inducing energy focused upon the test object at a first location, such expansion being allowed to contract, thereby causing pressure waves within and on the surface of the test object. Such expansion inducing energy may be provided by, for example, a laser beam or ultrasonic energy. At a second location, the amplitudes and phases of the acoustic waves are detected and the resonant frequencies' quality factors are calculated and compared to predetermined quality factor data, such comparison providing information of whether the test object contains a defect. The inspection system and method also includes means for mounting the bearing ball for inspection. 5 figures.

  16. Nondestructive Concrete Characterization System

    DTIC Science & Technology

    2013-05-20

    Army, locate steel reinforcing bars, and identify the presence of steel fiber reinforcement. The thickness of all sides of each concrete block was...concrete compressive strength within the accuracy required by the U.S. Army, locate steel reinforcing bars, and identify the presence of steel fiber ...tolerance of ±3 ksi. 3. Detect the presence of fiber reinforcement. 4. Locate and detect the presence and density (e.g. spacing) of metallic objects

  17. If it's not there, where is it? Locating illusory conjunctions.

    PubMed

    Hazeltine, R E; Prinzmetal, W; Elliott, W

    1997-02-01

    There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the 2 feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color.

  18. Seismic Techniques for Subsurface Voids Detection

    NASA Astrophysics Data System (ADS)

    Gritto, Roland; Korneev, Valeri; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    A major hazard in Qatar is the presence of karst, which is ubiquitous throughout the country including depressions, sinkholes, and caves. Causes for the development of karst include faulting and fracturing where fluids find pathways through limestone and dissolve the host rock to form caverns. Of particular concern in rapidly growing metropolitan areas that expand in heretofore unexplored regions is the collapse of such caverns. Because Qatar has seen a recent boom in construction, including the planning and development of complete new sub-sections of metropolitan areas, the development areas need to be investigated for the presence of karst to determine their suitability for the planned project. In this paper, we present the results of a study to demonstrate a variety of seismic techniques to detect the presence of a karst analog in form of a vertical water-collection shaft located on the campus of Qatar University, Doha, Qatar. Seismic waves are well suited for karst detection and characterization. Voids represent high-contrast seismic objects that exhibit strong responses due to incident seismic waves. However, the complex geometry of karst, including shape and size, makes their imaging nontrivial. While karst detection can be reduced to the simple problem of detecting an anomaly, karst characterization can be complicated by the 3D nature of the problem of unknown scale, where irregular surfaces can generate diffracted waves of different kind. In our presentation we employ a variety of seismic techniques to demonstrate the detection and characterization of a vertical water collection shaft analyzing the phase, amplitude and spectral information of seismic waves that have been scattered by the object. We used the reduction in seismic wave amplitudes and the delay in phase arrival times in the geometrical shadow of the vertical shaft to independently detect and locate the object in space. Additionally, we use narrow band-pass filtered data combining two orthogonal transmission surveys to detect and locate the object. Furthermore, we showed that ambient noise recordings may generate data with sufficient signal-to-noise ratio to successfully detect and locate subsurface voids. Being able to use ambient noise recordings would eliminate the need to employ active seismic sources that are time consuming and more expensive to operate.
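
    As a hedged illustration of one ingredient mentioned above, the sketch below estimates the arrival-time delay and amplitude reduction of a seismic pulse in the geometrical shadow of a void by cross-correlating it with a reference trace. The waveforms and numbers are synthetic and do not reproduce the survey's processing.

```python
# Toy sketch: travel-time delay via cross-correlation and amplitude ratio
# between a reference trace and a trace in the shadow of the void.
import numpy as np

fs = 1000.0                                   # samples per second
t = np.arange(0, 1, 1 / fs)

def pulse(t0):
    return np.exp(-((t - t0) / 0.01) ** 2) * np.sin(2 * np.pi * 60.0 * (t - t0))

reference = pulse(0.30)                       # trace outside the shadow zone
shadowed = 0.5 * pulse(0.34)                  # weaker and delayed behind the shaft

lag = np.argmax(np.correlate(shadowed, reference, mode="full")) - (t.size - 1)
amp_ratio = shadowed.max() / reference.max()
print("delay:", lag / fs, "s; amplitude ratio:", round(amp_ratio, 2))
```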

  19. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…

  20. Non-Verbal Communicative Signals Modulate Attention to Object Properties

    PubMed Central

    Marno, Hanna; Davelaar, Eddy J.; Csibra, Gergely

    2015-01-01

    We investigated whether the social context in which an object is experienced influences the encoding of its various properties. We hypothesized that when an object is observed in a communicative context, its intrinsic features (such as its shape) would be preferentially encoded at the expense of its extrinsic properties (such as its location). In the three experiments, participants were presented with brief movies, in which an actor either performed a non-communicative action towards one of five different meaningless objects, or communicatively pointed at one of them. A subsequent static image, in which either the location or the identity of an object changed, tested participants’ attention to these two kinds of information. Throughout the three experiments we found that communicative cues tended to facilitate identity change detection and to impede location change detection, while in the non-communicative contexts we did not find such a bidirectional effect of cueing. The results also revealed that the effect of the communicative context was due to the presence of ostensive-communicative signals before the object-directed action, and not to the pointing gesture per se. We propose that such an attentional bias forms an inherent part of human communication, and functions to facilitate social learning by communication. PMID:24294871

  1. Nonverbal communicative signals modulate attention to object properties.

    PubMed

    Marno, Hanna; Davelaar, Eddy J; Csibra, Gergely

    2014-04-01

    We investigated whether the social context in which an object is experienced influences the encoding of its various properties. We hypothesized that when an object is observed in a communicative context, its intrinsic features (such as its shape) would be preferentially encoded at the expense of its extrinsic properties (such as its location). In 3 experiments, participants were presented with brief movies, in which an actor either performed a noncommunicative action toward 1 of 5 different meaningless objects, or communicatively pointed at 1 of them. A subsequent static image, in which either the location or the identity of an object changed, tested participants' attention to these 2 kinds of information. Throughout the 3 experiments we found that communicative cues tended to facilitate identity change detection and to impede location change detection, whereas in the noncommunicative contexts we did not find such a bidirectional effect of cueing. The results also revealed that the effect of the communicative context was due to the presence of ostensive-communicative signals before the object-directed action, and not to the pointing gesture per se. We propose that such an attentional bias forms an inherent part of human communication, and functions to facilitate social learning by communication.

  2. An Object Location Detector Enabling People with Developmental Disabilities to Control Environmental Stimulation through Simple Occupational Activities with Battery-Free Wireless Mice

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang

    2011-01-01

    This study assessed whether two persons with developmental disabilities would be able to actively perform simple occupational activities by controlling their favorite environmental stimulation using battery-free wireless mice with a newly developed object location detection program (OLDP, i.e., a new software program turning a battery-free…

  3. Assisting Patients with Disabilities to Actively Perform Occupational Activities Using Battery-Free Wireless Mice to Control Environmental Stimulation

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Wang, Shu-Hui; Chang, Man-Ling; Kung, Ssu-Yun

    2012-01-01

    The latest studies have adopted software technology to turn the battery-free wireless mouse into a high performance object location detector using a newly developed object location detection program (OLDP). This study extended OLDP functionality to assess whether two patients recovering from cerebral vascular accidents would be able to actively…

  4. High-Resolution Seismic Imaging of Near-Surface Voids

    NASA Astrophysics Data System (ADS)

    Gritto, R.; Korneev, V. A.; Elobaid, E. A.; Mohamed, F.; Sadooni, F.

    2017-12-01

    A major hazard in Qatar is the presence of karst, which is ubiquitous throughout the country including depressions, sinkholes, and caves. Causes for the development of karst include faulting and fracturing where fluids find pathways through limestone and dissolve the host rock to form caverns. Of particular concern in rapidly growing metropolitan areas that expand in heretofore unexplored regions is the collapse of such caverns. Because Qatar has seen a recent boom in construction, including the planning and development of complete new sub-sections of metropolitan areas, the development areas need to be investigated for the presence of karst to determine their suitability for the planned project. We present a suite of seismic techniques applied to a controlled experiment to detect, locate and estimate the size of a karst analog in form of a man-made water shaft on the campus of Qatar University, Doha, Qatar. Seismic waves are well suited for karst detection and characterization. Voids represent high-contrast seismic objects that exhibit strong responses due to incident seismic waves. However, the complex geometry of karst, including shape and size, makes their imaging nontrivial. While karst detection can be reduced to the simple problem of detecting an anomaly, karst characterization can be complicated by the 3D nature of the problem of unknown scale, where irregular surfaces can generate diffracted waves of different kind. In our presentation, we employ a variety of seismic techniques to demonstrate the detection and characterization of a vertical water collection shaft analyzing the phase, amplitude and spectral information of seismic waves that have been scattered by the object. We use the reduction in seismic wave amplitudes and the delay in phase arrival times in the geometrical shadow of the vertical shaft to independently detect and locate the object in space. Additionally, we use narrow band-pass filtered data combining two orthogonal transmission surveys to detect and locate the object. Furthermore, we show that ambient noise recordings may generate data with sufficient signal-to-noise ratio to successfully detect and locate subsurface voids. Being able to use ambient noise recordings would eliminate the need to employ active seismic sources that are time consuming and more expensive to operate.

  5. How Things Work: Metal Locators and Related Devices.

    ERIC Educational Resources Information Center

    Crane, H. Richard, Ed.

    1984-01-01

    Describes a simple form of metal detector, discussing the principles of signal generation, and the detection and discrimination of induced eddy current signals from the located objects. Includes a rough schematic of the detector. (JM)

  6. Explosive hazard detection using MIMO forward-looking ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Shaw, Darren; Ho, K. C.; Stone, Kevin; Keller, James M.; Popescu, Mihail; Anderson, Derek T.; Luke, Robert H.; Burns, Brian

    2015-05-01

    This paper proposes a machine learning algorithm for subsurface object detection on multiple-input-multiple-output (MIMO) forward-looking ground-penetrating radar (FLGPR). By detecting hazards using FLGPR, standoff distances of up to tens of meters can be acquired, but this comes at the cost of degraded performance due to high false alarm rates. The proposed system utilizes an anomaly detection prescreener to identify potential object locations. Alarm locations have multiple one-dimensional (1D) spectral features, two-dimensional (2D) spectral features, and log-Gabor statistic features extracted. The ability of these features to reduce the number of false alarms and increase the probability of detection is evaluated for both co-polarizations present in the Akela MIMO array. Classification is performed by a Support Vector Machine (SVM) with lane-based cross-validation for training and testing. Class imbalance and optimized SVM kernel parameters are considered during classifier training.
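
    A minimal sketch of the classification stage described above: features extracted at prescreener alarm locations are fed to an SVM with class weighting for imbalance and lane-grouped cross-validation. The features, labels, lane assignments, and kernel parameters below are synthetic stand-ins, not the paper's.

```python
# Sketch: SVM classification of alarm features with lane-based cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 20))                   # feature vectors at alarm locations
y = (rng.random(600) < 0.1).astype(int)          # ~10% true hazards (imbalanced)
X[y == 1] += 0.8                                 # give true targets a separable shift
lanes = rng.integers(0, 6, 600)                  # lane id of each alarm

svm = SVC(kernel="rbf", C=10.0, gamma="scale", class_weight="balanced")
scores = cross_val_score(svm, X, y, groups=lanes,
                         cv=GroupKFold(n_splits=6), scoring="roc_auc")
print("lane-based cross-validation AUC:", scores.round(3))
```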

  7. Methods, systems and devices for detecting and locating ferromagnetic objects

    DOEpatents

    Roybal, Lyle Gene [Idaho Falls, ID; Kotter, Dale Kent [Shelley, ID; Rohrbaugh, David Thomas [Idaho Falls, ID; Spencer, David Frazer [Idaho Falls, ID

    2010-01-26

    Methods for detecting and locating ferromagnetic objects in a security screening system. One method includes a step of acquiring magnetic data that includes magnetic field gradients detected during a period of time. Another step includes representing the magnetic data as a function of the period of time. Another step includes converting the magnetic data to being represented as a function of frequency. Another method includes a step of sensing a magnetic field for a period of time. Another step includes detecting a gradient within the magnetic field during the period of time. Another step includes identifying a peak value of the gradient detected during the period of time. Another step includes identifying a portion of time within the period of time that represents when the peak value occurs. Another step includes configuring the portion of time over the period of time to represent a ratio.
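
    A toy sketch loosely following the steps listed in these claims: record a magnetic field gradient over a time window, convert it to the frequency domain, find the peak gradient, and express the time of the peak as a ratio of the window. The waveform and numbers are synthetic stand-ins for sensor data.

```python
# Toy sketch: gradient peak, time-of-peak ratio, and frequency-domain view
# of a synthetic magnetic gradient record.
import numpy as np

fs, T = 200.0, 4.0                                    # sample rate [Hz], window length [s]
t = np.arange(0, T, 1 / fs)
gradient = 0.05 * np.random.default_rng(5).normal(size=t.size)
gradient += 2.0 * np.exp(-((t - 2.6) / 0.15) ** 2)    # ferromagnetic object passing the portal

spectrum = np.abs(np.fft.rfft(gradient))              # magnetic data as a function of frequency
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak_idx = int(np.argmax(np.abs(gradient)))           # peak gradient value within the window
ratio = t[peak_idx] / T                               # portion of the window at which it occurs
print("peak gradient:", round(gradient[peak_idx], 2), "at ratio", round(ratio, 2))
print("dominant non-DC frequency:", freqs[int(np.argmax(spectrum[1:])) + 1], "Hz")
```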

  8. The MetaTelescope, a System for the Detection of Objects in Low and Higher Earth Orbits

    NASA Astrophysics Data System (ADS)

    Boer, M.

    We present an original design involving several telescopes for the detection of moving objects in space over a very wide field of view. The system uses relatively simple and cheap telescopes associated with commercial CCD cameras that can be placed either in a single location or in relatively close (100 m - 10 km) locations. This last set-up opens the possibility of detecting parallaxes, but sky conditions should remain almost identical. Areas on the order of 800 square degrees can be surveyed. The system is versatile, i.e. it can detect and follow up objects either in LEO or higher orbits. We will present the system, how it can be operated in order to have a more efficient setup while using even fewer telescopes, and possible implementations for space surveillance activities.

  9. Surveillance versus Reconnaissance: An Entropy Based Model

    DTIC Science & Technology

    2012-03-22

    sensor detection since no new information is received. (Berry, Pontecorvo, & Fogg, Optimal Search, Location and Tracking of Surface Maritime Targets by...by Berry, Pontecorvo and Fogg (Berry, Pontecorvo, & Fogg, July, 2003) facilitates the optimal solutions to dynamically determining the allocation and...region (Berry, Pontecorvo, & Fogg, July, 2003). Phase II: Locate During the locate phase, the objective was to determine the location of the targets

  10. Object Detection for Agricultural and Construction Environments Using an Ultrasonic Sensor.

    PubMed

    Dvorak, J S; Stone, M L; Self, K P

    2016-04-01

    This study tested an ultrasonic sensor's ability to detect several objects commonly encountered in outdoor agricultural or construction environments: a water jug, a sheet of oriented strand board (OSB), a metal fence post, a human model, a wooden fence post, a Dracaena plant, a juniper plant, and a dog model. Tests were performed with each target object at distances from 0.01 to 3 m. Five tests were performed with each object at each location, and the sensor's ability to detect the object during each test was categorized as "undetected," "intermittent," "incorrect distance," or "good." Rigid objects that presented a larger surface area to the sensor, such as the water jug and OSB, were better detected than objects with a softer surface texture, which were occasionally not detected as the distance approached 3 m. Objects with extremely soft surface texture, such as the dog model, could be undetected at almost any distance from the sensor. The results of this testing should help designers of future systems for outdoor environments, as the target objects tested can be found in nearly any agricultural or construction environment.

  11. Assisting people with disabilities in actively performing designated occupational activities with battery-free wireless mice to control environmental stimulation.

    PubMed

    Shih, Ching-Hsiang

    2013-05-01

    The latest research uses software technology (OLDP, object location detection programs) to turn a commercial high-technology product, i.e., a battery-free wireless mouse, into a high-performance, precise object location detector that detects whether an object has been placed in the designated location. Preferred environmental stimulation is also incorporated to assist patients in need of occupational activities in performing simple occupational activities to obtain their preferred environmental stimulation. The results of those experiments show that both participants were able to control their preferred environmental stimulation by actively performing occupational activities. This study extends the aforementioned research by using battery-free wireless mice to assist patients in performing more complicated occupational activities. An ABAB design was adopted for the experiments, and the results show that the occupational activities of both participants improved significantly during the intervention phases. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Fourier Domain Sensing

    NASA Technical Reports Server (NTRS)

    Feldkhun, Daniel (Inventor); Wagner, Kelvin H. (Inventor)

    2013-01-01

    Methods and systems are disclosed of sensing an object. A first radiation is spatially modulated to generate a structured second radiation. The object is illuminated with the structured second radiation such that the object produces a third radiation in response. Apart from any spatially dependent delay, a time variation of the third radiation is spatially independent. With a single-element detector, a portion of the third radiation is detected from locations on the object simultaneously. At least one characteristic of a sinusoidal spatial Fourier-transform component of the object is estimated from a time-varying signal from the detected portion of the third radiation.
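
    As an illustrative aside only: if the structured illumination is swept so that one spatial Fourier component of the object appears as a sinusoid at a known temporal frequency in the single-element detector signal, its amplitude and phase can be recovered by complex demodulation. The sample rate, modulation frequency, and data file below are assumptions, not details from the patent.

      import numpy as np

      fs = 1.0e5                                   # assumed detector sample rate (Hz)
      f_mod = 2.0e3                                # assumed modulation frequency (Hz)
      signal = np.load("detector_trace.npy")       # hypothetical single-element detector samples
      t = np.arange(signal.size) / fs

      reference = np.exp(-2j * np.pi * f_mod * t)  # lock-in style complex reference
      component = 2.0 * np.mean(signal * reference)
      amplitude, phase = np.abs(component), np.angle(component)
      print(f"Fourier component: amplitude {amplitude:.3g}, phase {phase:.3f} rad")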

  13. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

    In recent years, Convolutional Neural Networks (CNNs) have been widely used in computer vision and have made great progress in tasks such as object detection and classification. Even so, combining CNNs, that is, running multiple CNN frameworks synchronously and sharing their outputs, can yield information that none of them provides on its own. Here we introduce a method to estimate object speed in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location, and type, while FlowNet provides the optical flow of the whole image. On one hand, object size and location select the object's portion of the optical flow image, from which the average optical flow of each object is calculated. On the other hand, object type and size establish the relationship between optical flow and true speed by means of optical theory and prior knowledge. With these two pieces of information, the speed of each object can be estimated. This method estimates the speed of multiple objects in real time using only a normal camera, even while the camera is moving, with an error that is acceptable in most application fields such as autonomous driving or robot vision.
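
    A minimal sketch of the fusion step, under assumed size priors and with the detector and flow outputs mocked: average the optical flow inside each detected box and convert pixel motion to metric speed using the object's typical real-world width.

      import numpy as np

      TYPICAL_WIDTH_M = {"car": 1.8, "person": 0.5}   # assumed per-class size priors

      def object_speed(flow, box, cls, fps):
          """flow: HxWx2 optical flow (px/frame); box: (x1, y1, x2, y2) from the detector."""
          x1, y1, x2, y2 = [int(v) for v in box]
          patch = flow[y1:y2, x1:x2]                  # flow vectors belonging to the object
          mean_flow = patch.reshape(-1, 2).mean(axis=0)
          px_per_frame = np.linalg.norm(mean_flow)
          metres_per_px = TYPICAL_WIDTH_M[cls] / max(x2 - x1, 1)   # scale from the size prior
          return px_per_frame * metres_per_px * fps   # metres per second

      flow = np.zeros((480, 640, 2)); flow[..., 0] = 4.0           # toy flow: 4 px/frame rightward
      print(object_speed(flow, (100, 200, 220, 280), "car", fps=30))   # ~1.8 m/s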

  14. Tracking-Learning-Detection.

    PubMed

    Kalal, Zdenek; Mikolajczyk, Krystian; Matas, Jiri

    2012-07-01

    This paper investigates long-term tracking of unknown objects in a video stream. The object is defined by its location and extent in a single frame. In every frame that follows, the task is to determine the object's location and extent or indicate that the object is not present. We propose a novel tracking framework (TLD) that explicitly decomposes the long-term tracking task into tracking, learning, and detection. The tracker follows the object from frame to frame. The detector localizes all appearances that have been observed so far and corrects the tracker if necessary. The learning estimates the detector's errors and updates it to avoid these errors in the future. We study how to identify the detector's errors and learn from them. We develop a novel learning method (P-N learning) which estimates the errors by a pair of "experts": (1) P-expert estimates missed detections, and (2) N-expert estimates false alarms. The learning process is modeled as a discrete dynamical system and the conditions under which the learning guarantees improvement are found. We describe our real-time implementation of the TLD framework and the P-N learning. We carry out an extensive quantitative evaluation which shows a significant improvement over state-of-the-art approaches.

  15. Detection, Identification, Location, and Remote Sensing using SAW RFID Sensor Tags

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.

    2009-01-01

    In this presentation, we will consider the problem of simultaneous detection, identification, location estimation, and remote sensing for multiple objects. In particular, we will describe the design and testing of a wireless system capable of simultaneously detecting the presence of multiple objects, identifying each object, and acquiring both a low-resolution estimate of location and a high-resolution estimate of temperature for each object based on wireless interrogation of passive surface acoustic wave (SAW) radiofrequency identification (RFID) sensor tags affixed to each object. The system is being studied for application on the lunar surface as well as for terrestrial remote sensing applications such as pre-launch monitoring and testing of spacecraft on the launch pad and monitoring of test facilities. The system utilizes a digitally beam-formed planar receiving antenna array to extend range and provide direction-of-arrival information coupled with an approximate maximum-likelihood signal processing algorithm to provide near-optimal estimation of both range and temperature. The system is capable of forming a large number of beams within the field of view and resolving the information from several tags within each beam. The combination of both spatial and waveform discrimination provides the capability to track and monitor telemetry from a large number of objects appearing simultaneously within the field of view of the receiving array. In the presentation, we will summarize the system design and illustrate several aspects of the operational characteristics and signal structure. We will examine the theoretical performance characteristics of the system and compare the theoretical results with results obtained from experiments in both controlled laboratory environments and in the field.

  16. An object detection and tracking system for unmanned surface vehicles

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Xiao, Yang; Fang, Zhiwen; Zhang, Naiwen; Wang, Li; Li, Tao

    2017-10-01

    Object detection and tracking are critical parts of unmanned surface vehicles (USVs) for achieving automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, but they still meet bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs that locates objects more accurately while being fast and stable at the same time. Firstly, we employ Faster R-CNN to acquire several initial raw bounding boxes. Secondly, the image is segmented into superpixels; for each initial box, the superpixels inside are grouped into a whole according to a combination strategy, and a new box is then generated as the circumscribed bounding box of the grouped superpixels. Thirdly, we utilize KCF to track these objects over the following frames, and Faster R-CNN is used again to re-detect objects inside the tracked boxes to prevent tracking failure and to remove empty boxes. Finally, we utilize Faster R-CNN to detect objects in the next image and refine the object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust, and accurate, and can be applied to USVs in practice.
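
    A rough sketch of the box-refinement idea only: segment the image into superpixels, keep those whose centres fall inside an initial detector box, and emit the circumscribed bounding box of the kept superpixels. The simple containment rule stands in for the paper's combination strategy.

      import numpy as np
      from skimage.segmentation import slic

      def refine_box(image, box):
          x1, y1, x2, y2 = box
          segments = slic(image, n_segments=400, compactness=10)     # superpixel labels
          ys, xs = np.mgrid[0:image.shape[0], 0:image.shape[1]]
          keep = set()
          for label in np.unique(segments):
              mask = segments == label
              cy, cx = ys[mask].mean(), xs[mask].mean()
              if x1 <= cx <= x2 and y1 <= cy <= y2:                  # centre inside the raw box
                  keep.add(label)
          if not keep:
              return box                                             # nothing to refine
          mask = np.isin(segments, list(keep))
          ys_in, xs_in = np.nonzero(mask)
          return xs_in.min(), ys_in.min(), xs_in.max(), ys_in.max()  # circumscribed box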

  17. Ultrasound detection of simulated intra-ocular foreign bodies by minimally trained personnel.

    PubMed

    Sargsyan, Ashot E; Dulchavsky, Alexandria G; Adams, James; Melton, Shannon; Hamilton, Douglas R; Dulchavsky, Scott A

    2008-01-01

    To test the ability of non-expert ultrasound operators of divergent backgrounds to detect the presence, size, location, and composition of foreign bodies in an ocular model. High school students (N = 10) and NASA astronauts (N = 4) completed a brief ultrasound training session which focused on basic ultrasound principles and the detection of foreign bodies. The operators used portable ultrasound devices to detect foreign objects of varying location, size (0.5-2 mm), and material (glass, plastic, metal) in a gelatinous ocular model. Operator findings were compared to known foreign object parameters and ultrasound experts (N = 2) to determine accuracy across and between groups. Ultrasound had high sensitivity (astronauts 85%, students 87%, and experts 100%) and specificity (astronauts 81%, students 83%, and experts 95%) for the detection of foreign bodies. All user groups were able to accurately detect the presence of foreign bodies in this model (astronauts 84%, students 81%, and experts 97%). Astronaut and student sensitivity results for material (64% vs. 48%), size (60% vs. 46%), and position (77% vs. 64%) were not statistically different. Experts' results for material (85%), size (90%), and position (98%) were higher; however, the small sample size precluded statistical conclusions. Ultrasound can be used by operators with varying training to detect the presence, location, and composition of intraocular foreign bodies with high sensitivity, specificity, and accuracy.

  18. Human Location Detection System Using Micro-Electromechanical Sensor for Intelligent Fan

    NASA Astrophysics Data System (ADS)

    Parnin, S.; Rahman, M. M.

    2017-03-01

    This paper presents the development of a sensory system that detects both the presence and the location of a human in a room using a MEMS thermal sensor. The system can detect the surface temperature of occupants without contact at distances of up to 6 meters, and it can be integrated into any swing-type electrical appliance, such as a standing fan or similar device. Distinguishing humans from other moving or static objects by heat alone is difficult, since humans, animals, and electrical appliances all produce heat, and heat properties that change and transfer add to the detection problem. Integrating a low-cost MEMS-based thermal sensor solves the first part of the human-sensing problem through its ability to detect a stationary human; further discrimination and analysis of the measured temperature data are therefore required to distinguish humans from other objects. In this project, the fan is designed and programmed so that it can adapt to different events, from the human-sensing stage to its dynamic and mechanical moving parts. Initial testing of the Omron D6T microelectromechanical thermal sensor is currently under way in several experimental stages. Experimental results show that the sensor readings for stationary and moving humans are behaviorally distinguishable and that the human position can be located by detecting the maximum temperature in each sensor reading.
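
    For illustration only: given one temperature frame from a small thermal array (the Omron D6T-44L reports a 4x4 grid), the hottest cell can be taken as the candidate human location and mapped to a fan swing angle. The temperature range, field of view, and frame source below are assumptions; the real sensor is read over I2C.

      import numpy as np

      def locate_person(frame_c, human_range=(28.0, 38.0), fov_deg=45.0):
          """frame_c: 4x4 array of surface temperatures in degrees Celsius."""
          hottest = np.unravel_index(np.argmax(frame_c), frame_c.shape)
          t_max = frame_c[hottest]
          if not (human_range[0] <= t_max <= human_range[1]):
              return None                                        # nothing human-like in view
          col = hottest[1]
          angle = (col / (frame_c.shape[1] - 1) - 0.5) * fov_deg # crude azimuth estimate
          return angle

      frame = np.full((4, 4), 24.0); frame[2, 3] = 33.5          # toy frame with a warm target
      print(locate_person(frame))                                # ~ +22.5 degrees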

  19. Does visual working memory represent the predicted locations of future target objects? An event-related brain potential study.

    PubMed

    Grubert, Anna; Eimer, Martin

    2015-11-11

    During the maintenance of task-relevant objects in visual working memory, the contralateral delay activity (CDA) is elicited over the hemisphere opposite to the visual field where these objects are presented. The presence of this lateralised CDA component demonstrates the existence of position-dependent object representations in working memory. We employed a change detection task to investigate whether the represented object locations in visual working memory are shifted in preparation for the known location of upcoming comparison stimuli. On each trial, bilateral memory displays were followed after a delay period by bilateral test displays. Participants had to encode and maintain three visual objects on one side of the memory display, and to judge whether they were identical or different to three objects in the test display. Task-relevant memory and test stimuli were located in the same visual hemifield in the no-shift task, and on opposite sides in the horizontal shift task. CDA components of similar size were triggered contralateral to the memorized objects in both tasks. The absence of a polarity reversal of the CDA in the horizontal shift task demonstrated that there was no preparatory shift of memorized object location towards the side of the upcoming comparison stimuli. These results suggest that visual working memory represents the locations of visual objects during encoding, and that the matching of memorized and test objects at different locations is based on a comparison process that can bridge spatial translations between these objects. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Application of Frequency of Detection Methods in Design and Optimization of the INL Site Ambient Air Monitoring Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rood, Arthur S.; Sondrup, A. Jeffrey

    This report presents an evaluation of a hypothetical INL Site monitoring network and the existing INL air monitoring network using frequency of detection methods. The hypothetical network was designed to address the requirement in 40 CFR Part 61, Subpart H (2006) that “emissions of radionuclides to ambient air from U.S. DOE facilities shall not exceed those amounts that would cause any member of the public to receive in any year an effective dose equivalent exceeding 10 mrem/year.” To meet the requirement for monitoring only, “radionuclide releases that would result in an effective dose of 10% of the standard shall be readily detectable and distinguishable from background.” Thus, the hypothetical network consists of air samplers placed at residence locations that surround INL and at other locations where onsite livestock grazing takes place. Two exposure scenarios were used in this evaluation: a resident scenario and a shepherd/rancher scenario. The resident was assumed to be continuously present at their residence, while the shepherd/rancher was assumed to be present 24 hours a day at a fixed location on the grazing allotment. Important radionuclides were identified from annual INL radionuclide National Emission Standards for Hazardous Pollutants reports. Important radionuclides were defined as those that potentially contribute 1% or greater to the annual total dose at the radionuclide National Emission Standards for Hazardous Pollutants maximally exposed individual location and include H-3, Am-241, Pu-238, Pu-239, Cs-137, Sr-90, and I-131. For this evaluation, the network performance objective was set at achieving a frequency of detection greater than or equal to 95%. Results indicated that the hypothetical network for the resident scenario met all performance objectives for H-3 and I-131 and most performance objectives for Cs-137 and Sr-90. However, all actinides failed to meet the performance objectives for most sources. The shepherd/rancher scenario showed that air samplers placed around the facilities every 22.5 degrees were very effective in detecting releases, but this arrangement is not practical or cost effective. However, it was shown that a few air samplers placed in the prevailing wind direction around each facility could achieve the performance objective of a frequency of detection greater than or equal to 95% for the shepherd/rancher scenario. The results also indicate some of the current sampler locations have little or no impact on the network frequency of detection and could be removed from the network with no appreciable deterioration of performance. Results show that with some slight modifications to the existing network (i.e., additional samplers added north and south of the Materials and Fuels Complex and ineffective samplers removed), the network would achieve performance objectives for all sources for both the resident and shepherd/rancher scenarios.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eads, Damian Ryan; Rosten, Edward; Helmbold, David

    The authors present BEAMER: a new spatially exploitative approach to learning object detectors which shows excellent results when applied to the task of detecting objects in greyscale aerial imagery in the presence of ambiguous and noisy data. There are four main contributions used to produce these results. First, they introduce a grammar-guided feature extraction system, enabling the exploration of a richer feature space while constraining the features to a useful subset. This is specified with a rule-based generative grammar crafted by a human expert. Second, they learn a classifier on this data using a newly proposed variant of AdaBoost which takes into account the spatially correlated nature of the data. Third, they perform another round of training to optimize the method of converting the pixel classifications generated by boosting into a high-quality set of (x,y) locations. Lastly, they carefully define three common problems in object detection and define two evaluation criteria that are tightly matched to these problems. Major strengths of this approach are: (1) a way of randomly searching a broad feature space, (2) its performance when evaluated on well-matched evaluation criteria, and (3) its use of the location prediction domain to learn object detectors as well as to generate detections that perform well on several tasks: object counting, tracking, and target detection. They demonstrate the efficacy of BEAMER with a comprehensive experimental evaluation on a challenging data set.
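
    A sketch of the final conversion stage only (the grammar-guided features and the spatial AdaBoost variant are not shown): turn a per-pixel classification score map into a sparse set of (x, y) detections by thresholding and greedy non-maximum suppression.

      import numpy as np

      def score_map_to_detections(scores, threshold=0.5, min_separation=8):
          scores = scores.astype(float).copy()
          detections = []
          while True:
              y, x = np.unravel_index(np.argmax(scores), scores.shape)
              if scores[y, x] < threshold:
                  break
              detections.append((x, y, scores[y, x]))
              y0, y1 = max(0, y - min_separation), y + min_separation + 1
              x0, x1 = max(0, x - min_separation), x + min_separation + 1
              scores[y0:y1, x0:x1] = -np.inf          # suppress neighbours of this peak
          return detections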

  2. Hiding the Source Based on Limited Flooding for Sensor Networks.

    PubMed

    Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing

    2015-11-17

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.

  3. Object Detection Techniques Applied on Mobile Robot Semantic Navigation

    PubMed Central

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-01-01

    The future of robotics predicts that robots will integrate more closely every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a strong need for algorithms that provide robots with these sorts of skills, from locating the objects needed to accomplish a task to treating those objects as information about the environment. This paper presents a way to provide mobile robots with the skill of detecting objects for semantic navigation, using current trends in robotics in a way that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and the two are combined to overcome their respective limitations. Finally, the code is tested on a real robot to prove its accuracy and efficiency. PMID:24732101
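
    A hedged sketch of how the two cues named above could be combined with OpenCV (not the paper's code): contours propose candidate regions, and ORB descriptor matching against a model image of the object confirms or rejects each candidate. Thresholds are arbitrary.

      import cv2

      def detect_object(scene_bgr, model_bgr, min_matches=15):
          gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
          edges = cv2.Canny(gray, 80, 160)
          contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

          orb = cv2.ORB_create()
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          _, model_desc = orb.detectAndCompute(cv2.cvtColor(model_bgr, cv2.COLOR_BGR2GRAY), None)

          hits = []
          for c in contours:
              x, y, w, h = cv2.boundingRect(c)
              if w * h < 400:                          # skip tiny contours
                  continue
              _, desc = orb.detectAndCompute(gray[y:y + h, x:x + w], None)
              if desc is not None and len(matcher.match(desc, model_desc)) >= min_matches:
                  hits.append((x, y, w, h))            # contour region confirmed by descriptors
          return hits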

  4. Multi-class geospatial object detection based on a position-sensitive balancing framework for high spatial resolution remote sensing imagery

    NASA Astrophysics Data System (ADS)

    Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei

    2018-04-01

    Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting objects in such imagery is a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks combining deep-learning-based region proposal generation and object detection have greatly improved the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, because of the translation invariance introduced by the convolution operations in the convolutional neural network (CNN), the classification stage is seldom affected, but the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded. This dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage has not been addressed for HSR remote sensing imagery, and it causes position-accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of the integrated region proposal generation and object detection framework for HSR remote sensing imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, to resolve the dilemma between translation-invariance in the classification stage and translation-variance in the object detection stage. In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated on a publicly available 10-class object detection dataset.

  5. Border-oriented post-processing refinement on detected vehicle bounding box for ADAS

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Zhang, Zhaoning; Li, Minne; Li, Dongsheng

    2018-04-01

    We investigate a new approach for improving the localization accuracy of detected vehicles for object detection in advanced driver assistance systems (ADAS). Specifically, we implement a bounding box refinement as a post-processing step for state-of-the-art object detectors (Faster R-CNN, YOLOv2, etc.). The bounding box refinement is achieved by individually adjusting each border of the detected bounding box to its target location using a regression method. We use HOG features, which perform well on the edge detection of vehicles, to train the regressor, and the regressor is independent of the CNN-based object detectors. Experimental results on the KITTI 2012 benchmark show that we can achieve up to 6% improvement over the YOLOv2 and Faster R-CNN object detectors at an IoU threshold of 0.8. Also, the proposed refinement framework is computationally light, allowing one bounding box to be processed within a few milliseconds on a CPU. Further, this refinement method can be added to any object detector, especially those with high speed but lower accuracy.
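
    A sketch of the post-processing idea under simplifying assumptions: extract HOG features from a band straddling one border of a detected box and regress a per-border offset. Four such regressors (left, right, top, bottom) would be trained offline; only the inference step for the left border is shown, with a hypothetical pre-trained regressor.

      import numpy as np
      from skimage.feature import hog
      from skimage.transform import resize

      def refine_left_border(image_gray, box, regressor, band=16):
          """Shift the left border of box = (x1, y1, x2, y2) by a regressed offset."""
          x1, y1, x2, y2 = box
          strip = image_gray[y1:y2, max(0, x1 - band):x1 + band]    # band straddling the border
          strip = resize(strip, (64, 32))                           # fixed-size input for HOG
          feat = hog(strip, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
          offset = regressor.predict(feat.reshape(1, -1))[0]        # e.g. an sklearn regressor
          return (x1 + int(round(offset)), y1, x2, y2)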

  6. MUSIC algorithms for rebar detection

    NASA Astrophysics Data System (ADS)

    Solimene, Raffaele; Leone, Giovanni; Dell'Aversano, Angela

    2013-12-01

    The MUSIC (MUltiple SIgnal Classification) algorithm is employed to detect and localize an unknown number of scattering objects which are small in size compared to the wavelength. The ensemble of objects to be detected consists of both strong and weak scatterers. This represents a scattering environment that is challenging for detection purposes, as strong scatterers tend to mask the weak ones. Consequently, the detection of the more weakly scattering objects is not always guaranteed and can be completely impaired when the noise corrupting the data is relatively high. To overcome this drawback, a new technique is proposed here, starting from the idea of applying a two-stage MUSIC algorithm. In the first stage, strong scatterers are detected. Then, information concerning their number and location is employed in the second stage, which focuses only on the weak scatterers. The role of an adequate scattering model is emphasized in order to drastically improve detection performance in realistic scenarios.
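
    A generic MUSIC sketch under simplifying assumptions (single-frequency data, free-space steering vectors), not the two-stage scheme of the paper: project candidate-location steering vectors onto the noise subspace of the measured multistatic matrix and peak-pick the pseudospectrum.

      import numpy as np

      def music_pseudospectrum(K, array_xy, grid_xy, wavelength, n_scatterers):
          """K: NxN multistatic matrix, array_xy: Nx2 antenna positions, grid_xy: Mx2 test points."""
          _, _, Vh = np.linalg.svd(K)
          noise = Vh[n_scatterers:].conj().T            # noise-subspace basis, N x (N - s)
          k0 = 2.0 * np.pi / wavelength
          spectrum = np.empty(len(grid_xy))
          for m, p in enumerate(grid_xy):
              r = np.linalg.norm(array_xy - p, axis=1)
              g = np.exp(1j * k0 * r) / np.maximum(r, 1e-6)   # free-space steering vector
              g /= np.linalg.norm(g)
              spectrum[m] = 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-12)
          return spectrum                               # peaks indicate scatterer locations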

  7. A Wireless Object Location Detector Enabling People with Developmental Disabilities to Control Environmental Stimulation through Simple Occupational Activities with Nintendo Wii Balance Boards

    ERIC Educational Resources Information Center

    Shih, Ching-Hsiang; Chang, Man-Ling

    2012-01-01

    The latest research has adopted software technology, turning the Nintendo Wii Balance Board into a high-performance standing location detector with a newly developed standing location detection program (SLDP). This study extended SLDP functionality to assess whether two people with developmental disabilities would be able to actively perform…

  8. An eye tracking investigation of color-location binding in infants' visual short-term memory.

    PubMed

    Oakes, Lisa M; Baumgartner, Heidi A; Kanjlia, Shipra; Luck, Steven J

    2017-01-01

    Two experiments examined 8- and 10-month-old infants' (N = 71) binding of object identity (color) and location information in visual short-term memory (VSTM) using a one-shot change detection task. Building on previous work using the simultaneous streams change detection task, we confirmed that 8- and 10-month-old infants are sensitive to changes in binding between identity and location in VSTM. Further, we demonstrated that infants recognize specifically what changed in these events. Thus, infants' VSTM for binding is robust and can be observed in different procedures and with different stimuli.

  9. Dax Gets the Nod: Toddlers Detect and Use Social Cues to Evaluate Testimony

    PubMed Central

    Fusaro, Maria; Harris, Paul L.

    2016-01-01

    Children ages 18 and 24 months were assessed for the ability to understand and learn from an adult’s nonverbal expression of agreement and disagreement with a speaker’s claims. In one type of communicative exchange, a speaker made 2 different claims about the identity or location of an object. The hearer nodded her head in agreement with one claim and shook her head in disagreement with the other claim. In a second type of exchange, the speaker asked 2 different questions about the identity or location of an object. The hearer nodded her head in response to one question and shook her head in response to the other. The 24-month-olds grasped the implication of these gestural responses, by inferring the correct name or location of the object. The 18-month-olds showed a limited grasp of their implications. Thus, in learning from others’ testimony, toddlers focus not only on the claims of a single speaker but also on whether that information is accepted or rejected by another hearer. In particular, they detect and act on social cues of assent and dissent. PMID:23127298

  10. A biological hierarchical model based underwater moving object detection.

    PubMed

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is the key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the superior visual sensing abilities of animals in underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are more adaptive to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adaptation in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
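
    A much-simplified sketch of the two-level idea only: a block-wise intensity comparison against a background frame first flags rough object blocks, and a per-pixel difference then refines the mask inside those blocks, standing in for the paper's texture analysis. Thresholds are arbitrary.

      import numpy as np

      def detect_moving_object(frame_gray, background, block=16, intensity_thr=18.0, pixel_thr=6.0):
          h, w = frame_gray.shape
          rough = np.zeros((h, w), dtype=bool)
          for y in range(0, h - block + 1, block):
              for x in range(0, w - block + 1, block):
                  f = frame_gray[y:y + block, x:x + block].astype(float)
                  b = background[y:y + block, x:x + block].astype(float)
                  if abs(f.mean() - b.mean()) > intensity_thr:   # rough object block
                      rough[y:y + block, x:x + block] = True
          diff = np.abs(frame_gray.astype(float) - background.astype(float))
          fine = rough & (diff > pixel_thr)                      # pixel-level refinement
          return fine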

  11. A Biological Hierarchical Model Based Underwater Moving Object Detection

    PubMed Central

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is the key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the superior visual sensing abilities of animals in underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are more adaptive to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adaptation in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing mechanism of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model that roughly identifies the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results. PMID:25140194

  12. The model of the optical-electronic control system of vehicles location at level crossing

    NASA Astrophysics Data System (ADS)

    Verezhinskaia, Ekaterina A.; Gorbachev, Aleksei A.; Maruev, Ivan A.; Shavrygina, Margarita A.

    2016-04-01

    A level crossing, where a railway line crosses a motor road at the same level, is one of the most dangerous sections of the road network. The collision of trains with vehicles at a level crossing is a serious type of road traffic accident. The purpose of this research is to develop a complex optical-electronic control system for vehicle location in the danger zone of a level crossing. The system consists of registration blocks (each including a photodetector, lens, and infrared emitting diode), determinant devices, and a camera installed within the boundaries of the level crossing. The system detects objects (vehicles) by analysing the time during which an object moves opposite a registration block and the level of the signal reflected from the object. The paper presents a theoretical description and experimental research on the main principles of the system's operation. Experimental research on the system model with the selected optical-electronic components has confirmed the possibility of detecting metal objects at the required distance (0.5 - 2 m) under different levels of background illuminance.

  13. Gamma watermarking

    DOEpatents

    Ishikawa, Muriel Y.; Wood, Lowell L.; Lougheed, Ronald W.; Moody, Kenton J.; Wang, Tzu-Fang

    2004-05-25

    A covert, gamma-ray "signature" is used as a "watermark" for property identification. This new watermarking technology is based on a unique steganographic, or "hidden writing," digital signature, implemented in tiny quantities of gamma-ray-emitting radioisotopic material combinations, generally covertly emplaced on or within an object. This digital signature may be readily recovered at distant future times by placing a sensitive, high energy-resolution gamma-ray detecting instrument reasonably precisely over the location of the watermark, which location may be known only to the object's owner; however, the signature is concealed from all ordinary detection means because its exceedingly low level of activity is obscured by the natural radiation background (including the gamma radiation naturally emanating from the object itself, from cosmic radiation and material surroundings, from human bodies, etc.). The "watermark" is used in object tagging for establishing object identity, history, or ownership. It thus may serve as an aid to law enforcement officials in identifying stolen property and prosecuting theft thereof. Highly effective, potentially very low-cost identification on demand of items of almost all types is thus made possible.

  14. Feasibility of real-time location systems in monitoring recovery after major abdominal surgery.

    PubMed

    Dorrell, Robert D; Vermillion, Sarah A; Clark, Clancy J

    2017-12-01

    Early mobilization after major abdominal surgery decreases postoperative complications and length of stay, and has become a key component of enhanced recovery pathways. However, objective measures of patient movement after surgery are limited. Real-time location systems (RTLS), typically used for asset tracking, provide a novel approach to monitoring in-hospital patient activity. The current study investigates the feasibility of using RTLS to objectively track postoperative patient mobilization. The real-time location system employs a meshed network of infrared and RFID sensors and detectors that sample device locations every 3 s resulting in over 1 million data points per day. RTLS tracking was evaluated systematically in three phases: (1) sensitivity and specificity of the tracking device using simulated patient scenarios, (2) retrospective passive movement analysis of patient-linked equipment, and (3) prospective observational analysis of a patient-attached tracking device. RTLS tracking detected a simulated movement out of a room with sensitivity of 91% and specificity 100%. Specificity decreased to 75% if time out of room was less than 3 min. All RTLS-tagged patient-linked equipment was identified for 18 patients, but measurable patient movement associated with equipment was detected for only 2 patients (11%) with 1-8 out-of-room walks per day. Ten patients were prospectively monitored using RTLS badges following major abdominal surgery. Patient movement was recorded using patient diaries, direct observation, and an accelerometer. Sensitivity and specificity of RTLS patient tracking were both 100% in detecting out-of-room ambulation and correlated well with direct observation and patient-reported ambulation. Real-time location systems are a novel technology capable of objectively and accurately monitoring patient movement and provide an innovative approach to promoting early mobilization after surgery.

  15. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
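
    A small sketch of the core statistic named above, under simple assumptions (uniform per-channel quantization, a single fixed offset): a co-occurrence histogram of quantized color pairs sampled at edge pixels. Perceptual color naming and the prescreening stage are not included.

      import numpy as np

      def color_edge_cooccurrence(image_rgb, edge_mask, offset=(0, 4), bins=8):
          """image_rgb: HxWx3 uint8; edge_mask: HxW bool; offset: (dy, dx) pixel displacement."""
          q = (image_rgb // (256 // bins)).astype(int)               # quantize each channel
          codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
          dy, dx = offset
          hist = np.zeros((bins ** 3, bins ** 3))
          ys, xs = np.nonzero(edge_mask)
          for y, x in zip(ys, xs):
              y2, x2 = y + dy, x + dx
              if 0 <= y2 < codes.shape[0] and 0 <= x2 < codes.shape[1]:
                  hist[codes[y, x], codes[y2, x2]] += 1              # color pair across the offset
          return hist / max(hist.sum(), 1)                           # normalize for comparison with a model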

  16. Berkeley UXO Discriminator (BUD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperikova, Erika; Smith, J. Torquil; Morrison, H. Frank

    2007-01-01

    The Berkeley UXO Discriminator (BUD) is an optimally designed active electromagnetic system that not only detects but also characterizes UXO. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. It has two modes of operation: (1) search mode, in which BUD moves along a profile and detects targets in its vicinity, providing target depth and horizontal location, and (2) discrimination mode, in which BUD, stationary above a target, determines from a single position three discriminating polarizability responses together with the object location and orientation. The performance of the system is governed by a target size-depth curve. Maximum detection depth is 1.5 m. While UXO objects have a single major polarizability coincident with the long axis of the object and two equal transverse polarizabilities, scrap metal has three different principal polarizabilities. The results clearly show that there are very clear distinctions between symmetric intact UXO and irregular scrap metal, and that BUD can resolve the intrinsic polarizabilities of the target. The field survey at the Yuma Proving Ground in Arizona showed excellent results within the predicted size-depth range.

  17. Deep Learning for Real-Time Capable Object Detection and Localization on Mobile Platforms

    NASA Astrophysics Data System (ADS)

    Particke, F.; Kolbenschlag, R.; Hiller, M.; Patiño-Studencki, L.; Thielecke, J.

    2017-10-01

    Industry 4.0 is one of the most formative terms of our time. Research focuses particularly on smart and autonomous mobile platforms, which greatly lighten workloads and optimize production processes. In order to interact with humans, the platforms need in-depth knowledge of the environment and therefore must detect a variety of static and non-static objects. The goal of this paper is to propose an accurate, real-time-capable object detection and localization approach for use on mobile platforms. A method is introduced that uses the powerful detection capabilities of a neural network for the localization of objects: detection information from the neural network is combined with depth information from an RGB-D camera mounted on a mobile platform. As the detection network, YOLO Version 2 (YOLOv2) is used on a mobile robot. In order to find a detected object in the depth image, the bounding boxes predicted by YOLOv2 are mapped to the corresponding regions in the depth image. This provides a powerful and extremely fast approach for establishing a real-time-capable object locator. In the evaluation, the localization approach turns out to be very accurate, although it depends on the detected object itself and on some additional parameters, which are analysed in this paper.
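
    A sketch of the mapping step under the usual pinhole-camera assumptions: read the median depth of the region corresponding to the detector box and back-project the box centre to a 3D point with the camera intrinsics (fx, fy, cx, cy). This is illustrative, not the paper's implementation.

      import numpy as np

      def localize_detection(depth_m, box, fx, fy, cx, cy):
          """depth_m: HxW depth image in metres, registered to the RGB frame; box from the detector."""
          x1, y1, x2, y2 = [int(v) for v in box]
          region = depth_m[y1:y2, x1:x2]
          z = np.median(region[region > 0])            # robust range of the detected object
          u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # box centre in pixel coordinates
          x = (u - cx) * z / fx                        # pinhole back-projection
          y = (v - cy) * z / fy
          return np.array([x, y, z])                   # object position in the camera frame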

  18. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.

  19. Perceiving environmental structure from optical motion

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.

    1991-01-01

    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects is examined.

  20. Laser-based structural sensing and surface damage detection

    NASA Astrophysics Data System (ADS)

    Guldur, Burcu

    Damage due to age or accumulated damage from hazards on existing structures poses a worldwide problem. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess the present conditions. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high-resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state of the art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets, such as the location, orientation and size of objects in a scanned region, and the location of damaged regions on a structure. For this purpose, first a methodology for processing range data to identify objects in a scene is presented; then, once the objects from the model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges. The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides information in the fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images of varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables the development of surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types, collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with measurements taken from test specimens and test-bed bridges.
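
    As an illustrative aside under simple assumptions (one segmented, nominally planar patch), the surface-fitting and deviation step described above can be sketched as follows: fit a plane by least squares and flag points whose out-of-plane deviation exceeds a tolerance as candidate surface damage.

      import numpy as np

      def flag_surface_damage(points_xyz, tolerance_m=0.005):
          """points_xyz: Nx3 laser-scan points belonging to one nominally planar surface."""
          centroid = points_xyz.mean(axis=0)
          centered = points_xyz - centroid
          _, _, vh = np.linalg.svd(centered, full_matrices=False)
          normal = vh[-1]                              # smallest singular vector = plane normal
          deviations = centered @ normal               # signed out-of-plane distances
          damaged = np.abs(deviations) > tolerance_m   # e.g. spalling or local deformation
          return damaged, deviations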

  1. Deep Space Wide Area Search Strategies

    NASA Astrophysics Data System (ADS)

    Capps, M.; McCafferty, J.

    There is an urgent need to expand the space situational awareness (SSA) mission beyond catalog maintenance to providing near real-time indications and warnings of emerging events. While building and maintaining a catalog of space objects is essential to SSA, this does not address the threat of uncatalogued and uncorrelated deep space objects. The Air Force therefore has an interest in transformative technologies to scan the geostationary (GEO) belt for uncorrelated space objects. Traditional ground based electro-optical sensors are challenged in simultaneously detecting dim objects while covering large areas of the sky using current CCD technology. Time delayed integration (TDI) scanning has the potential to enable significantly larger coverage rates while maintaining sensitivity for detecting near-GEO objects. This paper investigates strategies of employing TDI sensing technology from a ground based electro-optical telescope, toward providing tactical indications and warnings of deep space threats. We present results of a notional wide area search TDI sensor that scans the GEO belt from three locations: Maui, New Mexico, and Diego Garcia. Deep space objects in the NASA 2030 debris catalog are propagated over multiple nights as an indicative data set to emulate notional uncatalogued near-GEO orbits which may be encountered by the TDI sensor. Multiple scan patterns are designed and simulated, to compare and contrast performance based on 1) efficiency in coverage, 2) number of objects detected, and 3) rate at which detections occur, to enable follow-up observations by other space surveillance network (SSN) sensors. A step-stare approach is also modeled using a dedicated, co-located sensor notionally similar to the Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) tower. Equivalent sensitivities are assumed. This analysis quantifies the relative benefit of TDI scanning for the wide area search mission.

  2. Using unmanned aerial vehicle-borne magnetic sensors to detect and locate improvised explosive devices and unexploded ordnance

    NASA Astrophysics Data System (ADS)

    Trammell, Hoke S., III; Perry, Alexander R.; Kumar, Sankaran; Czipott, Peter V.; Whitecotton, Brian R.; McManus, Tobin J.; Walsh, David O.

    2005-05-01

    Magnetic sensors configured as a tensor magnetic gradiometer not only detect magnetic targets, but also determine their location and their magnetic moment. Magnetic moment information can be used to characterize and classify objects. Unexploded ordnance (UXO) and thus many types of improvised explosive device (IED) contain steel, and thus can be detected magnetically. Suitable unmanned aerial vehicle (UAV) platforms, both gliders and powered craft, can enable coverage of a search area much more rapidly than surveys using, for instance, total-field magnetometers. We present data from gradiometer passes over different shells using a gradiometer mounted on a moving cart. We also provide detection range and speed estimates for aerial detection by a UAV.

  3. TU-EF-204-11: Impact of Using Multi-Slice Training Sets On the Performance of a Channelized Hotelling Observer in a Low-Contrast Detection Task in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favazza, C; Yu, L; Leng, S

    2015-06-15

    Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model, in order to reduce the number of repeated scans needed for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low-contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ~5 cm). The rods were submerged in a 35 x 25 cm² iodine-doped, water-filled phantom, yielding -15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed at 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had an insignificant impact on object detectability. An inter-slice pixel correlation of >~0.2 yielded positive bias in the model's performance. Incorporating multi-slice image data yielded a slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single-slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations spaced at least 5 mm apart into the CHO model yielded detectability indices within measurement error of the single-slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project was supported by National Institutes of Health grants R01 EB017095 and U01 EB017185 from the National Institute of Biomedical Imaging and Bioengineering.
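
    For illustration only, a compact CHO computation on channelized image data might look as follows; the Gabor channel construction and the multi-slice training-set assembly described above are assumed to be done elsewhere, and all array names are hypothetical.

      import numpy as np

      def cho_detectability(imgs_present, imgs_absent, channels):
          """imgs_*: (n, H, W) image stacks; channels: (H, W, C) channel templates."""
          flat = channels.reshape(-1, channels.shape[-1])              # (H*W, C)
          v_p = imgs_present.reshape(len(imgs_present), -1) @ flat     # channel outputs, signal present
          v_a = imgs_absent.reshape(len(imgs_absent), -1) @ flat       # channel outputs, signal absent
          dmean = v_p.mean(axis=0) - v_a.mean(axis=0)
          S = 0.5 * (np.cov(v_p, rowvar=False) + np.cov(v_a, rowvar=False))
          w = np.linalg.solve(S, dmean)                                # Hotelling template
          return float(np.sqrt(dmean @ w))                             # detectability index d'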

  4. Functional connectivity supporting the selective maintenance of feature-location binding in visual working memory

    PubMed Central

    Takahama, Sachiko; Saiki, Jun

    2014-01-01

    Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding determined by reanalyzing the data from Takahama et al. (2010) demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding. PMID:24917833

  5. Functional connectivity supporting the selective maintenance of feature-location binding in visual working memory.

    PubMed

    Takahama, Sachiko; Saiki, Jun

    2014-01-01

    Information on an object's features bound to its location is very important for maintaining object representations in visual working memory. Interactions with dynamic multi-dimensional objects in an external environment require complex cognitive control, including the selective maintenance of feature-location binding. Here, we used event-related functional magnetic resonance imaging to investigate brain activity and functional connectivity related to the maintenance of complex feature-location binding. Participants were required to detect task-relevant changes in feature-location binding between objects defined by color, orientation, and location. We compared a complex binding task requiring complex feature-location binding (color-orientation-location) with a simple binding task in which simple feature-location binding, such as color-location, was task-relevant and the other feature was task-irrelevant. Univariate analyses showed that the dorsolateral prefrontal cortex (DLPFC), hippocampus, and frontoparietal network were activated during the maintenance of complex feature-location binding. Functional connectivity analyses indicated cooperation between the inferior precentral sulcus (infPreCS), DLPFC, and hippocampus during the maintenance of complex feature-location binding. In contrast, the connectivity for the spatial updating of simple feature-location binding determined by reanalyzing the data from Takahama et al. (2010) demonstrated that the superior parietal lobule (SPL) cooperated with the DLPFC and hippocampus. These results suggest that the connectivity for complex feature-location binding does not simply reflect general memory load and that the DLPFC and hippocampus flexibly modulate the dorsal frontoparietal network, depending on the task requirements, with the infPreCS involved in the maintenance of complex feature-location binding and the SPL involved in the spatial updating of simple feature-location binding.

  6. Accelerated SPECT Monte Carlo Simulation Using Multiple Projection Sampling and Convolution-Based Forced Detection

    NASA Astrophysics Data System (ADS)

    Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.

    2008-02-01

    Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time, as much of the computational load of simulating photon transport through the object is done only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of point spread function (PSF), producing a correlation coefficient (r²) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results.

  7. Lumber defect detection abilities of furniture rough mill employees

    Treesearch

    Henry A. Huber; Charles W. McMillin; John P. McKinney

    1985-01-01

    To cut parts from boards, rough mill employees must be able to see defects, calculate the proper location of cuts, manually position the board, and remain alert. The objective of this study was to evaluate how well rough mill employees perform the task of recognizing, locating, and identifying surface defects independent of the calculation and positioning process....

  8. UXO Detection and Characterization using new Berkeley UXO Discriminator (BUD)

    NASA Astrophysics Data System (ADS)

    Gasperikova, E.; Morrison, H. F.; Smith, J. T.; Becker, A.

    2006-05-01

    An optimally designed active electromagnetic (AEM) system, the Berkeley UXO Discriminator (BUD), has been developed for detection and characterization of UXO in the 20 mm to 150 mm size range. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. The transmitter-receiver assembly, together with the acquisition box, battery power, and GPS receiver, is mounted on a small cart to ensure system mobility. BUD not only detects the object itself but also quantitatively determines its size, shape, orientation, and metal content (ferrous or non-ferrous, mixed metals). Moreover, the principal polarizabilities and size of a metallic target can be determined from a single position of the BUD platform. The search for UXO is a two-step process: the object must first be detected and its location determined, and then the parameters of the object must be defined. A satisfactory classification scheme is one that determines the principal dipole polarizabilities of a target. While UXO objects have a single major polarizability (principal moment) coincident with the long axis of the object and two equal transverse polarizabilities, scrap metal has three entirely different principal moments. This description of the inherent polarizabilities of a target is a major advance in discriminating UXO from irregular scrap metal. Our results clearly show that BUD can resolve the intrinsic polarizabilities of a target and that there are very clear distinctions between symmetric intact UXO and irregular scrap metal. Target properties are determined by an inversion algorithm, which at any given time inverts the response to yield the location (x, y, z) of the target, its attitude, and its principal polarizabilities (yielding an apparent aspect ratio). Signal-to-noise estimates (or measurements) are interpreted in this inversion to yield error estimates on the location, attitude, and polarizabilities. This inversion at a succession of times provides the polarizabilities as a function of time, which can in turn yield the size, true aspect ratio, and estimates of the conductivity and permeability of the target. The accuracy of these property estimates depends on the time window over which the polarizability measurements, and their accuracies, are known. Initial tests at a local site over a variety of test objects and inert UXOs showed excellent detection and characterization results within the predicted size-depth range. This research was funded by the U.S. Department of Defense under ESTCP Project # UX-0437.

  9. Eddy-Current Inspection of Ball Bearings

    NASA Technical Reports Server (NTRS)

    Bankston, B.

    1985-01-01

    A custom eddy-current probe locates surface anomalies. A low-friction air cushion within a cone allows the ball to roll easily. The eddy-current probe reliably detects surface and near-surface cracks, voids, and material anomalies in bearing balls or other spherical objects. Defects in the ball surface detected by the probe are displayed on a CRT and recorded on a strip-chart recorder.

  10. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as in crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present and automatically recognize and segment it by various computer-vision algorithms. Based on such segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediator device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as diffraction grating and a cylinder lens, the pistol size can be estimated. The exact location of the pistol in space remains static, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  11. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision applications, real-time detection and tracking of multiple objects is an important research field that has gained considerable attention in recent years for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object in video, and object representation is the step that precedes tracking it. Identifying multiple objects detected in a video sequence is a challenging task. Image registration has long been used as a basis for detecting multiple moving objects: the registration technique finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using region adjacency graphs of visual appearance and geometric properties; matching is then performed between graph sequences using multi-graph matching, and matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and offers a significant improvement over existing work on real-time detection of multiple moving objects.

  12. A novel framework for intelligent surveillance system based on abnormal human activity detection in academic environments.

    PubMed

    Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad

    2017-01-01

    Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using the Gaussian function. Furthermore, the shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
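    As a generic illustration of the temporal-differencing step described above (not the authors' full pipeline), the sketch below thresholds the difference between consecutive grayscale frames and returns bounding boxes of motion regions; the blur, threshold, and minimum-area values are assumptions.

    ```python
    # Minimal sketch of temporal differencing for motion-region detection.
    import cv2

    def detect_motion_regions(prev_frame, curr_frame, blur_sigma=3, thresh=25, min_area=100):
        """Return bounding boxes of moving regions between two grayscale frames."""
        prev = cv2.GaussianBlur(prev_frame, (0, 0), blur_sigma)    # smooth to suppress noise
        curr = cv2.GaussianBlur(curr_frame, (0, 0), blur_sigma)
        diff = cv2.absdiff(curr, prev)                             # temporal difference
        _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)                # fill small gaps
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]
    ```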

  13. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.

    PubMed

    Ren, Shaoqing; He, Kaiming; Girshick, Ross; Sun, Jian

    2017-06-01

    State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features; using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on the PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available.
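    For readers who want to try a region-proposal-based detector of this kind, the snippet below is a minimal sketch using the pretrained Faster R-CNN reimplementation that ships with torchvision (not the authors' original code); it assumes a recent torchvision release and a hypothetical input image "street.jpg".

    ```python
    # Minimal sketch of inference with torchvision's pretrained Faster R-CNN.
    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = to_tensor(Image.open("street.jpg").convert("RGB"))  # hypothetical input image
    with torch.no_grad():
        output = model([image])[0]          # dict with boxes, labels, scores for one image

    keep = output["scores"] > 0.5           # keep confident detections
    print(output["boxes"][keep], output["labels"][keep])
    ```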

  14. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflective wave to a large extent. Then we use the region of interest extraction method to separate the genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146

  15. Automatic target recognition and detection in infrared imagery under cluttered background

    NASA Astrophysics Data System (ADS)

    Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.

    2017-10-01

    Visual object classification has long been studied in the visible spectrum by utilizing conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNN) with a significant number of parameters. As infrared (IR) sensor technology has improved during the last two decades, labeled images extracted from IR sensors have started to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once the training is completed, the test samples are propagated over the network, and the probability of the test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where the classification is performed in a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step at every frame is avoided by running an efficient correlation filter based tracker. The detection part is performed when the tracker confidence is below a pre-defined threshold. The experiments conducted on the real field images demonstrate that the proposed detection and tracking framework presents satisfactory results for detecting tanks under cluttered background.

  16. Modeling job sites in real time to improve safety during equipment operation

    NASA Astrophysics Data System (ADS)

    Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen

    2006-03-01

    Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a way to perform equipment operation faster, safer, and more precisely. In addition, hazardous job site environments such as those on construction sites call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment navigation. Detecting and mapping people, materials, and equipment into a three-dimensional computer model allows the location and path to be analyzed and access to hazardous areas to be limited or restricted. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a high-frame update rate laser range scanning device. Applications related to heavy equipment operations on transportation and construction job sites are specified.

  17. Discovery of a Be/X-Ray Binary Consistent with the Location of GRO J2058+42

    NASA Technical Reports Server (NTRS)

    Wilson, Colleen; Weisskopf, Martin; Finger, Mark H.; Coe, M. J.; Greiner, Jochen; Reig, Pablo; Papamastorakis, Giannis

    2005-01-01

    GRO J2058+42 is a 195 s transient X-ray pulsar discovered in 1995 with BATSE. In 1996, RXTE located GRO J2058+42 to a 90% confidence error circle with a 4 arcmin radius. On 2004 February 20, the region including the error circle was observed with Chandra ACIS-I. No X-ray sources were detected within the error circle; however, two faint sources were detected in the ACIS-I field of view. We obtained optical observations of the brightest object, CXOU J205847.5+414637, which had about 64 X-ray counts and was just 0.13 arcmin outside the error circle. The optical spectrum contains a strong Hα line and corresponds to an infrared object in the Two Micron All Sky Survey catalog, indicating a Be/X-ray binary system. Pulsations were not detected in the Chandra observations, but similar flux variations and distance estimates suggest that CXOU J205847.5+414637 and GRO J2058+42 are the same object. We present results from the Chandra observation, optical observations, new and previously unreported RXTE observations, and a reanalysis of a ROSAT observation.

  18. Magnetic imager and method

    DOEpatents

    Powell, J.; Reich, M.; Danby, G.

    1997-07-22

    A magnetic imager includes a generator for practicing a method of applying a background magnetic field over a concealed object, with the object being effective to locally perturb the background field. The imager also includes a sensor for measuring perturbations of the background field to detect the object. In one embodiment, the background field is applied quasi-statically. And, the magnitude or rate of change of the perturbations may be measured for determining location, size, and/or condition of the object. 25 figs.

  19. Interferometric angle monitor

    NASA Technical Reports Server (NTRS)

    Minott, P. O. (Inventor)

    1983-01-01

    Two mutually coherent light beams formed from a single monochromatic light source were directed to a reflecting surface of a rotatable object. They were reflected into an imaging optical lens having a focal plane optically at infinity. A series of interference fringes were formed in the focal plane which were translated linearly in response to angular rotation of the object. Photodetectors were located adjacent the focal plane to detect the fringe translation and output a signal in response to the translation. The signal was fed to a signal processor which was adapted to count the number of fringes detected and develop a measure of the angular rotation and direction of the object.

  20. A framework to determine the locations of the environmental monitoring in an estuary of the Yellow Sea.

    PubMed

    Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong

    2018-06-04

    The characteristics of an estuary are determined by various factors, such as tide, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical to managing environmental quality and pollution, and so the monitoring locations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture the changes of water quality and pollutants depending on the variations of tide, current, and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with the constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Risk Factors Detection for Strategic Importance Objectives in Littoral Areas

    NASA Astrophysics Data System (ADS)

    Slămnoiu, G.; Radu, O.; Roşca, V.; Pascu, C.; Surdu, G.; Curcă, E.; Damian, R. G.; Rădulescu, A.

    2017-06-01

    With the invention and development of underwater explosive devices, the need to neutralize them has also appeared, both for enemy devices and for one's own devices once conflicts are finished. The fight against active underwater explosive devices is a very complicated action that requires a very careful approach. Also, in the current context, strategic importance objectives located in littoral areas can become targets for divers or fast boats (suicidal actions). The system for detection, localization, tracking, and identification of risk factors for strategic importance objectives in littoral areas has as one of its components an AUV and a hydro-acoustic sub-system for determining the 'fingerprints' of potential targets. The overall system will provide support for main missions such as underwater environment surveillance (detection, monitoring) in harbor areas and around other coastal objectives, ship anchorage areas, and mandatory passage points, and will also provide warnings about the presence of underwater and surface dangers in the areas of interest.

  2. A Kalman-Filter-Based Common Algorithm Approach for Object Detection in Surgery Scene to Assist Surgeon's Situation Awareness in Robot-Assisted Laparoscopic Surgery

    PubMed Central

    2018-01-01

    Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises the location of the object of interest using feature texture, morphological information, and the tracking of the object based on Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
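    To make the Kalman-filter tracking step concrete, here is a minimal sketch of a constant-velocity Kalman filter for a 2D object center; it is a generic illustration under assumed noise parameters, not the authors' surgical-instrument tracker.

    ```python
    # Minimal sketch of a constant-velocity Kalman filter for a 2D object center.
    import numpy as np

    class KalmanTracker2D:
        def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
            self.x = np.array([x0, y0, 0.0, 0.0])              # state: x, y, vx, vy
            self.P = np.eye(4) * 10.0                          # state covariance
            self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                               [0, 0, 1, 0], [0, 0, 0, 1]], float)
            self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
            self.Q = np.eye(4) * q                             # process noise
            self.R = np.eye(2) * r                             # measurement noise

        def step(self, z):
            """Predict, then correct with the measured center z = (x, y)."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            y = np.asarray(z, float) - self.H @ self.x         # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]                                  # filtered center
    ```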

  3. Structural Health Monitoring and Impact Detection Using Neural Networks for Damage Characterization

    NASA Technical Reports Server (NTRS)

    Ross, Richard W.

    2006-01-01

    Detection of damage due to foreign object impact is an important factor in the development of new aerospace vehicles. Acoustic waves generated on impact can be detected using a set of piezoelectric transducers, and the location of impact can be determined by triangulation based on the differences in the arrival time of the waves at each of the sensors. These sensors generate electrical signals in response to mechanical motion resulting from the impact as well as from natural vibrations. Due to electrical noise and mechanical vibration, accurately determining these time differentials can be challenging, and even small measurement inaccuracies can lead to significant errors in the computed damage location. Wavelet transforms are used to analyze the signals at multiple levels of detail, allowing the signals resulting from the impact to be isolated from ambient electromechanical noise. Data extracted from these transformed signals are input to an artificial neural network to aid in identifying the moment of impact from the transformed signals. By distinguishing which of the signal components are resultant from the impact and which are characteristic of noise and normal aerodynamic loads, the time differentials as well as the location of damage can be accurately assessed. The combination of wavelet transformations and neural network processing results in an efficient and accurate approach for passive in-flight detection of foreign object damage.
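    The triangulation from arrival-time differences described above can be illustrated with a small least-squares fit; the sensor layout, wave speed, and noise level in this sketch are assumptions for demonstration only, not values from the study.

    ```python
    # Minimal sketch of locating an impact from differences in acoustic arrival
    # times at several sensors on a plate.
    import numpy as np
    from scipy.optimize import least_squares

    SENSORS = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # sensor positions, m
    WAVE_SPEED = 1500.0  # assumed plate wave speed, m/s

    def arrival_time_deltas(impact_xy):
        """Arrival times relative to the first sensor for an impact at impact_xy."""
        d = np.linalg.norm(SENSORS - impact_xy, axis=1)
        t = d / WAVE_SPEED
        return t - t[0]          # differences cancel the unknown impact time

    def locate_impact(measured_deltas, guess=(0.5, 0.5)):
        """Least-squares fit of the impact position to the measured time deltas."""
        residuals = lambda p: arrival_time_deltas(p) - measured_deltas
        return least_squares(residuals, guess).x

    true_xy = np.array([0.3, 0.7])
    deltas = arrival_time_deltas(true_xy) + np.random.normal(0, 1e-6, 4)  # noisy measurements
    print(locate_impact(deltas))   # should recover approximately (0.3, 0.7)
    ```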

  4. Beyond scene gist: Objects guide search more than scene background.

    PubMed

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. A robust sub-pixel edge detection method of infrared image based on tremor-based retinal receptive field model

    NASA Astrophysics Data System (ADS)

    Gao, Kun; Yang, Hu; Chen, Xiaomei; Ni, Guoqiang

    2008-03-01

    Because of the complex thermal objects in an infrared image, the prevalent image edge detection operators are often suited only to certain scenes and sometimes extract overly wide edges. From a biological point of view, image edge detection operators work reliably when assuming a convolution-based receptive field architecture. A DoG (Difference-of-Gaussians) model filter based on an ON-center retinal ganglion cell receptive field architecture, with artificial eye tremors introduced, is proposed for image contour detection. To handle the blurred edges of an infrared image, subsequent orthogonal polynomial interpolation and sub-pixel edge detection in the neighborhood of rough edge pixels are adopted to locate the rough edges at the sub-pixel level. Numerical simulations show that this method can locate the target edge accurately and robustly.
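    As a minimal sketch of the ON-center Difference-of-Gaussians filtering described above (with illustrative sigma values, not the paper's parameters):

    ```python
    # Minimal sketch of an ON-center Difference-of-Gaussians (DoG) response.
    from scipy.ndimage import gaussian_filter

    def dog_response(image, sigma_center=1.0, sigma_surround=3.0):
        """ON-center DoG response: narrow 'center' blur minus wide 'surround' blur."""
        image = image.astype(float)
        center = gaussian_filter(image, sigma_center)
        surround = gaussian_filter(image, sigma_surround)
        return center - surround        # extrema/zero-crossings indicate contours
    ```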

  6. Fast object reconstruction in block-based compressive low-light-level imaging

    NASA Astrophysics Data System (ADS)

    Ke, Jun; Sui, Dong; Wei, Ping

    2014-11-01

    In this paper we propose a simple yet effective and efficient method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our approach is formulated in a confidence selection framework, which allows our system to recover from drift and partly deal with the occlusion problem. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline classifier is trained to obtain the object appearance information at the category level. When the video stream arrives, the pre-trained offline classifier is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three parts: an online tracking part, an offline detection part, and a confidence judgment part. The online tracking part captures the specific target appearance information, while the detection part localizes the object based on the pre-trained offline classifier. Since there is no data dependence between online tracking and offline detection, these two parts run in parallel to significantly improve the processing speed. A confidence selection mechanism is proposed to optimize the object location. Besides, we also propose a simple mechanism to judge the absence of the object. If the target is lost, the pre-trained offline classifier is utilized to re-initialize the whole algorithm once the target is re-located. In experiments, we evaluate our method on several challenging video sequences and demonstrate competitive results.

  7. Preprocessing of A-scan GPR data based on energy features

    NASA Astrophysics Data System (ADS)

    Dogan, Mesut; Turhan-Sayan, Gonul

    2016-05-01

    There is an increasing demand for noninvasive real-time detection and classification of buried objects in various civil and military applications. The problem of detection and annihilation of landmines is particularly important due to strong safety concerns. The requirement for a fast real-time decision process is as important as the requirements for high detection rates and low false alarm rates. In this paper, we introduce and demonstrate a computationally simple, time-efficient, energy-based preprocessing approach that can be used in ground penetrating radar (GPR) applications to eliminate reflections from the air-ground boundary and to locate the buried objects, simultaneously, in one easy step. The instantaneous power signals, the total energy values, and the cumulative energy curves are extracted from the A-scan GPR data. The cumulative energy curves, in particular, are shown to be useful for detecting the presence and location of buried objects in a fast and simple way while preserving the spectral content of the original A-scan data for further steps of physics-based target classification. The proposed method is demonstrated using GPR data collected at outdoor test lanes at the facilities of IPA Defense, Ankara. Cylindrically shaped plastic containers were buried in fine-medium sand to simulate buried landmines. These plastic containers were half-filled with ammonium nitrate including metal pins. Results of this pilot study are highly promising and motivate further research on the use of energy-based preprocessing features in the landmine detection problem.
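    The energy features named above (instantaneous power, total energy, cumulative energy curve) are simple to compute for a single A-scan; the sketch below uses a synthetic trace as a placeholder for real GPR data.

    ```python
    # Minimal sketch of energy-based preprocessing features for one A-scan.
    import numpy as np

    def energy_features(ascan):
        """Return instantaneous power, total energy, and normalized cumulative energy."""
        power = ascan.astype(float) ** 2          # instantaneous power
        total_energy = power.sum()
        cumulative = np.cumsum(power) / total_energy
        return power, total_energy, cumulative

    t = np.linspace(0, 1, 512)
    ascan = np.exp(-((t - 0.4) / 0.02) ** 2) * np.sin(2 * np.pi * 80 * t)  # synthetic reflection
    _, _, cum = energy_features(ascan)
    print(np.searchsorted(cum, 0.5))   # sample index by which half of the energy has arrived
    ```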

  8. Magnetic imager and method

    DOEpatents

    Powell, James; Reich, Morris; Danby, Gordon

    1997-07-22

    A magnetic imager 10 includes a generator 18 for practicing a method of applying a background magnetic field over a concealed object, with the object being effective to locally perturb the background field. The imager 10 also includes a sensor 20 for measuring perturbations of the background field to detect the object. In one embodiment, the background field is applied quasi-statically. And, the magnitude or rate of change of the perturbations may be measured for determining location, size, and/or condition of the object.

  9. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision.

    PubMed

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks.

  10. Navigation and Self-Semantic Location of Drones in Indoor Environments by Combining the Visual Bug Algorithm and Entropy-Based Vision

    PubMed Central

    Maravall, Darío; de Lope, Javier; Fuentes, Juan P.

    2017-01-01

    We introduce a hybrid algorithm for the self-semantic location and autonomous navigation of robots using entropy-based vision and visual topological maps. In visual topological maps the visual landmarks are considered as leave points for guiding the robot to reach a target point (robot homing) in indoor environments. These visual landmarks are defined from images of relevant objects or characteristic scenes in the environment. The entropy of an image is directly related to the presence of a unique object or the presence of several different objects inside it: the lower the entropy the higher the probability of containing a single object inside it and, conversely, the higher the entropy the higher the probability of containing several objects inside it. Consequently, we propose the use of the entropy of images captured by the robot not only for the landmark searching and detection but also for obstacle avoidance. If the detected object corresponds to a landmark, the robot uses the suggestions stored in the visual topological map to reach the next landmark or to finish the mission. Otherwise, the robot considers the object as an obstacle and starts a collision avoidance maneuver. In order to validate the proposal we have defined an experimental framework in which the visual bug algorithm is used by an Unmanned Aerial Vehicle (UAV) in typical indoor navigation tasks. PMID:28900394
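    The image-entropy criterion used above for landmark detection and obstacle avoidance reduces to the Shannon entropy of the gray-level histogram; the sketch below assumes an 8-bit grayscale frame and an illustrative bin count.

    ```python
    # Minimal sketch of the Shannon entropy of an image's gray-level distribution:
    # lower for frames dominated by a single object, higher for cluttered frames.
    import numpy as np

    def image_entropy(gray_image, bins=256):
        """Shannon entropy (bits) of an 8-bit grayscale image."""
        hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255))
        p = hist / hist.sum()
        p = p[p > 0]                       # ignore empty bins
        return float(-(p * np.log2(p)).sum())
    ```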

  11. Using pattern recognition to automatically localize reflection hyperbolas in data from ground penetrating radar

    NASA Astrophysics Data System (ADS)

    Maas, Christian; Schmalzl, Jörg

    2013-08-01

    Ground Penetrating Radar (GPR) is used for the localization of supply lines, land mines, pipes, and many other buried objects. These objects can be recognized in the recorded data as reflection hyperbolas with a typical shape depending on the depth and material of the object and the surrounding material. To obtain the parameters, the shape of the hyperbola has to be fitted. In recent years, several methods have been developed to automate this task during post-processing. In this paper we show another approach for the automated localization of reflection hyperbolas in GPR data by solving a pattern recognition problem in grayscale images. In contrast to other methods, our detection program is also able to immediately mark potential objects in real time. For this task we use a version of the Viola-Jones learning algorithm, which is part of the open source library "OpenCV". This algorithm was initially developed for face recognition, but it can be adapted to any other simple shape. In our program it is used to narrow down the location of reflection hyperbolas to certain areas in the GPR data. In order to extract the exact location and the velocity of the hyperbolas, we apply a simple Hough Transform for hyperbolas. Because the Viola-Jones algorithm dramatically reduces the input to the computationally expensive Hough Transform, the detection system can also be implemented on normal field computers, so on-site application is possible. The developed detection system shows promising results and detection rates in unprocessed radargrams. In order to improve the detection results and apply the program to noisy radar images, more data from different GPR systems are needed as input for the learning algorithm.
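    The cascade-detection step can be sketched with OpenCV's standard cascade API; note that "hyperbola_cascade.xml" below is a hypothetical cascade trained on hyperbola patches (OpenCV does not ship one), and the scale and neighbor parameters are assumptions.

    ```python
    # Minimal sketch of cascade-based candidate detection in a grayscale B-scan.
    import cv2

    cascade = cv2.CascadeClassifier("hyperbola_cascade.xml")        # hypothetical trained cascade
    radargram = cv2.imread("bscan.png", cv2.IMREAD_GRAYSCALE)       # hypothetical B-scan image

    # Candidate regions likely to contain reflection hyperbolas.
    candidates = cascade.detectMultiScale(radargram, scaleFactor=1.1, minNeighbors=4)
    for (x, y, w, h) in candidates:
        cv2.rectangle(radargram, (x, y), (x + w, y + h), 255, 2)    # mark candidates
    cv2.imwrite("bscan_candidates.png", radargram)
    ```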

  12. A fast automatic target detection method for detecting ships in infrared scenes

    NASA Astrophysics Data System (ADS)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas like defense, security, and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage is introduced. For the post-processing stage, two different methods are used. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected by using the Hough transform, and detection results located above the waterline by a small margin are rejected. After the post-processing stage, there are still undesired holes remaining, which cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
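    A generic illustration of the reconstruction-plus-thresholding pipeline described above is sketched below using scikit-image; the seed offset, threshold rule, and minimum area are assumptions, not the paper's settings.

    ```python
    # Minimal sketch of grayscale morphological reconstruction followed by
    # thresholding and connected-component labeling for candidate regions.
    import numpy as np
    from skimage.morphology import reconstruction
    from skimage.measure import label, regionprops

    def candidate_regions(ir_image, seed_offset=30, min_area=20):
        """Return bounding boxes of bright candidate regions in an IR image."""
        img = ir_image.astype(float)
        seed = np.clip(img - seed_offset, img.min(), None)     # seed kept below the mask
        background = reconstruction(seed, img, method='dilation')
        residual = img - background                             # small bright structures remain
        mask = residual > residual.mean() + 2 * residual.std()  # simple automatic threshold
        return [r.bbox for r in regionprops(label(mask)) if r.area >= min_area]
    ```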

  13. Tracking Object Existence From an Autonomous Patrol Vehicle

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
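    A Bayesian update of a probability of existence of the general kind described above can be sketched in a few lines; the detection and false-alarm probabilities below are assumed values for illustration, not those used in the system.

    ```python
    # Minimal sketch of a Bayesian update of an object's probability of existence
    # given whether it was detected while inside the sensor's field of view.
    def update_existence(p_exist, detected, in_fov, p_detect=0.8, p_false=0.05):
        """Return the posterior probability that the object truly exists."""
        if not in_fov:
            return p_exist                       # no information gained this step
        if detected:
            likelihood_exist, likelihood_not = p_detect, p_false
        else:
            likelihood_exist, likelihood_not = 1 - p_detect, 1 - p_false
        numerator = likelihood_exist * p_exist
        return numerator / (numerator + likelihood_not * (1 - p_exist))
    ```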

  14. Development of an optical fiber interferometer for detection of surface flaws in aluminum

    NASA Technical Reports Server (NTRS)

    Gilbert, John A.

    1991-01-01

    The main objective was to demonstrate the potential of using an optical fiber interferometer (OFI) to detect surface flaws in aluminum samples. Standard ultrasonic excitation was used to generate Rayleigh surface waves. After the waves interacted with a defect, the modified responses were detected using the OFI and the results were analyzed for time-of-flight and frequency content to predict the size and location of the flaws.

  15. Object-based implicit learning in visual search: perceptual segmentation constrains contextual cueing.

    PubMed

    Conci, Markus; Müller, Hermann J; von Mühlenen, Adrian

    2013-07-09

    In visual search, detection of a target is faster when it is presented within a spatial layout of repeatedly encountered nontarget items, indicating that contextual invariances can guide selective attention (contextual cueing; Chun & Jiang, 1998). However, perceptual regularities may interfere with contextual learning; for instance, no contextual facilitation occurs when four nontargets form a square-shaped grouping, even though the square location predicts the target location (Conci & von Mühlenen, 2009). Here, we further investigated potential causes for this interference effect: we show that contextual cueing can reliably occur for targets located within the region of a segmented object, but not for targets presented outside of the object's boundaries. Four experiments demonstrate an object-based facilitation in contextual cueing, with a modulation of context-based learning by relatively subtle grouping cues including closure, symmetry, and spatial regularity. Moreover, the lack of contextual cueing for targets located outside the segmented region was due to an absence of (latent) learning of contextual layouts, rather than to an attentional bias towards the grouped region. Taken together, these results indicate that perceptual segmentation provides a basic structure within which contextual scene regularities are acquired. This in turn argues that contextual learning is constrained by object-based selection.

  16. Buried object remote detection technology for law enforcement

    NASA Astrophysics Data System (ADS)

    del Grande, Nancy K.; Clark, Gregory A.; Durbin, Philip F.; Fields, David J.; Hernandez, Jose E.; Sherwood, Robert J.

    1991-08-01

    A precise airborne temperature-sensing technology to detect buried objects for use by law enforcement is developed. Demonstrations have imaged the sites of buried foundations, walls and trenches; mapped underground waterways and aquifers; and been used to locate underground military objects. The methodology is incorporated in a commercially available, high signal-to-noise, dual-band infrared scanner with real-time, 12-bit digital image processing software and display. The method creates color-coded images based on surface temperature variations of 0.2 °C. Unlike other less-sensitive methods, it maps true (corrected) temperatures by removing the (decoupled) surface emissivity mask equivalent to 1 °C or 2 °C; this mask hinders interpretation of apparent (blackbody) temperatures. Once removed, it is possible to identify surface temperature patterns from small diffusivity changes at buried object sites which heat and cool differently from their surroundings. Objects made of different materials and buried at different depths are identified by their unique spectral, spatial, thermal, temporal, emissivity and diffusivity signatures. The authors have successfully located the sites of buried (inert) simulated land mines 0.1 to 0.2 m deep; sod-covered rock pathways alongside dry ditches, deeper than 0.2 m; pavement covered burial trenches and cemetery structures as deep as 0.8 m; and aquifers more than 6 m and less than 60 m deep. The technology could be adapted for drug interdiction and pollution control. For the former, buried tunnels, underground structures built beneath typical surface structures, roof-tops disguised by jungle canopies, and covered containers used for contraband would be located. For the latter, buried waste containers, sludge migration pathways from faulty containers, and the juxtaposition of groundwater channels, if present, nearby, would be depicted. The precise airborne temperature-sensing technology has a promising potential to detect underground epicenters of smuggling and pollution.

  17. Methods for identification and verification using vacuum XRF system

    NASA Technical Reports Server (NTRS)

    Kaiser, Bruce (Inventor); Schramm, Fred (Inventor)

    2005-01-01

    Apparatus and methods in which one or more elemental taggants that are intrinsically located in an object are detected by x-ray fluorescence analysis under vacuum conditions to identify or verify the object's elemental content for elements with lower atomic numbers. By using x-ray fluorescence analysis, the apparatus and methods of the invention are simple and easy to use, as well as provide detection by a non line-of-sight method to establish the origin of objects, as well as their point of manufacture, authenticity, verification, security, and the presence of impurities. The invention is extremely advantageous because it provides the capability to measure lower atomic number elements in the field with a portable instrument.

  18. Data, data everywhere: detecting spatial patterns in fine-scale ecological information collected across a continent

    Treesearch

    Kevin M. Potter; Frank H. Koch; Christopher M. Oswalt; Basil V. Iannone

    2016-01-01

    Context: Fine-scale ecological data collected across broad regions are becoming increasingly available. Appropriate geographic analyses of these data can help identify locations of ecological concern. Objectives: We present one such approach, spatial association of scalable hexagons (SASH), which identifies locations where ecological phenomena occur at greater...

  19. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.

  20. Object Detection Applied to Indoor Environments for Mobile Robot Navigation

    PubMed Central

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-01-01

    To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests. PMID:27483264

  1. Geometrical characterization of fluorescently labelled surfaces from noisy 3D microscopy data.

    PubMed

    Shelton, Elijah; Serwane, Friedhelm; Campàs, Otger

    2018-03-01

    Modern fluorescence microscopy enables fast 3D imaging of biological and inert systems alike. In many studies, it is important to detect the surface of objects and quantitatively characterize its local geometry, including its mean curvature. We present a fully automated algorithm to determine the location and curvatures of an object from 3D fluorescence images, such as those obtained using confocal or light-sheet microscopy. The algorithm aims at reconstructing surface labelled objects with spherical topology and mild deformations from the spherical geometry with high accuracy, rather than reconstructing arbitrarily deformed objects with lower fidelity. Using both synthetic data with known geometrical characteristics and experimental data of spherical objects, we characterize the algorithm's accuracy over the range of conditions and parameters typically encountered in 3D fluorescence imaging. We show that the algorithm can detect the location of the surface and obtain a map of local mean curvatures with relative errors typically below 2% and 20%, respectively, even in the presence of substantial levels of noise. Finally, we apply this algorithm to analyse the shape and curvature map of fluorescently labelled oil droplets embedded within multicellular aggregates and deformed by cellular forces. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  2. Introduction to the Special Issue on Visual Working Memory

    PubMed Central

    Wolfe, Jeremy M

    2014-01-01

    Objects are not represented individually in visual working memory (VWM), but in relation to the contextual information provided by other memorized objects. We studied whether the contextual information provided by the spatial configuration of all memorized objects is viewpoint-dependent. We ran two experiments asking participants to detect changes in locations between memory and probe for one object highlighted in the probe image. We manipulated the changes in viewpoint between memory and probe (Exp. 1: 0°, 30°, 60°; Exp. 2: 0°, 60°), as well as the spatial configuration visible in the probe image (Exp. 1: full configuration, partial configuration; Exp. 2: full configuration, no configuration). Location change detection was higher with the full spatial configuration than with the partial configuration or with no spatial configuration at viewpoint changes of 0°, thus replicating previous findings on the nonindependent representations of individual objects in VWM. Most importantly, the effect of spatial configurations decreased with increasing viewpoint changes, suggesting a viewpoint-dependent representation of contextual information in VWM. We discuss these findings within the context of this special issue, in particular whether research performed within the slots-versus-resources debate and research on the effects of contextual information might focus on two different storage systems within VWM. PMID:25341647

  3. A formal theory of feature binding in object perception.

    PubMed

    Ashby, F G; Prinzmetal, W; Ivry, R; Maddox, W T

    1996-01-01

    Visual objects are perceived correctly only if their features are identified and then bound together. Illusory conjunctions result when feature identification is correct but an error occurs during feature binding. A new model is proposed that assumes feature binding errors occur because of uncertainty about the location of visual features. This model accounted for data from 2 new experiments better than a model derived from A. M. Treisman and H. Schmidt's (1982) feature integration theory. The traditional method for detecting the occurrence of true illusory conjunctions is shown to be fundamentally flawed. A reexamination of 2 previous studies provided new insights into the role of attention and location information in object perception and a reinterpretation of the deficits in patients who exhibit attentional disorders.

  4. Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder.

    PubMed

    Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang

    2016-10-21

    During moving object detection in an intelligent visual surveillance system, scenarios with complex backgrounds inevitably appear. Traditional methods such as "frame difference" and "optical flow" may not be able to handle these cases well, so we use a modified algorithm for the background modeling. In this paper, edge detection is used to obtain an edge difference image, which improves robustness to illumination variation. A "multi-block temporal-analyzing LBP (Local Binary Pattern)" algorithm then performs the segmentation, and connected-component analysis is finally used to locate the object. We also built a hardware platform whose core consists of DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) boards together with the high-precision intelligent holder.
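
    As a rough illustration of the pipeline described above (a minimal sketch, assuming OpenCV; the LBP segmentation stage is omitted and the frames are synthetic):

      import cv2
      import numpy as np

      prev = np.zeros((240, 320), np.uint8)                  # previous frame (synthetic)
      curr = prev.copy()
      cv2.rectangle(curr, (100, 80), (140, 120), 255, -1)    # a "moving" object appears

      # Edge-difference image: differencing edge maps rather than raw intensities
      # makes the result less sensitive to slow illumination changes.
      edge_diff = cv2.absdiff(cv2.Canny(curr, 50, 150), cv2.Canny(prev, 50, 150))

      mask = cv2.dilate(edge_diff, np.ones((5, 5), np.uint8))
      n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
      for i in range(1, n):                                   # label 0 is the background
          x, y, w, h, area = stats[i]
          if area > 50:                                       # ignore tiny blobs
              print("object bounding box:", (int(x), int(y), int(w), int(h)))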

  5. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are now widely used to collect location information: they are cheaper, require fewer updates, and are less intrusive to privacy. They detect the identity of an object and the time at which it passes a node of the network, but they do not capture the object's exact movement along an edge, which leads to uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects, and a two-level index is presented to provide efficient access to the network and to the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filtering, spatial refinement, temporal filtering and probability calculation. Finally, experiments are carried out on simulated data to study the performance of the index; precision and recall of the result set are defined, and the influence of the query arguments on precision and recall is discussed.

  6. Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs

    PubMed Central

    Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie

    2017-01-01

    Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of the contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than the slow quasi-static component, although both slow/fast components allow localizing the object. PMID:28119582

  7. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  8. Electromagnetic geophysical tunnel detection experiments---San Xavier Mine Facility, Tucson, Arizona

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wayland, J.R.; Lee, D.O.; Shope, S.M.

    1991-02-01

    The objective of this work is to develop a general method for remotely sensing the presence of tunneling activities using one or more boreholes and a combination of surface sources. New techniques for tunnel detection and location of tunnels containing no metal and of tunnels containing only a small diameter wire have been experimentally demonstrated. A downhole magnetic dipole and surface loop sources were used as the current sources. The presence of a tunnel causes a subsurface scattering of the field components created by the source. Ratioing of the measured responses enhanced the detection and location capability over that produced by each of the sources individually. 4 refs., 18 figs., 2 tabs.

  9. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis, and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of the eye-gaze input platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by their function. The indicators are organized hierarchically, so users can select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons and edit boxes, and stores these locations so that the mouse cursor can jump directly to a candidate input object. This enables web browsing at a faster pace.

  10. Shape-based human detection for threat assessment

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.

    2004-07-01

    Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or other objects. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments. Any moving objects including animals can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessment. Shape-based human detection technique has been developed for accurate early threat assessments for open and remote environment. Potential threats are isolated from the static background scene using differential motion analysis and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in database. Power cepstrum technique has been developed to search for the best matched contour in database and to distinguish a human from other objects from different viewing angles and distances.
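
    A hedged sketch of the contour extraction and simplification stage described above, assuming OpenCV and a synthetic motion mask; the tangent-space matching against a shape database and the power-cepstrum search are not reproduced here.

      import cv2
      import numpy as np

      mask = np.zeros((240, 320), np.uint8)
      cv2.ellipse(mask, (160, 120), (30, 70), 0, 0, 360, 255, -1)     # stand-in silhouette

      contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      contour = max(contours, key=cv2.contourArea)

      # Drop redundant points along nearly straight segments before shape comparison.
      simplified = cv2.approxPolyDP(contour, 2.0, True)

      # Turning angles between successive segments: a simple tangent-space-like
      # representation of the closed contour.
      pts = simplified[:, 0, :].astype(float)
      vecs = np.diff(np.vstack([pts, pts[:1]]), axis=0)
      angles = np.arctan2(vecs[:, 1], vecs[:, 0])
      turning = (np.diff(np.concatenate([angles, angles[:1]])) + np.pi) % (2 * np.pi) - np.pi
      print(len(pts), "contour points kept; first turning angles:", np.round(turning[:5], 2))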

  11. Increased Fire and Toxic Contaminant Detection Responsibility by Use of Distributed, Aspirating Sensors

    NASA Technical Reports Server (NTRS)

    Youngblood, Wallace W.

    1990-01-01

    Viewgraphs of increased fire and toxic contaminant detection responsivity by use of distributed, aspirating sensors for space station are presented. Objectives of the concept described are (1) to enhance fire and toxic contaminant detection responsivity in habitable regions of space station; (2) to reduce system weight and complexity through centralized detector/monitor systems; (3) to increase fire signature information from selected locations in a space station module; and (4) to reduce false alarms.

  12. Detection of zone of seepage beneath earthfill dam

    DOT National Transportation Integrated Search

    2008-02-01

    MST proposes to acquire resistivity and self-potential data at the Lake Sherwood earth fill dam site. These geophysical data will be processed, analyzed and interpreted with the objective of locating and mapping seepage pathways that might compromise...

  13. Shallow Water Imaging Sonar System for Environmental Surveying Final Report CRADA No. TC-1130-95

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ng, L. C.; Rosenbaum, H.

    The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use a high-frequency, wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. Reference 1 (also attached) summarizes the statement of work and the scope of collaboration.

  14. Ammonia downstream from HH 80 North

    NASA Technical Reports Server (NTRS)

    Girart, Jose M.; Rodriguez, Luis F.; Anglada, Guillem; Estalella, Robert; Torrelles, Jose M.; Marti, Josep; Pena, Miriam; Ayala, Sandra; Curiel, Salvador; Noriega-Crespo, Alberto

    1994-01-01

    HH 80-81 are two optically visible Herbig-Haro (HH) objects located about 5 minutes south of their exciting source IRAS 18162-2048. Displaced symmetrically to the north of this luminous IRAS source, a possible HH counterpart was recently detected as a radio continuum source with the Very Large Array (VLA). This radio source, HH 80 North, has been proposed to be a member of the Herbig-Haro class since its centimeter flux density, angular size, spectral index, and morphology are all similar to those of HH 80. However, no object has been detected at optical wavelengths at the position of HH 80 North, possibly because of high extinction, and the confirmation of the radio continuum source as an HH object has not been possible. In the prototypical Herbig-Haro objects HH 1 and 2, ammonia emission has been detected downstream of the flow in both objects. This detection has been interpreted as the result of an enhancement in the ammonia emission produced by the radiation field of the shock associated with the HH object. In this Letter we report the detection of the (1,1) and (2,2) inversion transitions of ammonia downstream of HH 80 North. This detection gives strong support to the interpretation of HH 80 North as a heavily obscured HH object. In addition, we suggest that ammonia emission may be a tracer of embedded Herbig-Haro objects in other regions of star formation. A 60 micrometer IRAS source could be associated with HH 80 North and with the ammonia condensation. A tentative explanation for the far-infrared emission as arising in dust heated by the optical and UV radiation of the HH object is presented.

  15. Location cue validity affects inhibition of return of visual processing.

    PubMed

    Wright, R D; Richard, C M

    2000-01-01

    Inhibition-of-return is the process by which visual search for an object positioned among others is biased toward novel rather than previously inspected items. It is thought to occur automatically and to increase search efficiency. We examined this phenomenon by studying the facilitative and inhibitory effects of location cueing on target-detection response times in a search task. The results indicated that facilitation was a reflexive consequence of cueing whereas inhibition appeared to depend on cue informativeness. More specifically, the inhibition-of-return effect occurred only when the cue provided no information about the impending target's location. We suggest that the results are consistent with the notion of two levels of visual processing. The first involves rapid and reflexive operations that underlie the facilitative effects of location cueing on target detection. The second involves a rapid but goal-driven inhibition procedure that the perceiver can invoke if doing so will enhance visual search performance.

  16. Neural Correlates of Divided Attention in Natural Scenes.

    PubMed

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.

  17. Using goal- and grip-related information for understanding the correctness of other's actions: an ERP study.

    PubMed

    van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T

    2012-01-01

    Detecting errors in other's actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error-detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of other's actions involving well-known objects (e.g. pouring coffee in a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response, directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip-errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. Thereby this study elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips.

  18. Connecting a cognitive architecture to robotic perception

    NASA Astrophysics Data System (ADS)

    Kurup, Unmesh; Lebiere, Christian; Stentz, Anthony; Hebert, Martial

    2012-06-01

    We present an integrated architecture in which perception and cognition interact and provide information to each other, leading to improved performance in real-world situations. Our system integrates the Felzenszwalb et al. object-detection algorithm with the ACT-R cognitive architecture. The targeted task is to predict and classify pedestrian behavior in a checkpoint scenario, most specifically to discriminate between normal versus checkpoint-avoiding behavior. The Felzenszwalb algorithm is a learning-based algorithm for detecting and localizing objects in images. ACT-R is a cognitive architecture that has been successfully used to model human cognition with a high degree of fidelity on tasks ranging from basic decision-making to the control of complex systems such as driving or air traffic control. The Felzenszwalb algorithm detects pedestrians in the image and provides ACT-R with a set of features based primarily on their locations. ACT-R uses its pattern-matching capabilities, specifically its partial-matching and blending mechanisms, to track objects across multiple images and classify their behavior based on the sequence of observed features. ACT-R also provides feedback to the Felzenszwalb algorithm in the form of expected object locations that allow the algorithm to eliminate false positives and improve its overall performance. This capability is an instance of the benefits pursued in developing a richer interaction between bottom-up perceptual processes and top-down goal-directed cognition. We trained the system on individual behaviors (only one person in the scene) and evaluated its performance across single and multiple behavior sets.

  19. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
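
    The paper's own projection equations are not given in the abstract; the following simplified sketch only illustrates the general idea of a spherical head model, in which yaw can be estimated from the horizontal offset of the eyes-mouth centroid relative to the head centre. All coordinates and the head radius are hypothetical.

      import numpy as np

      def estimate_yaw(left_eye, right_eye, mouth, head_center_x, head_radius_px):
          """Rough yaw estimate (degrees) from 2D pixel coordinates of facial features."""
          centroid_x = (left_eye[0] + right_eye[0] + mouth[0]) / 3.0
          # On a spherical head, a frontal feature at yaw angle a projects at an
          # offset of roughly R*sin(a) from the head centre in the image plane.
          offset = np.clip((centroid_x - head_center_x) / head_radius_px, -1.0, 1.0)
          return float(np.degrees(np.arcsin(offset)))        # 0 degrees = frontal view

      print(estimate_yaw((130, 100), (170, 100), (150, 140),
                         head_center_x=150, head_radius_px=60))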

  20. Object Locating System

    NASA Technical Reports Server (NTRS)

    Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)

    2000-01-01

    A portable system is provided that is operational for determining, with three-dimensional resolution, the position of a buried object or an approximately positioned object that may move in space, air or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring the phase thereof with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located is used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent positions of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
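
    A minimal sketch of the position solution implied above: given distances from several receivers (derived elsewhere from the measured phases and the medium's permittivity and conductivity), the target coordinates can be recovered by nonlinear least squares. This is an illustration, not the patent's own iteration scheme, and the receiver geometry is hypothetical.

      import numpy as np
      from scipy.optimize import least_squares

      receivers = np.array([[0.0, 0.0, 0.0],      # receiver coordinates (metres, hypothetical)
                            [5.0, 0.0, 0.0],
                            [0.0, 5.0, 0.0],
                            [0.0, 0.0, 5.0]])
      true_pos = np.array([1.2, 2.5, 0.8])
      distances = np.linalg.norm(receivers - true_pos, axis=1)   # stand-ins for phase-derived ranges

      def residuals(p):
          # Difference between predicted and measured receiver-to-target distances.
          return np.linalg.norm(receivers - p, axis=1) - distances

      sol = least_squares(residuals, x0=np.zeros(3))
      print("estimated position:", np.round(sol.x, 3))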

  1. THREAT ENSEMBLE VULNERABILITY ASSESSMENT ...

    EPA Pesticide Factsheets

    Software and manual. TEVA-SPOT is used by water utilities to optimize the number and location of contamination detection sensors so that economic and/or public health consequences are minimized. TEVA-SPOT is interactive, allowing a user to specify the minimization objective (e.g., the number of people exposed, the time to detection, or the extent of pipe length contaminated). It also allows a user to specify constraints. For example, a TEVA-SPOT user can employ expert knowledge during the design process by identifying either existing or unfeasible sensor locations. Installation and maintenance costs for sensor placement can also be factored into the analysis. Python and Java are required to run TEVA-SPOT.

  2. Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder

    PubMed Central

    Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang

    2016-01-01

    During moving object detection in an intelligent visual surveillance system, scenarios with complex backgrounds inevitably appear. Traditional methods such as “frame difference” and “optical flow” may not be able to handle these cases well, so we use a modified algorithm for the background modeling. In this paper, edge detection is used to obtain an edge difference image, which improves robustness to illumination variation. A “multi-block temporal-analyzing LBP (Local Binary Pattern)” algorithm then performs the segmentation, and connected-component analysis is finally used to locate the object. We also built a hardware platform whose core consists of DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) boards together with the high-precision intelligent holder. PMID:27775671

  3. Determination of debris albedo from visible and infrared brightnesses

    NASA Astrophysics Data System (ADS)

    Lambert, John V.; Osteen, Thomas J.; Kraszewski, Butch

    1993-09-01

    The Air Force Phillips Laboratory is conducting measurements to characterize the orbital debris environment using wide-field optical systems located at the Air Force's Maui, Hawaii, Space Surveillance Site. Conversion of the observed visible brightnesses of detected debris objects to physical sizes requires knowledge of the albedo (reflectivity). A thermal model for small debris objects has been developed and is used to calculate albedos from simultaneous visible and thermal infrared observations of catalogued debris objects. The model and initial results will be discussed.

  4. The Optical Gravitational Lensing Experiment. Small Amplitude Variable Red Giants in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Soszynski, I.; Udalski, A.; Kubiak, M.; Szymanski, M.; Pietrzynski, G.; Zebrun, K.; Szewczyk, O.; Wyrzykowski, L.

    2004-06-01

    We present an analysis of a large sample of variable red giants from the Large and Small Magellanic Clouds detected during the second phase of the Optical Gravitational Lensing Experiment (OGLE-II) and supplemented with OGLE-III photometry. Comparing the pulsation properties of the detected objects, we find that they constitute two groups with clearly distinct features. In this paper we analyze in detail the small amplitude variable red giants (about 15400 and 3000 objects in the LMC and SMC, respectively). The vast majority of these objects are multi-periodic. At least 30% of them exhibit two modes closely spaced in the power spectrum, which likely indicates non-radial oscillations. About 50% exhibit an additional so-called Long Secondary Period. To distinguish between AGB and RGB red giants we compare PL diagrams of multi-periodic red giants located above and below the tip of the Red Giant Branch (TRGB). The giants above the TRGB form four parallel ridges in the PL diagram. Among the much more numerous sample of giants below the TRGB we find objects located on the low-luminosity extensions of these ridges, but most of the stars are located on ridges slightly shifted in log P. We interpret the former as second ascent AGB red giants and the latter as first ascent RGB objects. Thus, we empirically show that the pulsating red giants fainter than the TRGB are a mixture of RGB and AGB giants. Finally, we compare the Petersen diagrams of the LMC, SMC and Galactic bulge variable red giants and find that they are basically identical, indicating that the variable red giants in all these different stellar environments share similar pulsation properties.

  5. The Use of Neutron Analysis Techniques for Detecting The Concentration And Distribution of Chloride Ions in Archaeological Iron

    PubMed Central

    Watkinson, D; Rimmer, M; Kasztovszky, Z; Kis, Z; Maróti, B; Szentmiklósi, L

    2014-01-01

    Chloride (Cl) ions diffuse into iron objects during burial and drive corrosion after excavation. Located under corrosion layers, Cl is inaccessible to many analytical techniques. Neutron analysis offers non-destructive avenues for determining Cl content and distribution in objects. A pilot study used prompt gamma activation analysis (PGAA) and prompt gamma activation imaging (PGAI) to analyse the bulk concentration and longitudinal distribution of Cl in archaeological iron objects. This correlated with the object corrosion rate measured by oxygen consumption, and compared well with Cl measurement using a specific ion meter. High-Cl areas were linked with visible damage to the corrosion layers and attack of the iron core. Neutron techniques have significant advantages in the analysis of archaeological metals, including penetration depth and low detection limits. PMID:26028670

  6. Corollary discharge contributes to perceived eye location in monkeys

    PubMed Central

    Cavanaugh, James; FitzGibbon, Edmond J.; Wurtz, Robert H.

    2013-01-01

    Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do. PMID:23986562

  7. Corollary discharge contributes to perceived eye location in monkeys.

    PubMed

    Joiner, Wilsaan M; Cavanaugh, James; FitzGibbon, Edmond J; Wurtz, Robert H

    2013-11-01

    Despite saccades changing the image on the retina several times per second, we still perceive a stable visual world. A possible mechanism underlying this stability is that an internal retinotopic map is updated with each saccade, with the location of objects being compared before and after the saccade. Psychophysical experiments have shown that humans derive such location information from a corollary discharge (CD) accompanying saccades. Such a CD has been identified in the monkey brain in a circuit extending from superior colliculus to frontal cortex. There is a missing piece, however. Perceptual localization is established only in humans and the CD circuit only in monkeys. We therefore extended measurement of perceptual localization to the monkey by adapting the target displacement detection task developed in humans. During saccades to targets, the target disappeared and then reappeared, sometimes at a different location. The monkeys reported the displacement direction. Detections of displacement were similar in monkeys and humans, but enhanced detection of displacement from blanking the target at the end of the saccade was observed only in humans, not in monkeys. Saccade amplitude varied across trials, but the monkey's estimates of target location did not follow that variation, indicating that eye location depended on an internal CD rather than external visual information. We conclude that monkeys use a CD to determine their new eye location after each saccade, just as humans do.

  8. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies.

    PubMed

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S; Moses, William W; Qi, Jinyi

    2018-03-16

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1-1.3 over the TOF 500 ps and 1.5-1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.
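
    For readers unfamiliar with the CHO figure of merit, a small self-contained sketch of how a channelized Hotelling observer SNR can be computed from signal-present and signal-absent image ensembles; the difference-of-Gaussians channels and all parameters below are assumptions, not those used in the study.

      import numpy as np

      rng = np.random.default_rng(0)
      npix, nimg = 32, 500
      xx, yy = np.meshgrid(np.arange(npix) - npix / 2, np.arange(npix) - npix / 2)
      r2 = xx**2 + yy**2

      # Difference-of-Gaussians channels at a few radial scales (an assumed channel model).
      sigmas = [1.5, 3.0, 6.0, 12.0]
      channels = np.stack([np.exp(-r2 / (2 * s**2)) - np.exp(-r2 / (2 * (1.66 * s)**2))
                           for s in sigmas]).reshape(len(sigmas), -1).T   # (npix*npix, nch)

      signal = 0.5 * np.exp(-r2 / (2 * 2.0**2)).ravel()          # a small synthetic lesion
      absent = rng.normal(0.0, 1.0, (nimg, npix * npix))          # signal-absent ensemble
      present = absent + signal                                   # signal-present ensemble

      v_a, v_p = absent @ channels, present @ channels            # channel outputs
      S = 0.5 * (np.cov(v_a, rowvar=False) + np.cov(v_p, rowvar=False))
      dv = v_p.mean(axis=0) - v_a.mean(axis=0)
      cho_snr = np.sqrt(dv @ np.linalg.solve(S, dv))              # SNR^2 = dv' S^-1 dv
      print("CHO SNR:", round(float(cho_snr), 3))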

  9. Lesion detection and quantification performance of the Tachyon-I time-of-flight PET scanner: phantom and human studies

    NASA Astrophysics Data System (ADS)

    Zhang, Xuezhu; Peng, Qiyu; Zhou, Jian; Huber, Jennifer S.; Moses, William W.; Qi, Jinyi

    2018-03-01

    The first generation Tachyon PET (Tachyon-I) is a demonstration single-ring PET scanner that reaches a coincidence timing resolution of 314 ps using LSO scintillator crystals coupled to conventional photomultiplier tubes. The objective of this study was to quantify the improvement in both lesion detection and quantification performance resulting from the improved time-of-flight (TOF) capability of the Tachyon-I scanner. We developed a quantitative TOF image reconstruction method for the Tachyon-I and evaluated its TOF gain for lesion detection and quantification. Scans of either a standard NEMA torso phantom or healthy volunteers were used as the normal background data. Separately scanned point source and sphere data were superimposed onto the phantom or human data after accounting for the object attenuation. We used the bootstrap method to generate multiple independent noisy datasets with and without a lesion present. The signal-to-noise ratio (SNR) of a channelized Hotelling observer (CHO) was calculated for each lesion size and location combination to evaluate the lesion detection performance. The bias versus standard deviation trade-off of each lesion uptake was also calculated to evaluate the quantification performance. The resulting CHO-SNR measurements showed improved performance in lesion detection with better timing resolution. The detection performance was also dependent on the lesion size and location, in addition to the background object size and shape. The results of bias versus noise trade-off showed that the noise (standard deviation) reduction ratio was about 1.1–1.3 over the TOF 500 ps and 1.5–1.9 over the non-TOF modes, similar to the SNR gains for lesion detection. In conclusion, this Tachyon-I PET study demonstrated the benefit of improved time-of-flight capability on lesion detection and ROI quantification for both phantom and human subjects.

  10. Development of a two wheeled self balancing robot with speech recognition and navigation algorithm

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Muhaimin; Ashik-E-Rasul, Haq, Nowab. Md. Aminul; Hassan, Mehedi; Hasib, Irfan Mohammad Al; Hassan, K. M. Rafidh

    2016-07-01

    This paper discusses the modeling, construction and navigation-algorithm development of a two-wheeled self-balancing mobile robot operating in an enclosure. We describe the design of the main controller, based on PID algorithms, on the robot model; simulation is performed in the SIMULINK environment. The controller is developed primarily for self-balancing of the robot and also for its positioning. For navigation in an enclosure, a template matching algorithm is proposed for precise measurement of the robot position; the navigation system needs to be calibrated before the navigation process starts. Almost all earlier template matching algorithms found in the open literature can only trace the robot, whereas the algorithm proposed here can also locate other objects in the enclosure, such as furniture and tables, enabling the robot to know the exact location of every stationary object. Moreover, additional features such as Speech Recognition and Object Detection are added. For Object Detection, the single-board computer Raspberry Pi is used: the system analyzes images captured via the camera, which are then processed through background subtraction, followed by active noise reduction.
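
    A minimal sketch of template-matching localization (assuming OpenCV; the calibration step and the robot-specific navigation logic are omitted, and the frame and template here are synthetic):

      import cv2
      import numpy as np

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 255, (240, 320), dtype=np.uint8)     # synthetic camera frame
      template = frame[100:140, 150:200].copy()                    # a stored landmark patch

      result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
      _, max_val, _, max_loc = cv2.minMaxLoc(result)
      if max_val > 0.8:                                            # confidence threshold
          print("landmark found at", max_loc, "score", round(max_val, 2))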

  11. Diver-based integrated navigation/sonar sensor

    NASA Astrophysics Data System (ADS)

    Lent, Keith H.

    1999-07-01

    Two diver based systems, the Small Object Locating Sonar (SOLS) and the Integrated Navigation and Sonar Sensor (INSS) have been developed at Applied Research Laboratories, the University of Texas at Austin (ARL:UT). They are small and easy to use systems that allow a diver to: detect, classify, and identify underwater objects; render large sector visual images; and track, map and reacquire diver location, diver path, and target locations. The INSS hardware consists of a unique, simple, single beam high resolution sonar, an acoustic navigation systems, an electronic depth gauge, compass, and GPS and RF interfaces, all integrated with a standard 486 based PC. These diver sonars have been evaluated by the very shallow water mine countermeasure detachment since spring 1997. Results are very positive, showing significantly greater capabilities than current diver held systems. For example, the detection ranges are increased over existing systems, and the system allows the divers to classify mines at a significant stand off range. As a result, the INSS design has been chosen for acquisition as the next generation diver navigation and sonar system. The EDMs for this system will be designed and built by ARL:UT during 1998 and 1999 with production planned in 2000.

  12. Distribution majorization of corner points by reinforcement learning for moving object detection

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a freely moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by preceding and subsequent frames can also be used. We exploit this information to focus on more informative areas and ignore uninformative ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the detection performance is regarded as the state. Corner points are assigned to blocks that are separated from the original whole image. Experimentally, we select a conventional method that uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework and apply our algorithm to improve its results. The comparison between the conventional method and the same method augmented with our algorithm shows that our algorithm reduces false detections by 70%.

  13. A focus of attention mechanism for gaze control within a framework for intelligent image analysis tools

    NASA Astrophysics Data System (ADS)

    Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.

    2004-05-01

    Focus of attention is often attributed to the biological vision system, in which the entire field of view is first monitored and attention is then focused on the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive and concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object based on the user's object description. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate the shape descriptor parameters. These and the color information are matched with the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.
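
    The following sketch illustrates one plausible form of the colour-clustering front end described above (a hedged example using k-means on pixel colours; the framework's dynamically generated algorithms and the shape-descriptor matching are not reproduced, and the image and target colour are placeholders):

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      image = rng.integers(0, 255, (120, 160, 3), dtype=np.uint8)  # placeholder colour frame
      target_colour = np.array([200.0, 30.0, 30.0])                # e.g. a user-described red object

      pixels = image.reshape(-1, 3).astype(float)
      km = KMeans(n_clusters=5, n_init=4, random_state=0).fit(pixels)
      best = int(np.argmin(np.linalg.norm(km.cluster_centers_ - target_colour, axis=1)))
      mask = (km.labels_ == best).reshape(image.shape[:2])          # candidate object region

      ys, xs = np.nonzero(mask)
      if xs.size:
          print("candidate bounding box:", (int(xs.min()), int(ys.min()),
                                            int(xs.max()), int(ys.max())), "area:", xs.size)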

  14. Multiple-object permanence tracking: limitation in maintenance and transformation of perceptual objects.

    PubMed

    Saiki, Jun

    2002-01-01

    Research on change blindness and transsaccadic memory revealed that a limited amount of information is retained across visual disruptions in visual working memory. It has been proposed that visual working memory can hold four to five coherent object representations. To investigate their maintenance and transformation in dynamic situations, I devised an experimental paradigm called multiple-object permanence tracking (MOPT) that measures memory for multiple feature-location bindings in dynamic situations. Observers were asked to detect any color switch in the middle of a regular rotation of a pattern with multiple colored disks behind an occluder. The color-switch detection performance dramatically declined as the pattern rotation velocity increased, and this effect of object motion was independent of the number of targets. The MOPT task with various shapes and colors showed that color-shape conjunctions are not available in the MOPT task. These results suggest that even completely predictable motion severely reduces our capacity of object representations, from four to only one or two.

  15. Automatic Earthquake Detection and Location by Waveform coherency in Alentejo (South Portugal) Using CatchPy

    NASA Astrophysics Data System (ADS)

    Custodio, S.; Matos, C.; Grigoli, F.; Cesca, S.; Heimann, S.; Rio, I.

    2015-12-01

    Seismic data processing is currently undergoing a step change, benefitting from high-volume datasets and advanced computer power. In the last decade, a permanent seismic network of 30 broadband stations, complemented by dense temporary deployments, covered mainland Portugal. This outstanding regional coverage currently enables the computation of a high-resolution image of the seismicity of Portugal, which contributes to fitting together the pieces of the regional seismo-tectonic puzzle. Although traditional manual inspections are valuable to refine automatic results, they are impracticable with the big data volumes now available; when conducted alone they are also less objective, since the criteria are defined by the analyst. In this work we present CatchPy, a scanning algorithm to detect earthquakes in continuous datasets. Our main goal is to implement an automatic earthquake detection and location routine in order to quickly process large data sets while at the same time detecting low-magnitude earthquakes (i.e. lowering the detection threshold). CatchPy is designed to produce an event database that can be easily located using existing location codes (e.g. Grigoli et al. 2013, 2014). We use CatchPy to perform automatic detection and location of earthquakes that occurred in the Alentejo region (South Portugal), taking advantage of a dense seismic network deployed in the region for two years during the DOCTAR experiment. Results show that our automatic procedure is particularly suitable for small-aperture networks. Event detection is performed by continuously computing the short-term-average/long-term-average of two different characteristic functions (CFs). For the P phases we use a CF based on the vertical energy trace, while for S phases we use a CF based on the maximum eigenvalue of the instantaneous covariance matrix (Vidale 1991). Seismic event location is performed by waveform coherence analysis, scanning different hypocentral coordinates (Grigoli et al. 2013, 2014). The reliability of automatic detections, phase pickings and locations is tested through quantitative comparison with manual results. This work is supported by project QuakeLoc, reference PTDC/GEO-FIQ/3522/2012.
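
    A simple illustration of the short-term-average/long-term-average (STA/LTA) trigger on an energy characteristic function, as described above; the window lengths, threshold and synthetic trace are illustrative values, not those used by CatchPy.

      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100.0                                          # sampling rate (Hz)
      t = np.arange(0, 60, 1 / fs)
      trace = rng.normal(0, 1, t.size)                    # synthetic vertical-component noise
      trace[3000:3300] += 8 * np.sin(2 * np.pi * 5 * t[3000:3300])   # synthetic event at 30 s

      cf = trace**2                                        # energy characteristic function
      nsta, nlta = int(1 * fs), int(10 * fs)               # 1 s short / 10 s long windows
      sta = np.convolve(cf, np.ones(nsta) / nsta, mode="same")
      lta = np.convolve(cf, np.ones(nlta) / nlta, mode="same") + 1e-12
      ratio = sta / lta

      threshold = 4.0
      triggers = np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold))
      print("trigger times (s):", np.round(triggers / fs, 2))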

  16. Sensor Fusion to Infer Locations of Standing and Reaching Within the Home in Incomplete Spinal Cord Injury.

    PubMed

    Lonini, Luca; Reissman, Timothy; Ochoa, Jose M; Mummidisetty, Chaithanya K; Kording, Konrad; Jayaraman, Arun

    2017-10-01

    The objective of rehabilitation after spinal cord injury is to enable successful function in everyday life and independence at home. Clinical tests can assess whether patients are able to execute functional movements but are limited in assessing such information at home. A prototype system is developed that detects stand-to-reach activities, a movement with important functional implications, at multiple locations within a mock kitchen. Ten individuals with incomplete spinal cord injuries performed a sequence of standing and reaching tasks. The system monitored their movements by combining two sources of information: a triaxial accelerometer, placed on the subject's thigh, detected sitting or standing, and a network of radio frequency tags, wirelessly connected to a wrist-worn device, detected reaching at three locations. A threshold-based algorithm detected execution of the combined tasks and accuracy was measured by the number of correctly identified events. The system was shown to have an average accuracy of 98% for inferring when individuals performed stand-to-reach activities at each tag location within the same room. The combination of accelerometry and tags yielded accurate assessments of functional stand-to-reach activities within a home environment. Optimization of this technology could simplify patient compliance and allow clinicians to assess functional home activities.
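
    A hedged sketch of the threshold-based fusion described above: a thigh-accelerometer-derived inclination angle classifies sitting versus standing, and a stand-to-reach event is reported only when a tag detection arrives while the subject is standing. The signal values, threshold and tag names are hypothetical.

      import numpy as np

      fs = 50                                               # accelerometer sampling rate (Hz)
      # Thigh inclination angle derived from the accelerometer: near-horizontal when
      # sitting, near-vertical when standing (synthetic values).
      thigh_angle = np.concatenate([np.full(5 * fs, 80.0),  # 0-5 s: sitting
                                    np.full(5 * fs, 10.0)]) # 5-10 s: standing
      tag_events = [(2.0, "fridge"), (7.5, "kitchen_shelf")]  # (time in s, tag id) from wrist reader

      standing = thigh_angle < 45.0                         # simple inclination threshold
      for t_evt, tag in tag_events:
          idx = int(t_evt * fs)
          if idx < standing.size and standing[idx]:
              print(f"stand-to-reach detected at {t_evt:.1f} s, location: {tag}")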

  17. Localized Detection of Abandoned Luggage

    NASA Astrophysics Data System (ADS)

    Chang, Jing-Ying; Liao, Huei-Hung; Chen, Liang-Gee

    2010-12-01

    Abandoned luggage represents a potential threat to public safety. Identifying objects as luggage, identifying the owners of such objects, and identifying whether owners have left luggage behind are the three main problems requiring solution. This paper proposes two techniques: "foreground-mask sampling" to detect luggage of arbitrary appearance, and "selective tracking" to locate and track owners based solely on the neighborhood of the luggage. Experimental results demonstrate that once an owner abandons luggage and leaves the scene, the alarm fires within a few seconds. The average processing speed of the approach is 17.37 frames per second, which is sufficient for real-world applications.

  18. A multisensor system for detection and characterization of UXO(MM-0437) - Demonstration Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperikova, Erika; Smith, J.T.; Morrison, H.F.

    2006-06-01

    The Berkeley UXO discriminator (BUD) (Figure 1) is a portable Active Electromagnetic (AEM) system for UXO detection and characterization that quickly determines the location, size, and symmetry properties of a suspected UXO. The BUD comprises three orthogonal transmitters that 'illuminate' a target with fields in three independent directions in order to stimulate the three polarization modes that, in general, characterize the target EM response. In addition, the BUD uses eight pairs of differenced receivers for response recording. Eight receiver coils are placed horizontally along the two diagonals of the upper and lower planes of the two horizontal transmitter loops. These receiver coil pairs are located on symmetry lines through the center of the system and each pair sees identical fields during the on-time of the pulse in all of the transmitter coils. They are wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation) and by canceling the noise contributed by the tilt of the receivers in the Earth's magnetic field, and it greatly enhances receiver sensitivity to the gradients of the target response. The BUD performs target characterization from a single position of the sensor platform above a target. BUD was designed to detect and characterize UXO in the 20 mm to 155 mm size range for depths between 0 and 1 m. The relationship between the object size and the depth at which it can be detected is illustrated in Figure 2. This curve was calculated for BUD assuming that the receiver plane is 20 cm above the ground. Figure 2 shows that, for example, BUD can detect and characterize an object with a 10 cm diameter down to a depth of 90 cm with a depth uncertainty of 10%. Objects buried at depths greater than 1 m have a low probability of detection. With the existing algorithms in the system computer it is not possible to recover the principal polarizabilities of large objects close to the system. Detection of large shallow objects is assured, but at present real-time discrimination for shallow objects is not; post-processing of the field data is required for shape discrimination of large shallow targets. The next generation of BUD software will not have this limitation. Successful application of the inversion algorithm that solves for the target parameters is contingent upon resolution of this limitation. At the moment, the interpretation software is developed for a single object only; in the case of multiple objects the software indicates the presence of a cluster of objects but is unable to provide the characteristics of each individual object.

  19. Using Goal- and Grip-Related Information for Understanding the Correctness of Other’s Actions: An ERP Study

    PubMed Central

    van Elk, Michiel; Bousardt, Roel; Bekkering, Harold; van Schie, Hein T.

    2012-01-01

    Detecting errors in other’s actions is of pivotal importance for joint action, competitive behavior and observational learning. Although many studies have focused on the neural mechanisms involved in detecting low-level errors, relatively little is known about error-detection in everyday situations. The present study aimed to identify the functional and neural mechanisms whereby we understand the correctness of other’s actions involving well-known objects (e.g. pouring coffee in a cup). Participants observed action sequences in which the correctness of the object grasped and the grip applied to a pair of objects were independently manipulated. Observation of object violations (e.g. grasping the empty cup instead of the coffee pot) resulted in a stronger P3-effect than observation of grip errors (e.g. grasping the coffee pot at the upper part instead of the handle), likely reflecting a reorienting response, directing attention to the relevant location. Following the P3-effect, a parietal slow wave positivity was observed that persisted for grip-errors, likely reflecting the detection of an incorrect hand-object interaction. These findings provide new insight into the functional significance of the neurophysiological markers associated with the observation of incorrect actions and suggest that the P3-effect and the subsequent parietal slow wave positivity may reflect the detection of errors at different levels in the action hierarchy. Thereby this study elucidates the cognitive processes that support the detection of action violations in the selection of objects and grips. PMID:22606261

  20. Aerial surveillance based on hierarchical object classification for ground target detection

    NASA Astrophysics Data System (ADS)

    Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo

    2015-03-01

    Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and their ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; in particular, a camera sensor is now integrated. Mounted cameras provide the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, evidence of the target is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object and encoding the samples in a hierarchical tree with different sampling densities; the coding space corresponds to a high-dimensional binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features of the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, proving that the hierarchical algorithm works efficiently under several conditions.

  1. The effect of a finite focal spot size on location dependent detectability in a fan beam CT system

    NASA Astrophysics Data System (ADS)

    Kim, Byeongjoon; Baek, Jongduk

    2017-03-01

    A finite focal spot size is one of the factors that degrade resolution performance in a fan beam CT system. In this work, we investigated the effect of the finite focal spot size on signal detectability. For the evaluation, five spherical objects with diameters of 1 mm, 2 mm, 3 mm, 4 mm, and 5 mm were used. The optical focal spot size viewed at the iso-center was 1 mm (height) × 1 mm (width) with a target angle of 7 degrees, corresponding to an 8.21 mm (i.e., 1 mm / sin (7°)) focal spot length. Simulated projection data were acquired using 8 × 8 sourcelets, and reconstructed by Hanning-weighted filtered backprojection. For each spherical object, the detectability was calculated at (0 mm, 0 mm) and (0 mm, 200 mm) using two image quality metrics: pixel signal-to-noise ratio (SNR) and detection SNR. For all signal sizes, the pixel SNR is higher at the iso-center, since the noise variance at the off-center location is much higher than that at the iso-center due to the backprojection weightings used in direct fan beam reconstruction. In contrast, the detection SNR shows similar values for the different spherical objects, except for the 1 mm and 2 mm diameter spheres. Overall, the results indicate that the resolution loss caused by the finite focal spot size degrades detection performance, especially for small objects less than 2 mm in diameter.
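
    As an illustration of the two metrics compared above (under simplified assumptions, not the paper's simulation setup), pixel SNR can be taken as the peak signal amplitude over the local noise standard deviation, while detection SNR is the prewhitening matched-filter SNR, sqrt(s' K^-1 s), for a known signal s in noise with covariance K:

      import numpy as np

      n = 16
      xx, yy = np.meshgrid(np.arange(n) - n / 2, np.arange(n) - n / 2)
      signal = (xx**2 + yy**2 <= 2.0**2).astype(float).ravel()      # a small disk signal

      # Toy stationary noise covariance with short-range correlations.
      coords = np.stack(np.meshgrid(np.arange(n), np.arange(n)), axis=-1).reshape(-1, 2)
      d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
      K = np.exp(-d / 1.5)

      pixel_snr = signal.max() / np.sqrt(K[0, 0])                    # amplitude over local noise std
      detection_snr = np.sqrt(signal @ np.linalg.solve(K, signal))   # prewhitening matched filter
      print("pixel SNR:", round(float(pixel_snr), 2), " detection SNR:", round(float(detection_snr), 2))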

  2. The role of optimality in characterizing CO2 seepage from geological carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortis, Andrea; Oldenburg, Curtis M.; Benson, Sally M.

    Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this work we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of: (1) the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the main seepage zone; and (4) the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage, nor existing evidence for seepage, requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage, without the need for a detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective, and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas.

  3. Location detection and tracking of moving targets by a 2D IR-UWB radar system.

    PubMed

    Nguyen, Van-Han; Pyun, Jae-Young

    2015-03-19

    In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps, such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination consisting of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which is used to estimate the impulse response from the observation region, is applied for the advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using an actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
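
    The Kalman-filter clutter-reduction idea described above can be illustrated with a minimal sketch: a scalar Kalman filter run independently on each range bin across slow time estimates the quasi-static clutter, which is then subtracted to expose moving targets. This is not the authors' implementation; the noise variances, array shapes, and toy data below are illustrative assumptions.

```python
import numpy as np

def kalman_clutter_reduction(scans, q=1e-4, r=1e-1):
    """Estimate quasi-static clutter per range bin with a scalar Kalman
    filter over slow time and return the clutter-suppressed scans.

    scans : (n_scans, n_bins) array of received IR-UWB waveforms.
    q, r  : process and measurement noise variances (illustrative values).
    """
    n_scans, n_bins = scans.shape
    clutter = np.zeros(n_bins)          # clutter estimate per range bin
    p = np.ones(n_bins)                 # estimate variance per range bin
    out = np.empty_like(scans)
    for k in range(n_scans):
        p = p + q                       # predict: clutter assumed constant
        gain = p / (p + r)              # update with the new scan
        clutter = clutter + gain * (scans[k] - clutter)
        p = (1.0 - gain) * p
        out[k] = scans[k] - clutter     # residual containing target + noise
    return out

# toy usage: static clutter plus a target that moves one bin per scan
rng = np.random.default_rng(0)
scans = rng.normal(0, 0.05, (50, 200)) + 1.0          # clutter level 1.0
for k in range(50):
    scans[k, 60 + k] += 0.5                            # moving target
suppressed = kalman_clutter_reduction(scans)
print(int(np.abs(suppressed[-1]).argmax()))            # strongest residual bin
```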

  4. Long-term object tracking combined offline with online learning

    NASA Astrophysics Data System (ADS)

    Hu, Mengjie; Wei, Zhenzhong; Zhang, Guangjun

    2016-04-01

    We propose a simple yet effective method for long-term object tracking. Different from traditional visual tracking methods, which mainly depend on frame-to-frame correspondence, we combine high-level semantic information with low-level correspondences. Our framework is formulated as a confidence selection framework, which allows our system to recover from drift and partly deal with occlusion. To summarize, our algorithm can be roughly decomposed into an initialization stage and a tracking stage. In the initialization stage, an offline detector is trained to obtain object appearance information at the category level, which is used for detecting the potential target and initializing the tracking stage. The tracking stage consists of three modules: the online tracking module, the detection module, and the decision module. A pretrained detector is used to counteract drift of the online tracker, while the online tracker is used to filter out false positive detections. A confidence selection mechanism is proposed to optimize the object location based on the online tracker and the detector. If the target is lost, the pretrained detector is utilized to reinitialize the whole algorithm once the target is relocated. In experiments, we evaluate our method on several challenging video sequences, and it demonstrates substantial improvement compared with detection or online tracking alone.

  5. LLNL Location and Detection Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, S C; Harris, D B; Anderson, M L

    2003-07-16

    We present two LLNL research projects in the topical areas of location and detection. The first project assesses epicenter accuracy using a multiple-event location algorithm, and the second project employs waveform subspace correlation to detect and identify events at Fennoscandian mines. Accurately located seismic events are the basis of location calibration. A well-characterized set of calibration events enables new Earth model development, empirical calibration, and validation of models. In a recent study, Bondar et al. (2003) developed network coverage criteria for assessing the accuracy of event locations that are determined using single-event, linearized inversion methods. These criteria are conservative and are meant for application to large bulletins where the emphasis is on catalog completeness and any given event location may be improved through detailed analysis or application of advanced algorithms. Relative event location techniques are touted as advancements that may improve absolute location accuracy by (1) ensuring an internally consistent dataset, (2) constraining a subset of events to known locations, and (3) taking advantage of station and event correlation structure. Here we present the preliminary phase of this work, in which we use Nevada Test Site (NTS) nuclear explosions with known locations to test the effect of travel-time model accuracy on relative location accuracy. Like previous studies, we find that reference velocity-model accuracy and relative-location accuracy are highly correlated. We also find that metrics based on the travel-time residuals of relocated events are not reliable for assessing either velocity-model or relative-location accuracy. In the topical area of detection, we develop specialized correlation (subspace) detectors for the principal mines surrounding the ARCES station located in the European Arctic. Our objective is to provide efficient screens for explosions occurring in the mines of the Kola Peninsula (Kovdor, Zapolyarny, Olenogorsk, Khibiny) and the major iron mines of northern Sweden (Malmberget, Kiruna). In excess of 90% of the events detected by the ARCES station are mining explosions, and a significant fraction are from these northern mining groups. The primary challenge in developing waveform correlation detectors is the degree of variation in the source time histories of the shots, which can result in poor correlation among events even in close proximity. Our approach to solving this problem is to use lagged subspace correlation detectors, which offer some prospect of compensating for variation and uncertainty in source time functions.
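
    As a rough illustration of the waveform-correlation idea behind such detectors (a single-template correlator, not the lagged subspace formulation used at LLNL), the sketch below slides a normalized master-event template over a continuous trace and declares a detection when the correlation coefficient exceeds a threshold. The threshold, template, and synthetic trace are illustrative assumptions.

```python
import numpy as np

def correlation_detector(trace, template, threshold=0.8):
    """Return sample offsets where the normalized correlation between a
    master-event template and a continuous trace exceeds the threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * np.sqrt(n))
    detections = []
    for i in range(len(trace) - n + 1):
        w = trace[i:i + n]
        denom = w.std() * np.sqrt(n)
        if denom == 0:
            continue
        cc = np.dot((w - w.mean()) / denom, t)   # Pearson correlation
        if cc >= threshold:
            detections.append((i, round(float(cc), 3)))
    return detections

# toy usage: embed a scaled copy of the template in noise
rng = np.random.default_rng(1)
template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))
trace = rng.normal(0, 0.2, 2000)
trace[700:800] += 0.7 * template
print(correlation_detector(trace, template)[:3])
```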

  6. Thermal inertia mapping of below ground objects and voids

    NASA Astrophysics Data System (ADS)

    Del Grande, Nancy K.; Ascough, Brian M.; Rumpf, Richard L.

    2013-05-01

    Thermal inertia (effusivity) contrast marks the borders of naturally heated below-ground object and void sites. The Dual Infrared Effusivity Computed Tomography (DIRECT) method, patent pending, detects and locates enhanced heat flows from below-ground object and void sites in a given area. DIRECT maps view contrasting surface temperature differences between sites with normal soil and sites with soil disturbed by subsurface, hollow or semi-empty object voids (or air gaps) at varying depths. DIRECT utilizes an empirical database created to optimize the scheduling of daily airborne thermal surveys to view and characterize unseen object and void types, depths and volumes in "blind" areas.

  7. Some of the thousand words a picture is worth.

    PubMed

    Mandler, J M; Johnson, N S

    1976-09-01

    The effects of real-world schemata on recognition of complex pictures were studied. Two kinds of pictures were used: pictures of objects forming real-world scenes and unorganized collections of the same objects. The recognition test employed distractors that varied four types of information: inventory, spatial location, descriptive, and spatial composition. Results emphasized the selective nature of schemata, since superior recognition of one kind of information was offset by loss of another. Spatial location information was better recognized in real-world scenes, and spatial composition information was better recognized in unorganized scenes. Organized and unorganized pictures did not differ with respect to inventory and descriptive information. The longer the pictures were studied, the longer subjects took to recognize them. Reaction time for hits, misses, and false alarms increased dramatically as presentation time increased from 5 to 60 sec. It was suggested that detection of a difference in a distractor terminated search, but that when no difference was detected, an exhaustive search of the available information took place.

  8. Locatable-Body Temperature Monitoring Based on Semi-Active UHF RFID Tags

    PubMed Central

    Liu, Guangwei; Mao, Luhong; Chen, Liying; Xie, Sheng

    2014-01-01

    This paper presents the use of radio-frequency identification (RFID) technology for the real-time remote monitoring of body temperature, while an associated program can determine the location of the body carrying the respective sensor. The RFID chip's internal integrated temperature sensor is used both for human-body temperature detection and as a measurement device, while radio-frequency communication is used to broadcast the temperature information. The adopted RFID location technology makes use of reference tags together with a nearest neighbor localization algorithm and a multiple-antenna time-division multiplexing location system. A graphical user interface (GUI) was developed for collecting temperature and location data for data fusion by using RFID protocols. With a puppy as the test object, temperature detection and localization experiments were carried out. The measured results show that the applied method, when compared with a mercury thermometer for measuring the temperature of the dog, shows good consistency, with an average temperature error of 0.283 °C. When using the associated program over an area of 12.25 m2, the average location error is 0.461 m, which verifies the feasibility of locating the sensor carrier with the proposed program. PMID:24675759
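
    The reference-tag nearest-neighbor localization described above resembles the LANDMARC-style approach: the tracked tag's position is estimated as a weighted average of the k reference tags whose readings across the antennas are most similar to the tracked tag's readings. The sketch below is a minimal, hypothetical version of that idea; the RSSI values, k, and coordinates are invented for illustration and are not from the paper.

```python
import numpy as np

def nearest_neighbor_locate(target_rssi, ref_rssi, ref_xy, k=3):
    """Estimate a tag position from reference tags with known coordinates.

    target_rssi : (n_antennas,) readings of the tracked tag.
    ref_rssi    : (n_refs, n_antennas) readings of the reference tags.
    ref_xy      : (n_refs, 2) known reference-tag coordinates (meters).
    """
    # Euclidean distance in signal space between target and each reference tag
    e = np.linalg.norm(ref_rssi - target_rssi, axis=1)
    nearest = np.argsort(e)[:k]
    # weight each neighbor by inverse squared signal distance
    w = 1.0 / (e[nearest] ** 2 + 1e-9)
    w /= w.sum()
    return w @ ref_xy[nearest]

# toy usage: 4 antennas, 5 reference tags on a small grid
ref_xy = np.array([[0, 0], [0, 2], [2, 0], [2, 2], [1, 1]], float)
ref_rssi = np.array([[-40, -55, -55, -65],
                     [-55, -40, -65, -55],
                     [-55, -65, -40, -55],
                     [-65, -55, -55, -40],
                     [-50, -50, -50, -50]], float)
target_rssi = np.array([-48, -52, -52, -52], float)
print(nearest_neighbor_locate(target_rssi, ref_rssi, ref_xy))
```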

  9. Locatable-body temperature monitoring based on semi-active UHF RFID tags.

    PubMed

    Liu, Guangwei; Mao, Luhong; Chen, Liying; Xie, Sheng

    2014-03-26

    This paper presents the use of radio-frequency identification (RFID) technology for the real-time remote monitoring of body temperature, while an associated program can determine the location of the body carrying the respective sensor. The RFID chip's internal integrated temperature sensor is used both for human-body temperature detection and as a measurement device, while radio-frequency communication is used to broadcast the temperature information. The adopted RFID location technology makes use of reference tags together with a nearest neighbor localization algorithm and a multiple-antenna time-division multiplexing location system. A graphical user interface (GUI) was developed for collecting temperature and location data for data fusion by using RFID protocols. With a puppy as the test object, temperature detection and localization experiments were carried out. The measured results show that the applied method, when compared with a mercury thermometer for measuring the temperature of the dog, shows good consistency, with an average temperature error of 0.283 °C. When using the associated program over an area of 12.25 m2, the average location error is 0.461 m, which verifies the feasibility of locating the sensor carrier with the proposed program.

  10. Failure prediction in ceramic composites using acoustic emission and digital image correlation

    NASA Astrophysics Data System (ADS)

    Whitlow, Travis; Jones, Eric; Przybyla, Craig

    2016-02-01

    The objective of the work performed here was to develop a methodology for linking in-situ detection of localized matrix cracking to the final failure location in continuous fiber reinforced CMCs. First, the initiation and growth of matrix cracking are measured and triangulated via acoustic emission (AE) detection. High amplitude events at relatively low static loads can be associated with initiation of large matrix cracks. When there is a localization of high amplitude events, a measurable effect on the strain field can be observed. Full field surface strain measurements were obtained using digital image correlation (DIC). An analysis using the combination of the AE and DIC data was able to predict the final failure location.

  11. Resonant ultrasound spectroscopy

    DOEpatents

    Migliori, Albert

    1991-01-01

    A resonant ultrasound spectroscopy method provides a unique characterization of an object for use in distinguishing similar objects having physical differences greater than a predetermined tolerance. A resonant response spectrum is obtained for a reference object by placing excitation and detection transducers at any accessible location on the object. The spectrum is analyzed to determine the number of resonant response peaks in a predetermined frequency interval. The distribution of the resonance frequencies is then characterized in a manner effective to form a unique signature of the object. In one characterization, a small frequency interval is defined and stepped through the spectrum frequency range. Subsequent objects are similarly characterized, where the characterizations serve as signatures effective to distinguish objects that differ from the reference object by more than the predetermined tolerance.
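
    The characterization step described above (counting resonance peaks inside a small frequency window that is stepped through the spectrum) can be sketched as follows; the peak-picking criterion, window size, and synthetic spectrum are illustrative assumptions, not details from the patent.

```python
import numpy as np

def peak_count_signature(freqs, amplitude, window=5e3, step=5e3, rel_height=0.1):
    """Build a signature: number of resonance peaks in each small frequency
    window stepped through the measured spectrum."""
    thresh = rel_height * amplitude.max()
    # simple local-maximum peak picking above a relative threshold
    is_peak = (amplitude[1:-1] > amplitude[:-2]) & \
              (amplitude[1:-1] > amplitude[2:]) & (amplitude[1:-1] > thresh)
    peak_freqs = freqs[1:-1][is_peak]
    signature = []
    f = freqs[0]
    while f + window <= freqs[-1]:
        signature.append(int(np.sum((peak_freqs >= f) & (peak_freqs < f + window))))
        f += step
    return np.array(signature)

# toy usage: synthetic spectrum with three Lorentzian resonances
freqs = np.linspace(100e3, 200e3, 5000)
amplitude = sum(1.0 / (1 + ((freqs - f0) / 200.0) ** 2)
                for f0 in (120e3, 121e3, 160e3))
print(peak_count_signature(freqs, amplitude))
```

    Two objects whose signatures differ in more windows than a chosen tolerance would then be declared distinct, mirroring the comparison step in the abstract.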

  12. Inside-the-wall detection of objects with low metal content using the GPR sensor: effects of different wall structures on the detection performance

    NASA Astrophysics Data System (ADS)

    Dogan, Mesut; Yesilyurt, Omer; Turhan-Sayan, Gonul

    2018-04-01

    Ground penetrating radar (GPR) is an ultra-wideband electromagnetic sensor used not only for subsurface sensing but also for the detection of objects which may be hidden behind a wall or inserted within the wall. Such applications of the GPR technology are used in both military and civilian operations such as mine or IED (improvised explosive device) detection, rescue missions after earthquakes and investigation of archeological sites. Detection of concealed objects with low metal content is known to be a challenging problem in general. Use of A-scan, B-scan and C-scan GPR data in combination provides valuable information for target recognition in such applications. In this paper, we study the problem of target detection for potentially explosive objects embedded inside a wall. GPR data is numerically simulated by using an FDTD-based numerical computation tool when dielectric targets and targets with low metal content are inserted into different types of walls. A small size plastic bottle filled with trinitrotoluene (TNT) is used as the target with and without a metal fuse in it. The targets are buried into two different types of wall; a homogeneous brick wall and an inhomogeneous wall constructed by bricks having periodically located air holes in it. Effects of using an inhomogeneous wall structure with internal boundaries are investigated as a challenging scenario, paying special attention to preprocessing.

  13. Object-based change detection: dimension of damage in residential areas of Abu Suruj, Sudan

    NASA Astrophysics Data System (ADS)

    Demharter, Timo; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter

    2011-11-01

    Given the importance of change detection, especially in the field of crisis management, this paper discusses the advantages of object-based change detection. This project and the methods used provide an opportunity to coordinate relief actions strategically. The principal objective of this project was to develop an algorithm which allows rapid detection of damaged and destroyed buildings in the area of Abu Suruj. This Sudanese village is located in West Darfur and has become a victim of civil war. The software eCognition Developer was used to perform an object-based change detection on two panchromatic Quickbird 2 images from two different dates. The first image shows the area before, and the second image shows the area after, the massacres in this region. Classification of the huts of the Sudanese town Abu Suruj was achieved by first segmenting the huts and then classifying them on the basis of geometrical and brightness-related values. The huts were classified as "new", "destroyed" and "preserved" with the help of an automated algorithm. Finally, the results were presented in the form of a map which displays the different conditions of the huts. The accuracy of the project is validated by an accuracy assessment resulting in an overall classification accuracy of 90.50 percent. These change detection results allow aid organizations to provide quick and efficient help where it is needed the most.

  14. Two (or three) is one too many: testing the flexibility of contextual cueing with multiple target locations.

    PubMed

    Zellin, Martina; Conci, Markus; von Mühlenen, Adrian; Müller, Hermann J

    2011-10-01

    Visual search for a target object is facilitated when the object is repeatedly presented within an invariant context of surrounding items ("contextual cueing"; Chun & Jiang, Cognitive Psychology, 36, 28-71, 1998). The present study investigated whether such invariant contexts can cue more than one target location. In a series of three experiments, we showed that contextual cueing is significantly reduced when invariant contexts are paired with two rather than one possible target location, whereas no contextual cueing occurs with three distinct target locations. Closer data inspection revealed that one "dominant" target always exhibited substantially more contextual cueing than did the other, "minor" target(s), which caused negative contextual-cueing effects. However, minor targets could benefit from the invariant context when they were spatially close to the dominant target. In sum, our experiments suggest that contextual cueing can guide visual attention to a spatially limited region of the display, only enhancing the detection of targets presented inside that region.

  15. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that can solve the over-segmentation problem of local threshold segmentation methods. The method effectively takes advantage of visual saliency and local threshold segmentation. Firstly, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, threshold segmentation with the maximum inter-class variance (Otsu) method is adopted for precisely positioning and segmenting the wood surface defects around the coarsely located areas. Lastly, we use mathematical morphology to process the binary images after segmentation, which removes noise and small false objects. Experiments on test images of insect holes, dead knots and sound knots show that the proposed method obtains ideal segmentation results and is superior to the existing segmentation methods based on edge detection, Otsu and threshold segmentation.
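
    A minimal sketch of the two main ingredients named above: a spectral-residual saliency map for coarse localization and an Otsu (maximum inter-class variance) threshold for segmentation. This is a generic illustration, not the paper's implementation; the synthetic image and smoothing sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(image):
    """Spectral residual saliency: remove the smoothed log-amplitude
    spectrum and transform back to the spatial domain."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    residual = log_amp - uniform_filter(log_amp, size=3)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(saliency, sigma=2)

def otsu_threshold(values):
    """Return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=256)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(hist)
    w1 = w0[-1] - w0
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# toy usage: a dark "knot" on a brighter board
rng = np.random.default_rng(2)
board = 0.7 + 0.05 * rng.standard_normal((128, 128))
board[50:70, 60:80] -= 0.3                     # defect region
sal = spectral_residual_saliency(board)
mask = sal > otsu_threshold(sal.ravel())
print(mask.sum(), "salient pixels flagged")
```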

  16. Lymph node detection in IASLC-defined zones on PET/CT images

    NASA Astrophysics Data System (ADS)

    Song, Yihua; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.

    2016-03-01

    Lymph node detection is challenging due to the low contrast between lymph nodes and surrounding soft tissues and to the variation in nodal size and shape. In this paper, we propose several novel ideas which are combined into a system operating on positron emission tomography/computed tomography (PET/CT) images to detect abnormal thoracic nodes. First, our previous Automatic Anatomy Recognition (AAR) approach is modified so that lymph node zones, predominantly following International Association for the Study of Lung Cancer (IASLC) specifications, are modeled as objects arranged in a hierarchy along with key anatomic anchor objects. This fuzzy anatomy model built from diagnostic CT images is then deployed on PET/CT images for automatically recognizing the zones. A novel globular filter (g-filter) to detect blob-like objects over a specified range of sizes is designed to detect the most likely locations and sizes of diseased nodes. Abnormal nodes within each automatically localized zone are subsequently detected via combined use of different items of information at various scales: lymph node zone model poses found at recognition, indicating the geographic layout of node clusters at the global level; the g-filter response, which homes in on and carefully selects node-like globular objects at the node level; and CT and PET gray values, but only within the most plausible nodal regions, for node presence at the voxel level. The models are built from 25 diagnostic CT scans and refined for an object hierarchy based on a separate set of 20 diagnostic CT scans. Node detection is tested on an additional set of 20 PET/CT scans. Our preliminary results indicate node detection sensitivity and specificity of around 90% and 85%, respectively.
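
    The g-filter itself is specific to this work; as a generic stand-in for "detect blob-like objects over a specified range of sizes", the sketch below uses a scale-normalized Laplacian-of-Gaussian response evaluated over several scales, which captures the same idea. The scales, threshold, and synthetic image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter

def blob_detect(image, sigmas=(2, 3, 4, 5), threshold=0.05):
    """Detect bright blob-like objects over a range of sizes using a
    scale-normalized Laplacian of Gaussian (negated so blobs are positive)."""
    responses = np.stack([-s ** 2 * gaussian_laplace(image.astype(float), s)
                          for s in sigmas])
    best = responses.max(axis=0)                 # strongest response over scales
    local_max = (best == maximum_filter(best, size=5)) & (best > threshold)
    return np.argwhere(local_max)

# toy usage: two Gaussian "nodes" of different sizes in a noisy 2-D slice
rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:128, 0:128]
img = 0.02 * rng.standard_normal((128, 128))
for (cy, cx, s) in [(40, 40, 3), (90, 80, 5)]:
    img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s ** 2))
print(blob_detect(img))
```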

  17. Object detection in cinematographic video sequences for automatic indexing

    NASA Astrophysics Data System (ADS)

    Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel

    2003-06-01

    This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot possibly showing a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is not to miss any slate while eliminating long parts of video without slate appearances. The third and fourth steps are statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms to minimize interactive corrections. In a last step, electronic timecodes are read from the slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised too. Issues for future work are to accelerate the system to be faster than real time and to extend the framework to several slate types.

  18. Development of an accurate transmission line fault locator using the global positioning system satellites

    NASA Technical Reports Server (NTRS)

    Lee, Harry

    1994-01-01

    A highly accurate transmission line fault locator based on the traveling-wave principle was developed and successfully operated within B.C. Hydro. A transmission line fault produces a fast-risetime traveling wave at the fault point which propagates along the transmission line. This fault locator system consists of traveling wave detectors located at key substations which detect and time-tag the leading edge of the fault-generated traveling wave as it passes through. A master station gathers the time-tagged information from the remote detectors and determines the location of the fault. Precise time is a key element in the success of this system. This fault locator system derives its timing from the Global Positioning System (GPS) satellites. System tests confirmed the accuracy of locating faults to within the design objective of +/-300 meters.
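
    Once the two line ends have GPS time-tagged the wavefront, the two-ended traveling-wave principle reduces to simple arithmetic: with line length L, propagation velocity v, and arrival times t_A and t_B, the fault lies at d_A = (L + v*(t_B - t_A)) / 2 from end A. The numbers below are invented for illustration and are not from the B.C. Hydro system.

```python
# Two-ended traveling-wave fault location from GPS-time-tagged arrivals.
L = 120_000.0          # line length in meters (illustrative)
v = 2.9e8              # traveling-wave propagation velocity in m/s (illustrative)
t_a = 0.000_000_0      # arrival time at terminal A (s, GPS-synchronized)
t_b = 0.000_137_9      # arrival time at terminal B (s)

d_a = (L + v * (t_b - t_a)) / 2.0   # distance of the fault from terminal A
print(f"fault located {d_a / 1000:.2f} km from terminal A")
```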

  19. Incoherent coincidence imaging of space objects

    NASA Astrophysics Data System (ADS)

    Mao, Tianyi; Chen, Qian; He, Weiji; Gu, Guohua

    2016-10-01

    Incoherent Coincidence Imaging (ICI), which is based on the second- or higher-order correlation of a fluctuating light field, offers great potential compared with standard conventional imaging. However, the deployment of a reference arm limits its practical application in the detection of space objects. In this article, an optical aperture synthesis with electronically connected single-pixel photo-detectors is proposed to remove the reference arm. The correlation in our proposed method is the second-order correlation between the intensity fluctuations observed by any two detectors. With appropriate locations of the single-pixel detectors, this second-order correlation simplifies to the absolute-square Fourier transform of the source and the unknown object. We demonstrate image recovery with Gerchberg-Saxton-like algorithms and investigate the reconstruction quality of our approach. Numerical experiments have been performed to show that both binary and gray-scale objects can be recovered. The proposed method provides an effective approach to promote the detection of space objects and perhaps even exoplanets.
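
    Because the measurement above yields only the squared magnitude of a Fourier transform, a Gerchberg-Saxton-style alternating-projection loop is the natural reconstruction tool. The sketch below is a minimal error-reduction variant with a support constraint; the support, iteration count, and toy object are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def gerchberg_saxton(fourier_magnitude, support, n_iter=200, seed=0):
    """Recover a non-negative object from its Fourier magnitude by alternating
    between the measured magnitude and an object-domain support constraint."""
    rng = np.random.default_rng(seed)
    obj = rng.random(fourier_magnitude.shape) * support
    for _ in range(n_iter):
        F = np.fft.fft2(obj)
        # enforce the measured Fourier magnitude, keep the current phase
        F = fourier_magnitude * np.exp(1j * np.angle(F))
        obj = np.real(np.fft.ifft2(F))
        # enforce object-domain constraints: support and non-negativity
        obj = np.clip(obj, 0, None) * support
    return obj

# toy usage: a binary object, known support, magnitude-only "measurement"
truth = np.zeros((64, 64))
truth[20:30, 25:40] = 1.0
support = np.zeros_like(truth)
support[15:35, 20:45] = 1.0
measured = np.abs(np.fft.fft2(truth))
recon = gerchberg_saxton(measured, support)
print(np.corrcoef(recon.ravel(), truth.ravel())[0, 1])
```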

  20. Microearthquake Studies at the Salton Sea Geothermal Field

    DOE Data Explorer

    Templeton, Dennise

    2013-10-01

    The objective of this project is to detect and locate microearthquakes to aid in the characterization of reservoir fracture networks. Accurate identification and mapping of the large numbers of microearthquakes induced in EGS is one technique that provides diagnostic information when determining the location, orientation and length of underground crack systems for use in reservoir development and management applications. Conventional earthquake location techniques often are employed to locate microearthquakes. However, these techniques require labor-intensive picking of individual seismic phase onsets across a network of sensors. For this project we adapt the Matched Field Processing (MFP) technique to the elastic propagation problem in geothermal reservoirs to identify more and smaller events than traditional methods alone.

  1. Privacy Protection Versus Cluster Detection in Spatial Epidemiology

    PubMed Central

    Olson, Karen L.; Grannis, Shaun J.; Mandl, Kenneth D.

    2006-01-01

    Objectives. Patient data that includes precise locations can reveal patients’ identities, whereas data aggregated into administrative regions may preserve privacy and confidentiality. We investigated the effect of varying degrees of address precision (exact latitude and longitude vs the center points of zip code or census tracts) on detection of spatial clusters of cases. Methods. We simulated disease outbreaks by adding supplementary spatially clustered emergency department visits to authentic hospital emergency department syndromic surveillance data. We identified clusters with a spatial scan statistic and evaluated detection rate and accuracy. Results. More clusters were identified, and clusters were more accurately detected, when exact locations were used. That is, these clusters contained at least half of the simulated points and involved few additional emergency department visits. These results were especially apparent when the synthetic clustered points crossed administrative boundaries and fell into multiple zip code or census tracts. Conclusions. The spatial cluster detection algorithm performed better when addresses were analyzed as exact locations than when they were analyzed as center points of zip code or census tracts, particularly when the clustered points crossed administrative boundaries. Use of precise addresses offers improved performance, but this practice must be weighed against privacy concerns in the establishment of public health data exchange policies. PMID:17018828

  2. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of the object, a count, and derived statistics (count over time) from input video streams. The software can process video streamed over the internet or directly from a hardware device (camera).

  3. Edge detection

    NASA Astrophysics Data System (ADS)

    Hildreth, E. C.

    1985-09-01

    For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
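
    Intensity-change detection of the kind reviewed above is commonly implemented as a Marr-Hildreth-style scheme: smooth the image with a Gaussian, take the Laplacian, and mark zero-crossings as edges. Below is a minimal sketch of that scheme; the smoothing scale and toy image are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth_edges(image, sigma=2.0):
    """Mark edge pixels at zero-crossings of the Laplacian-of-Gaussian."""
    log = gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(image.shape, dtype=bool)
    # a zero-crossing exists where the LoG changes sign between neighbors
    edges[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0
    edges[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0
    return edges

# toy usage: a bright square on a dark background
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
print(marr_hildreth_edges(img).sum(), "edge pixels found")
```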

  4. Coded-aperture imaging of the Galactic center region at gamma-ray energies

    NASA Technical Reports Server (NTRS)

    Cook, Walter R.; Grunsfeld, John M.; Heindl, William A.; Palmer, David M.; Prince, Thomas A.

    1991-01-01

    The first coded-aperture images of the Galactic center region at energies above 30 keV have revealed two strong gamma-ray sources. One source has been identified with the X-ray source IE 1740.7 - 2942, located 0.8 deg away from the nucleus. If this source is at the distance of the Galactic center, it is one of the most luminous objects in the galaxy at energies from 35 to 200 keV. The second source is consistent in location with the X-ray source GX 354 + 0 (MXB 1728-34). In addition, gamma-ray flux from the location of GX 1 + 4 was marginally detected at a level consistent with other post-1980 measurements. No significant hard X-ray or gamma-ray flux was detected from the direction of the Galactic nucleus or from the direction of the recently discovered gamma-ray source GRS 1758-258.

  5. Acoustic Localization with Infrasonic Signals

    NASA Astrophysics Data System (ADS)

    Threatt, Arnesha; Elbing, Brian

    2015-11-01

    Numerous geophysical and anthropogenic events emit infrasonic frequencies (<20 Hz), including volcanoes, hurricanes, wind turbines and tornadoes. These sounds, which cannot be heard by the human ear, can be detected from large distances (in excess of 100 miles) due to low frequency acoustic signals having a very low decay rate in the atmosphere. Thus infrasound could be used for long-range, passive monitoring and detection of these events. An array of microphones separated by known distances can be used to locate a given source, which is known as acoustic localization. However, acoustic localization with infrasound is particularly challenging due to contamination from other signals, sensitivity to wind noise and producing a trusted source for system development. The objective of the current work is to create an infrasonic source using a propane torch wand or a subwoofer and locate the source using multiple infrasonic microphones. This presentation will present preliminary results from various microphone configurations used to locate the source.
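
    Acoustic localization with an array of microphones separated by known distances typically works from time differences of arrival (TDOA). The sketch below is a generic 2-D formulation solved by nonlinear least squares; the sensor layout, sound speed, and delays are invented for illustration and are not from the experiment described above.

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # speed of sound in m/s (temperature-dependent in practice)

def locate_source(mic_xy, tdoa, ref=0):
    """Estimate a 2-D source position from TDOAs relative to microphone `ref`."""
    def residual(xy):
        d = np.linalg.norm(mic_xy - xy, axis=1)
        return (d - d[ref]) - C * tdoa
    return least_squares(residual, x0=mic_xy.mean(axis=0)).x

# toy usage: four microphones, a synthetic source, and exact delays
mic_xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
source = np.array([80.0, 130.0])
d = np.linalg.norm(mic_xy - source, axis=1)
tdoa = (d - d[0]) / C
print(locate_source(mic_xy, tdoa))
```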

  6. Buried object remote detection technology for law enforcement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Grande, N.K.; Clark, G.A.; Durbin, P.F.

    1991-03-01

    We have developed a precise airborne temperature-sensing technology to detect buried objects for use by law enforcement. Demonstrations have imaged the sites of buried foundations, walls and trenches; mapped underground waterways and aquifers; and been used to locate underground military objects. Our patented methodology is incorporated in a commercially available, high signal-to-noise, dual-band infrared scanner with real-time, 12-bit digital image processing software and display. Our method creates color-coded images based on surface temperature variations of 0.2 °C. Unlike other, less sensitive methods, it maps true (corrected) temperatures by removing the (decoupled) surface emissivity mask equivalent to 1 °C or 2 °C; this mask hinders interpretation of apparent (blackbody) temperatures. Once it is removed, we are able to identify surface temperature patterns from small diffusivity changes at buried object sites, which heat and cool differently from their surroundings. Objects made of different materials and buried at different depths are identified by their unique spectral, spatial, thermal, temporal, emissivity and diffusivity signatures. We have successfully located the sites of buried (inert) simulated land mines 0.1 to 0.2 m deep; sod-covered rock pathways alongside dry ditches, deeper than 0.2 m; pavement-covered burial trenches and cemetery structures as deep as 0.8 m; and aquifers more than 6 m and less than 60 m deep. Our technology could be adapted for drug interdiction and pollution control. 16 refs., 14 figs.

  7. Investigation on location dependent detectability in cone beam CT images with uniform and anatomical backgrounds

    NASA Astrophysics Data System (ADS)

    Han, Minah; Baek, Jongduk

    2017-03-01

    We investigate location-dependent lesion detectability in cone beam computed tomography images for different background types (i.e., uniform and anatomical), image planes (i.e., transverse and longitudinal) and slice thicknesses. Anatomical backgrounds are generated using a power law spectrum of breast anatomy, 1/f^3. A spherical object with a 5 mm diameter is used as the signal. CT projection data are acquired by forward projection of the uniform and anatomical backgrounds with and without the signal. Then, the projection data are reconstructed using the FDK algorithm. Detectability is evaluated by a channelized Hotelling observer with dense difference-of-Gaussian channels. For the uniform background, off-centered images yield higher detectability than iso-centered images for the transverse plane, while for the longitudinal plane the detectability of iso-centered and off-centered images is similar. For the anatomical background, off-centered images yield higher detectability for the transverse plane, while iso-centered images yield higher detectability for the longitudinal plane when the slice thickness is smaller than 1.9 mm. The optimal slice thickness is 3.8 mm for all tasks, and the transverse plane at the off-center (iso-center and off-center) produces the highest detectability for the uniform (anatomical) background.
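
    A minimal sketch of the detectability evaluation named above: difference-of-Gaussians channels reduce each image to a small feature vector, and the channelized Hotelling observer SNR is computed from signal-present and signal-absent channel outputs. The channel widths, image statistics, and sample counts below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def dog_channels(shape, sigmas=(2, 4, 8, 16)):
    """Build radially symmetric difference-of-Gaussians channel images."""
    yy, xx = np.indices(shape)
    r2 = (yy - shape[0] // 2) ** 2 + (xx - shape[1] // 2) ** 2
    gauss = [np.exp(-r2 / (2 * s ** 2)) for s in sigmas]
    chans = [g1 / g1.sum() - g0 / g0.sum() for g0, g1 in zip(gauss[:-1], gauss[1:])]
    return np.stack([c.ravel() for c in chans])        # (n_channels, n_pixels)

def cho_snr(signal_present, signal_absent, channels):
    """Channelized Hotelling observer detectability (SNR)."""
    v1 = signal_present.reshape(len(signal_present), -1) @ channels.T
    v0 = signal_absent.reshape(len(signal_absent), -1) @ channels.T
    dv = v1.mean(axis=0) - v0.mean(axis=0)
    S = 0.5 * (np.cov(v1, rowvar=False) + np.cov(v0, rowvar=False))
    w = np.linalg.solve(S, dv)                          # Hotelling template
    return float(np.sqrt(dv @ w))

# toy usage: a small Gaussian signal in white noise
rng = np.random.default_rng(4)
shape = (64, 64)
yy, xx = np.indices(shape)
signal = 0.3 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 3.0 ** 2))
noise = lambda n: rng.standard_normal((n, *shape))
channels = dog_channels(shape)
print(cho_snr(noise(200) + signal, noise(200), channels))
```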

  8. Interplanetary Dust Observations by the Juno MAG Investigation

    NASA Astrophysics Data System (ADS)

    Jørgensen, John; Benn, Mathias; Denver, Troelz; Connerney, Jack; Jørgensen, Peter; Bolton, Scott; Brauer, Peter; Levin, Steven; Oliversen, Ronald

    2017-04-01

    The spin-stabilized and solar powered Juno spacecraft recently concluded a 5-year voyage through the solar system en route to Jupiter, arriving on July 4th, 2016. During the cruise phase from Earth to the Jovian system, the Magnetometer investigation (MAG) operated two magnetic field sensors and four co-located imaging systems designed to provide accurate attitude knowledge for the MAG sensors. One of these four imaging sensors - camera "D" of the Advanced Stellar Compass (ASC) - was operated in a mode designed to detect all luminous objects in its field of view, recording and characterizing those not found in the on-board star catalog. The capability to detect and track such objects ("non-stellar objects", or NSOs) provides a unique opportunity to sense and characterize interplanetary dust particles. The camera's detection threshold was set to MV9 to minimize false detections and discourage tracking of known objects. On-board filtering algorithms selected only those objects tracked through more than 5 consecutive images and moving with an apparent angular rate between 15"/s and 10,000"/s. The coordinates (RA, DEC), intensity, and apparent velocity of such objects were stored for eventual downlink. Direct detection of proximate dust particles is precluded by their large (10-30 km/s) relative velocity and extreme angular rates, but their presence may be inferred using the collecting area of Juno's large ( 55m2) solar arrays. Dust particles impact the spacecraft at high velocity, creating an expanding plasma cloud and ejecta with modest (few m/s) velocities. These excavated particles are revealed in reflected sunlight and tracked moving away from the spacecraft from the point of impact. Application of this novel detection method during Juno's traversal of the solar system provides new information on the distribution of interplanetary (µm-sized) dust.

  9. Multi-Objective Community Detection Based on Memetic Algorithm

    PubMed Central

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels. PMID:25932646

  10. Multi-objective community detection based on memetic algorithm.

    PubMed

    Wu, Peng; Pan, Li

    2015-01-01

    Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.
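
    The label-propagation rule mentioned above as the basis of the local search can be sketched on its own: each node repeatedly adopts the label most common among its neighbors until labels stabilize, yielding a community partition. This is only the propagation ingredient, not the full multi-objective memetic algorithm; the toy graph is an illustrative assumption.

```python
import random
from collections import Counter

def label_propagation(adj, n_rounds=20, seed=0):
    """Propagate labels: every node takes the most frequent label among its
    neighbors; repeat until no label changes (or n_rounds is reached)."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}
    nodes = list(adj)
    for _ in range(n_rounds):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            choice = rng.choice([lab for lab, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v], changed = choice, True
        if not changed:
            break
    return labels

# toy usage: two 4-cliques joined by a single edge
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
       4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6}}
print(label_propagation(adj))
```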

  11. Detection of Common Respiratory Viruses and Mycoplasma pneumoniae in Patient-Occupied Rooms in Pediatric Wards.

    PubMed

    Wan, Gwo-Hwa; Huang, Chung-Guei; Chung, Fen-Fang; Lin, Tzou-Yien; Tsao, Kuo-Chien; Huang, Yhu-Chering

    2016-04-01

    Few studies have assessed viral contamination in the rooms of hospital wards. This cross-sectional study evaluated the air and objects in patient-occupied rooms in pediatric wards for the presence of common respiratory viruses and Mycoplasma pneumoniae.Air samplers were placed at a short (60-80 cm) and long (320 cm) distance from the head of the beds of 58 pediatric patients, who were subsequently confirmed to be infected with enterovirus (n = 17), respiratory syncytial virus (RSV) (n = 13), influenza A virus (n = 13), adenovirus (n = 9), or M pneumoniae (n = 6). Swab samples were collected from the surfaces of 5 different types of objects in the patients' rooms. All air and swab samples were analyzed via real-time quantitative polymerase chain reaction assay for the presence of the above pathogens.All pathogens except enterovirus were detected in the air, on the objects, or in both locations in the patients' rooms. The detection rates of influenza A virus, adenovirus, and M pneumoniae for the long distance air sampling were 15%, 67%, and 17%, respectively. Both adenovirus and M pneumoniae were detected at very high rates, with high concentrations, on all sampled objects.The respiratory pathogens RSV, influenza A virus, adenovirus, and M pneumoniae were detected in the air and/or on the objects in the pediatric ward rooms. Appropriate infection control measures should be strictly implemented when caring for such patients.

  12. Role of early visual cortex in trans-saccadic memory of object features.

    PubMed

    Malik, Pankhuri; Dessing, Joost C; Crawford, J Douglas

    2015-08-01

    Early visual cortex (EVC) participates in visual feature memory and the updating of remembered locations across saccades, but its role in the trans-saccadic integration of object features is unknown. We hypothesized that if EVC is involved in updating object features relative to gaze, feature memory should be disrupted when saccades remap an object representation into a simultaneously perturbed EVC site. To test this, we applied transcranial magnetic stimulation (TMS) over functional magnetic resonance imaging-localized EVC clusters corresponding to the bottom left/right visual quadrants (VQs). During experiments, these VQs were probed psychophysically by briefly presenting a central object (Gabor patch) while subjects fixated gaze to the right or left (and above). After a short memory interval, participants were required to detect the relative change in orientation of a re-presented test object at the same spatial location. Participants either sustained fixation during the memory interval (fixation task) or made a horizontal saccade that either maintained or reversed the VQ of the object (saccade task). Three TMS pulses (coinciding with the pre-, peri-, and postsaccade intervals) were applied to the left or right EVC. This had no effect when (a) fixation was maintained, (b) saccades kept the object in the same VQ, or (c) the EVC quadrant corresponding to the first object was stimulated. However, as predicted, TMS reduced performance when saccades (especially larger saccades) crossed the remembered object location and brought it into the VQ corresponding to the TMS site. This suppression effect was statistically significant for leftward saccades and followed a weaker trend for rightward saccades. These causal results are consistent with the idea that EVC is involved in the gaze-centered updating of object features for trans-saccadic memory and perception.

  13. Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery

    NASA Astrophysics Data System (ADS)

    Abdessetar, M.; Zhong, Y.

    2017-09-01

    Building change detection can quantify the temporal effects on an urban area for urban evolution studies or damage assessment in disaster cases. In this context, change analysis may involve the use of available satellite images with different resolutions for quick responses. In this paper, to avoid the image resampling outcomes and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, it is practical to detect building changes in multi-scale imagery using shape analysis. Therefore, the proposed methodology can deal with different pixel sizes for identifying new and demolished buildings in an urban area using the geometric properties of the objects of interest. After rectifying the desired multi-date and multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, Centroid-Coincident Matching is conducted on the extracted building shapes, based on Euclidean distance measurements between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. Then, new and demolished buildings are identified from the obtained distances that are greater than the RMS value (no match in the same location).
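
    The centroid-matching step can be sketched as a nearest-centroid check in both directions with a distance tolerance: footprints whose centroids have no counterpart within the tolerance in the other date are flagged as demolished or new. The coordinates and tolerance below are illustrative assumptions, not values from the paper (which ties the tolerance to the registration RMS).

```python
import numpy as np

def centroid_match(centroids_t0, centroids_t1, tol):
    """Flag demolished (present in T0 only) and new (present in T1 only)
    buildings by checking nearest centroid distances in both directions."""
    def unmatched(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.where(d.min(axis=1) > tol)[0]
    demolished = unmatched(centroids_t0, centroids_t1)
    new = unmatched(centroids_t1, centroids_t0)
    return demolished, new

# toy usage: one building demolished and one built between the two dates
t0 = np.array([[10.0, 10.0], [50.0, 40.0], [80.0, 15.0]])
t1 = np.array([[10.5, 9.8], [80.2, 14.7], [30.0, 70.0]])
demolished, new = centroid_match(t0, t1, tol=3.0)
print("demolished (T0 indices):", demolished, "new (T1 indices):", new)
```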

  14. Contaminants of emerging concern in the Great Lakes Basin: A report on sediment, water, and fish tissue chemistry collected in 2010-2012

    USGS Publications Warehouse

    Choy, Steven J.; Annis, Mandy L.; Banda, JoAnn; Bowman, Sarah R.; Brigham, Mark E.; Elliott, Sarah M.; Gefell, Daniel J.; Jankowski, Mark D.; Jorgenson, Zachary G.; Lee, Kathy E.; Moore, Jeremy N.; Tucker, William A.

    2017-01-01

    Despite being detected at low levels in surface waters and sediments across the United States, contaminants of emerging concern (CECs) in the Great Lakes Basin are not well characterized in terms of spatial and temporal occurrence. Additionally, although the detrimental effects of exposure to CECs on fish and wildlife have been documented for many CECs in laboratory studies, we do not adequately understand the implications of the presence of CECs in the environment. Based on limited studies using current environmentally relevant concentrations of chemicals, however, risks to fish and wildlife are evident. As a result, there is an increasing urgency to address data gaps that are vital to resource management decisions. The U.S. Fish and Wildlife Service, in collaboration with the U.S. Geological Survey, is leading a Great Lakes Basin-wide evaluation of CECs (CEC Project) with the objectives to (a) characterize the spatial and temporal distribution of CECs; (b) evaluate risks to fish and wildlife resources; and (c) develop tools to aid resource managers in detecting, averting, or minimizing the ecological consequences to fish and wildlife that are exposed to CECs. This report addresses objective (a) of the CEC Project, summarizing sediment and water chemistry data collected from 2010 to 2012 and fish liver tissue chemistry data collected in 2012; characterizes the sampling locations with respect to potential sources of CECs in the landscape; and provides an initial interpretation of the variation in CEC concentrations relative to the identified sources. Data collected during the first three years of our study, which included 12 sampling locations and analysis of 134 chemicals, indicate that contaminants were more frequently detected in sediment than in water. Chemicals classified as alkylphenols, flavors/fragrances, hormones, PAHs, and sterols had higher average detection frequencies in sediment than in water, while the opposite was observed for pesticides, pharmaceuticals, and plasticizers/flame retardants. The St. Louis River and Maumee River sampling locations had the most CEC detections in water and sediment, relative to other sites, as well as the largest number of maximum detected concentrations across all sites in the Basin. No consistent temporal CEC occurrence patterns were observed at locations sampled multiple times each day. Most appearances and increases in chemical concentrations in sediments occurred at sites immediately downstream from wastewater treatment plants and at sites with predominantly developed land use. The location with the most observed appearances and increases was the St. Louis River. Perfluorinated compounds were commonly detected in fish liver tissues, with detections in 100% of both benthic and pelagic species. The occurrence of these chemicals in liver tissue of benthic and pelagic species was generally similar.

  15. A novel procedure for detecting and focusing moving objects with SAR based on the Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Barbarossa, S.; Farina, A.

    A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their kinematic motion parameters. The estimation plays a key role in focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is performed efficiently on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line along which the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.

  16. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision for briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
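
    Perceptual sensitivity (d') in a yes/no detection task like the one above is conventionally computed from hit and false-alarm rates as d' = z(H) - z(F). A minimal sketch with made-up trial counts follows; the counts are purely illustrative, not data from the study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a standard 1/(2N) correction for rates of exactly 0 or 1."""
    z = NormalDist().inv_cdf
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    h = min(max(hits / n_signal, 0.5 / n_signal), 1 - 0.5 / n_signal)
    f = min(max(false_alarms / n_noise, 0.5 / n_noise), 1 - 0.5 / n_noise)
    return z(h) - z(f)

# toy usage: hypothetical counts for a cued vs. an uncued condition
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
print(d_prime(hits=35, misses=15, false_alarms=12, correct_rejections=38))
```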

  17. Red-shouldered hawk occupancy surveys in central Minnesota, USA

    USGS Publications Warehouse

    Henneman, C.; McLeod, M.A.; Andersen, D.E.

    2007-01-01

    Forest-dwelling raptors are often difficult to detect because many species occur at low density or are secretive. Broadcasting conspecific vocalizations can increase the probability of detecting forest-dwelling raptors and has been shown to be an effective method for locating raptors and assessing their relative abundance. Recent advances in statistical techniques based on presence-absence data use probabilistic arguments to derive probability of detection when it is <1 and to provide a model and likelihood-based method for estimating proportion of sites occupied. We used these maximum-likelihood models with data from red-shouldered hawk (Buteo lineatus) call-broadcast surveys conducted in central Minnesota, USA, in 1994-1995 and 2004-2005. Our objectives were to obtain estimates of occupancy and detection probability 1) over multiple sampling seasons (yr), 2) incorporating within-season time-specific detection probabilities, 3) with call type and breeding stage included as covariates in models of probability of detection, and 4) with different sampling strategies. We visited individual survey locations 2-9 times per year, and estimates of both probability of detection (range = 0.28-0.54) and site occupancy (range = 0.81-0.97) varied among years. Detection probability was affected by inclusion of a within-season time-specific covariate, call type, and breeding stage. In 2004 and 2005 we used survey results to assess the effect that number of sample locations, double sampling, and discontinued sampling had on parameter estimates. We found that estimates of probability of detection and proportion of sites occupied were similar across different sampling strategies, and we suggest ways to reduce sampling effort in a monitoring program.
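
    The flavor of the likelihood-based estimation can be sketched as a single-season occupancy model with constant occupancy (psi) and detection probability (p), fit by maximum likelihood to site-by-visit detection histories. The toy data and the constant-p assumption below are illustrative only; the study's models additionally included covariates such as call type and breeding stage.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse logit

def neg_log_lik(params, histories):
    """Negative log-likelihood of detection histories under a single-season
    occupancy model with constant occupancy (psi) and detection (p)."""
    psi, p = expit(params)                    # parameters estimated on the logit scale
    nll = 0.0
    for h in histories:                       # h: 0/1 detections across repeat visits to one site
        h = np.asarray(h)
        k, d = len(h), h.sum()
        lik = psi * p**d * (1 - p) ** (k - d)
        if d == 0:                            # never detected: the site may be unoccupied
            lik += 1 - psi
        nll -= np.log(lik)
    return nll

# Toy detection histories (rows = survey sites, columns = repeat visits).
histories = [[1, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 0], [0, 0, 0]]
fit = minimize(neg_log_lik, x0=[0.0, 0.0], args=(histories,), method="Nelder-Mead")
psi_hat, p_hat = expit(fit.x)
print(f"occupancy ~ {psi_hat:.2f}, detection probability ~ {p_hat:.2f}")
```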

  18. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    PubMed

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    Threats of detection by infrared (IR) signals are greater than those from other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize its detection status. Recently, research on actively reducing IR signatures has been conducted, controlling the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization algorithm to estimate the IR stealth surface temperature that synchronizes the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the proposed inverse distance weighted active IR stealth technique reduces the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, in which the CRI is determined as the simple signal difference between the object and the background.

  19. System of technical vision for autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on the UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the vision system to more accurately determine the location of objects of interest and their movement relative to the camera.

  20. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.

  1. The Clusters AgeS Experiment (CASE). Variable Stars in the Field of the Globular Cluster NGC 3201

    NASA Astrophysics Data System (ADS)

    Kaluzny, J.; Rozyczka, M.; Thompson, I. B.; Narloch, W.; Mazur, B.; Pych, W.; Schwarzenberg-Czerny, A.

    2016-01-01

    The field of the globular cluster NGC 3201 was monitored between 1998 and 2009 in a search for variable stars. BV light curves were obtained for 152 periodic or likely periodic variables, fifty-seven of which are new detections. Thirty-seven newly detected variables are proper motion members of the cluster. Among them we found seven detached or semi-detached eclipsing binaries, four contact binaries, and eight SX Phe pulsators. Four of the eclipsing binaries are located in the turnoff region, one on the lower main sequence and the remaining two slightly above the subgiant branch. Two contact systems are blue stragglers, and another two reside in the turnoff region. In the blue straggler region a total of 266 objects were found, of which 140 are proper motion (PM) members of NGC 3201, and another nineteen are field stars. Seventy-eight of the remaining objects for which we do not have PM data are located within the half-light radius from the center of the cluster, and most of them are likely genuine blue stragglers. Four variable objects in our field of view were found to coincide with X-ray sources: three chromospherically active stars and a quasar at a redshift z≍0.5.

  2. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in the stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
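
    The core RANSAC step can be sketched as repeatedly fitting a constant-velocity track to two randomly sampled detections and keeping the hypothesis with the most inliers. The data layout, thresholds, and toy scene below are assumptions for illustration, not the RANSAC-MT implementation.

```python
import numpy as np

def ransac_track(detections, n_iter=500, tol=1.5, min_inliers=8, rng=None):
    """Fit x(t) = x0 + v*t to candidate detections containing many outliers.

    detections: array of rows (t, x, y) -- frame time and image position of a blob.
    Returns (x0, v, inlier_mask) for the best constant-velocity hypothesis found.
    """
    rng = np.random.default_rng(rng)
    t, xy = detections[:, 0], detections[:, 1:3]
    best = (None, None, np.zeros(len(t), dtype=bool))
    for _ in range(n_iter):
        i, j = rng.choice(len(t), size=2, replace=False)
        if t[i] == t[j]:
            continue                               # need samples from two distinct frames
        v = (xy[j] - xy[i]) / (t[j] - t[i])        # candidate velocity (pixels/frame)
        x0 = xy[i] - v * t[i]
        resid = np.linalg.norm(xy - (x0 + np.outer(t, v)), axis=1)
        inliers = resid < tol
        if inliers.sum() >= min_inliers and inliers.sum() > best[2].sum():
            best = (x0, v, inliers)
    return best

# Toy data: a slow drifter plus random clutter blobs.
rng = np.random.default_rng(0)
frames = np.arange(30)
drifter = np.c_[frames, 100 + 0.8 * frames, 50 + 0.3 * frames]
clutter = np.c_[rng.integers(0, 30, 200), rng.uniform(0, 512, (200, 2))]
x0, v, mask = ransac_track(np.vstack([drifter, clutter]).astype(float))
print("velocity estimate:", v, "inliers:", mask.sum())
```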

  3. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    PubMed Central

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-01-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment. PMID:27922592

  4. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification

    NASA Astrophysics Data System (ADS)

    Bradbury, Kyle; Saboo, Raghav; L. Johnson, Timothy; Malof, Jordan M.; Devarajan, Arjun; Zhang, Wuming; M. Collins, Leslie; G. Newell, Richard

    2016-12-01

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

  5. Distributed solar photovoltaic array location and extent dataset for remote sensing object identification.

    PubMed

    Bradbury, Kyle; Saboo, Raghav; L Johnson, Timothy; Malof, Jordan M; Devarajan, Arjun; Zhang, Wuming; M Collins, Leslie; G Newell, Richard

    2016-12-06

    Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap in distributed solar photovoltaic (PV) arrays, of which there is limited public data on solar PV deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.
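
    As a small illustration of one listed application (estimating installed PV capacity from panel footprints), the sketch below converts a panel's border vertices into an area via the shoelace formula and applies a nominal module rating. The coordinates are assumed to be projected to metres, and the 160 W/m^2 factor is an assumption, not part of the dataset.

```python
def polygon_area_m2(vertices):
    """Shoelace formula for a simple polygon given [(x, y), ...] vertices in metres."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def estimated_capacity_kw(vertices, watts_per_m2=160.0):
    """Rough installed-capacity estimate; ~160 W/m^2 is a nominal module rating (an assumption)."""
    return polygon_area_m2(vertices) * watts_per_m2 / 1000.0

panel = [(0.0, 0.0), (5.2, 0.0), (5.2, 3.1), (0.0, 3.1)]   # a 5.2 m x 3.1 m rooftop array
print(f"{polygon_area_m2(panel):.1f} m^2, ~{estimated_capacity_kw(panel):.1f} kW")
```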

  6. The Role of Optimality in Characterizing CO2 Seepage from Geological Carbon Sequestration Sites

    NASA Astrophysics Data System (ADS)

    Cortis, A.; Oldenburg, C. M.; Benson, S. M.

    2007-12-01

    Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse-gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this talk we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of the: (1) region that needs to be monitored; (2) footprint of the measurement approach; (3) main seepage zone; and (4) region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage nor existing evidence for seepage requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage without need for detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas. This work was carried out within the ZERT project, funded by the Assistant Secretary for Fossil Energy, Office of Sequestration, Hydrogen, and Clean Coal Fuels, National Energy Technology Laboratory, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  7. Development and evaluation of modified envelope correlation method for deep tectonic tremor

    NASA Astrophysics Data System (ADS)

    Mizuno, N.; Ide, S.

    2017-12-01

    We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting components of data by the inverse of error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of likelihood. The average of weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as initial values, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate that the resolution, defined as the minimum distance at which two sources can be detected separately by the location method, is about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms from western Japan spanning more than 10 years, the new method detected 27% more tremors than the previous method, owing to multiple-source detection and the improved accuracy of the weighting scheme.
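
    A stripped-down version of the objective could look like the following: for a trial epicentre, each station pair's envelopes are correlated at the lag implied by the predicted travel-time difference, weighted by inverse error variances, and averaged. The station geometry, velocity, sampling rate, and circular-shift alignment are simplifications for illustration, not the authors' exact formulation.

```python
import numpy as np

def acc(trial_xy, stations, envelopes, sigma2, v=3.5, fs=20.0):
    """Average weighted cross-correlation of station envelopes at the lags
    implied by a trial source position (depth held fixed, as in the grid-search step).

    stations:  (n, 2) horizontal station coordinates in km
    envelopes: (n, m) smoothed envelope traces sampled at fs Hz
    sigma2:    (n,) error variances used as inverse weights
    """
    tt = np.linalg.norm(stations - trial_xy, axis=1) / v        # predicted travel times (s)
    score, wsum = 0.0, 0.0
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lag = int(round((tt[i] - tt[j]) * fs))               # predicted sample lag
            a, b = envelopes[i], np.roll(envelopes[j], lag)      # align j to i (circular shift)
            cc = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            w = 1.0 / (sigma2[i] * sigma2[j])
            score += w * cc
            wsum += w
    return score / wsum

# Minimal synthetic check: a Gaussian pulse delayed by the travel time to each station.
stations = np.array([[0.0, 0.0], [80.0, 0.0], [0.0, 80.0], [80.0, 80.0]])
src = np.array([30.0, 50.0])
fs, m = 20.0, 2000
tgrid = np.arange(m) / fs
tt_true = np.linalg.norm(stations - src, axis=1) / 3.5
envelopes = np.exp(-0.5 * ((tgrid[None, :] - 40.0 - tt_true[:, None]) / 3.0) ** 2)
sigma2 = np.ones(len(stations))
print(acc(src, stations, envelopes, sigma2),              # high ACC at the true location
      acc(np.array([70.0, 10.0]), stations, envelopes, sigma2))
```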

  8. Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.

    PubMed

    Rathi, Shweta; Gupta, Rajesh

    2014-04-01

    Water quality must be monitored at salient locations in water distribution networks (WDNs) to assure the safety of the water supplied to consumers. Such monitoring stations (MSs) provide warning against accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, detection likelihood and others, have been independently or jointly considered in determining the optimal number and location of MSs in WDNs. "Demand coverage," defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on formulating a coverage matrix using a pre-specified coverage criterion and optimization have been suggested. The coverage criterion is defined as the minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion, so the design of monitoring stations becomes subjective. A simple methodology is proposed herein that iteratively selects MSs in priority order to achieve a targeted demand coverage. The proposed methodology provided the same number and location of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from choosing a coverage criterion. The application of the methodology is also shown on the WDN of the Dharampeth zone (Nagpur city, Maharashtra, India), which has 285 nodes and 367 pipes.
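
    The priority-wise selection idea can be sketched as a greedy loop: repeatedly pick the candidate node that adds the most not-yet-covered demand until the targeted demand coverage is reached. The coverage sets and demands below are toy inputs; in practice they would be derived from flow analysis of the network.

```python
def select_monitoring_stations(covers, demand, target_fraction=0.9):
    """Greedy, priority-wise selection of monitoring stations.

    covers: dict mapping each candidate node to the set of nodes whose demand
            it monitors (built beforehand from flow analysis of the network)
    demand: dict mapping node -> demand
    """
    total = sum(demand.values())
    covered, stations = set(), []
    while sum(demand[n] for n in covered) < target_fraction * total:
        # pick the candidate adding the most not-yet-covered demand
        best = max(covers, key=lambda c: sum(demand[n] for n in covers[c] - covered))
        gain = sum(demand[n] for n in covers[best] - covered)
        if gain == 0:
            break                      # no candidate adds coverage; stop early
        stations.append(best)
        covered |= covers[best]
    return stations

covers = {"A": {"A", "B"}, "B": {"B", "C", "D"}, "C": {"C"}, "D": {"D", "E"}}
demand = {"A": 10, "B": 25, "C": 15, "D": 20, "E": 30}
print(select_monitoring_stations(covers, demand, target_fraction=0.9))   # ['B', 'D']
```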

  9. Sex pheromone source location by garter snakes: A mechanism for detection of direction in nonvolatile trails.

    PubMed

    Ford, N B; Low, J R

    1984-08-01

    Male plains garter snakes,Thamnophis radix, tested in a 240-cm-long arena can detect directional information from a female pheromone trail only when the female is allowed to push against pegs while laying the trail. The female's normal locomotor activity apparently deposits pheromone on the anterolateral surfaces of vertical structures in her environment. The male sensorily assays the sides of these objects and from this information determines the female's direction of travel.

  10. Locating single-point sources from arrival times containing large picking errors (LPEs): the virtual field optimization method (VFOM)

    NASA Astrophysics Data System (ADS)

    Li, Xi-Bing; Wang, Ze-Wei; Dong, Long-Jun

    2016-01-01

    Microseismic monitoring systems using local location techniques tend to be timely, automatic and stable. One basic requirement of these systems is the automatic picking of arrival times. However, arrival times generated by automated techniques always contain large picking errors (LPEs), which may make the location solution unreliable and cause the integrated system to be unstable. To overcome the LPE issue, we propose the virtual field optimization method (VFOM) for locating single-point sources. In contrast to existing approaches, the VFOM optimizes a continuous, virtually established objective function to search for the common intersection of the hyperboloids determined by the sensor pairs, rather than minimizing the residual between model-calculated and measured arrivals. The results of numerical examples and in-situ blasts show that the VFOM can obtain more precise and stable solutions than traditional methods when the input data contain LPEs. Furthermore, we discuss the impact of LPEs on the objective function to characterize the LPE-tolerance mechanism, velocity sensitivity and stopping criteria of the VFOM. The proposed method is also capable of locating acoustic sources using passive techniques such as passive sonar detection and acoustic emission.
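
    One way to picture a continuous, virtually established objective is a sum over sensor pairs of a kernel on each pair's hyperboloid misfit, so that a single badly picked arrival only flattens its own pairs instead of dragging a least-squares solution. The sketch below is purely illustrative of that idea (Gaussian kernel, toy geometry, coarse scan plus refinement), not the published VFOM formulation.

```python
import numpy as np
from scipy.optimize import minimize

def virtual_field(x, sensors, arrivals, v=5.0, sigma=0.5):
    """Sum over sensor pairs of a Gaussian kernel on the hyperboloid misfit,
    i.e. how well the trial point x reproduces each pair's arrival-time difference."""
    d = np.linalg.norm(sensors - x, axis=1)
    score = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            misfit = (d[i] - d[j]) / v - (arrivals[i] - arrivals[j])
            score += np.exp(-misfit**2 / (2 * sigma**2))
    return score

sensors = np.array([[0.0, 0, 0], [100, 0, 0], [0, 100, 0], [100, 100, 0], [50, 50, 80]])
true_src = np.array([40.0, 70.0, 30.0])
arrivals = np.linalg.norm(sensors - true_src, axis=1) / 5.0
arrivals[2] += 0.5                                   # one arrival with a large picking error

# Coarse scan to seed the optimizer (the field is flat far from the intersection),
# then a local refinement of the maximum.
grid = [np.array([x, y, z], dtype=float) for x in range(0, 101, 10)
        for y in range(0, 101, 10) for z in range(0, 61, 10)]
x0 = max(grid, key=lambda g: virtual_field(g, sensors, arrivals))
res = minimize(lambda x: -virtual_field(x, sensors, arrivals), x0=x0, method="Nelder-Mead")
print("estimated source:", res.x)
```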

  11. The objects of visuospatial short-term memory: Perceptual organization and change detection.

    PubMed

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy.

  12. The objects of visuospatial short-term memory: Perceptual organization and change detection

    PubMed Central

    Nikolova, Atanaska; Macken, Bill

    2016-01-01

    We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy. PMID:26286369

  13. Detecting abandoned objects using interacting multiple models

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Münch, David; Kieritz, Hilke; Hübner, Wolfgang; Arens, Michael

    2015-10-01

    In recent years, the wide use of video surveillance systems has caused an enormous increase in the amount of data that has to be stored, monitored, and processed. As a consequence, it is crucial to support human operators with automated surveillance applications. Towards this end, an intelligent video analysis module for real-time alerting in the case of abandoned objects in public spaces is proposed. The overall processing pipeline consists of two major parts. First, person motion is modeled using an Interacting Multiple Model (IMM) filter. The IMM filter estimates the state of a person according to a finite-state, discrete-time Markov chain. Second, the location of persons that stay at a fixed position defines a region of interest, in which a nonparametric background model with dynamic per-pixel state variables identifies abandoned objects. When an abandoned object is detected, an alarm event is triggered. The effectiveness of the proposed system is evaluated on the PETS 2006 dataset and the i-Lids dataset, both reflecting prototypical surveillance scenarios.
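
    The second stage's idea, a per-pixel background model plus a dwell counter that flags pixels which stay foreground too long, can be sketched as follows. A running-average background stands in for the paper's nonparametric model, the IMM person-tracking stage is omitted, and the thresholds and the video file name are assumptions.

```python
import cv2
import numpy as np

def abandoned_object_mask(frames, dwell_frames=150, lr=0.01, diff_thresh=25):
    """Flag pixels that stay foreground for longer than `dwell_frames`.

    frames: iterable of grayscale images (uint8). A slowly adapting running-average
    background stands in for a nonparametric per-pixel model.
    """
    background, age, alarm = None, None, None
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()
            age = np.zeros_like(f)
        fg = np.abs(f - background) > diff_thresh
        age = np.where(fg, age + 1, 0)        # reset the counter when the pixel matches again
        background = np.where(fg, background, (1 - lr) * background + lr * f)
        alarm = age > dwell_frames            # static long enough -> candidate abandoned object
    return alarm

def gray_frames(path):
    """Yield grayscale frames from a video file."""
    cap = cv2.VideoCapture(path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

mask = abandoned_object_mask(gray_frames("pets2006_sample.avi"))   # hypothetical file name
```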

  14. Rapid shape detection signals in area V4

    PubMed Central

    Weiner, Katherine F.; Ghose, Geoffrey M.

    2014-01-01

    Vision in foveate animals is an active process that requires rapid and constant decision-making. For example, when a new object appears in the visual field, we can quickly decide to inspect it by directing our eyes to the object's location. We studied the contribution of primate area V4 to these types of rapid foveation decisions. Animals performed a reaction time task that required them to report when any shape appeared within a peripherally-located noisy stimulus by making a saccade to the stimulus location. We found that about half of the randomly sampled V4 neurons not only rapidly and precisely represented the appearance of this shape, but they were also predictive of the animal's saccades. A neuron's ability to predict the animal's saccades was not related to the specificity with which the cell represented a single type of shape but rather to its ability to signal whether any shape was present. This relationship between sensory sensitivity and behavioral predictiveness was not due to global effects such as alertness, as it was equally likely to be observed for cells with increases and decreases in firing rate. Careful analysis of the timescales of reliability in these neurons implies that they reflect both feedforward and feedback shape detecting processes. In approximately 7% of our recorded sample, individual neurons were able to predict both the delay and precision of the animal's shape detection performance. This suggests that a subset of V4 neurons may have been directly and causally contributing to task performance and that area V4 likely plays a critical role in guiding rapid, form-based foveation decisions. PMID:25278828

  15. The positional-specificity effect reveals a passive-trace contribution to visual short-term memory.

    PubMed

    Postle, Bradley R; Awh, Edward; Serences, John T; Sutterer, David W; D'Esposito, Mark

    2013-01-01

    The positional-specificity effect refers to enhanced performance in visual short-term memory (VSTM) when the recognition probe is presented at the same location as had been the sample, even though location is irrelevant to the match/nonmatch decision. We investigated the mechanisms underlying this effect with behavioral and fMRI studies of object change-detection performance. To test whether the positional-specificity effect is a direct consequence of active storage in VSTM, we varied memory load, reasoning that it should be observed for all objects presented in a sub-span array of items. The results, however, indicated that although robust with a memory load of 1, the positional-specificity effect was restricted to the second of two sequentially presented sample stimuli in a load-of-2 experiment. An additional behavioral experiment showed that this disruption wasn't due to the increased load per se, because actively processing a second object--in the absence of a storage requirement--also eliminated the effect. These behavioral findings suggest that, during tests of object memory, position-related information is not actively stored in VSTM, but may be retained in a passive tag that marks the most recent site of selection. The fMRI data were consistent with this interpretation, failing to find location-specific bias in sustained delay-period activity, but revealing an enhanced response to recognition probes that matched the location of that trial's sample stimulus.

  16. Railway clearance intrusion detection method with binocular stereo vision

    NASA Astrophysics Data System (ADS)

    Zhou, Xingfang; Guo, Baoqing; Wei, Wei

    2018-03-01

    During railway construction and operation, objects intruding into the railway clearance greatly threaten the safety of railway operation. Real-time intrusion detection is of great importance. To address the depth insensitivity and shadow interference of single-image methods, an intrusion detection method based on binocular stereo vision is proposed that reconstructs the 3D scene to locate objects and judge clearance intrusion. The binocular cameras are calibrated with Zhang Zhengyou's method. In order to improve the 3D reconstruction speed, a suspicious region is first determined by a background-difference method applied to a single camera's image sequence. Image rectification, stereo matching and 3D reconstruction are only executed when there is a suspicious region. A transformation matrix from the Camera Coordinate System (CCS) to the Track Coordinate System (TCS) is computed using the gauge constant and used to transfer the 3D point clouds into the TCS; the point clouds are then used to calculate object position and intrusion in the TCS. Experiments in a railway scene show that the position precision is better than 10 mm. The method is effective for clearance intrusion detection and satisfies the requirements of railway applications.
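
    The geometric core of such a pipeline (disparity from semi-global matching, reprojection to 3D, transformation into track coordinates, clearance test) could be sketched with OpenCV as below. The calibration outputs, camera-to-track transform, and clearance box are assumed inputs rather than values from the paper.

```python
import cv2
import numpy as np

def intruding_points(rect_left, rect_right, Q, T_track_from_cam, clearance_box):
    """Return 3D points (in track coordinates) that fall inside the clearance envelope.

    rect_left/rect_right: rectified grayscale image pair
    Q:                    4x4 reprojection matrix from stereoRectify (calibration step)
    T_track_from_cam:     4x4 transform from camera to track coordinates (e.g. from the gauge)
    clearance_box:        ((xmin, xmax), (ymin, ymax), (zmin, zmax)) in track coordinates
    """
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disparity, Q)             # HxWx3, camera coordinates
    valid = disparity > 0
    pts = points[valid].reshape(-1, 3)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])          # homogeneous coordinates
    track = (T_track_from_cam @ pts_h.T).T[:, :3]
    (x0, x1), (y0, y1), (z0, z1) = clearance_box
    inside = ((track[:, 0] > x0) & (track[:, 0] < x1) &
              (track[:, 1] > y0) & (track[:, 1] < y1) &
              (track[:, 2] > z0) & (track[:, 2] < z1))
    return track[inside]                                      # non-empty -> clearance intrusion
```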

  17. Detection of Optically Faint GEO Debris

    NASA Technical Reports Server (NTRS)

    Seitzer, P.; Lederer, S.; Barker, E.; Cowardin, H.; Abercromby, K.; Silha, J.; Burkhardt, A.

    2014-01-01

    There have been extensive optical surveys for debris at geosynchronous orbit (GEO) conducted with meter-class telescopes, such as those conducted with MODEST (the Michigan Orbital DEbris Survey Telescope, a 0.6-m telescope located at Cerro Tololo in Chile), and the European Space Agency's 1.0-m space debris telescope (SDT) in the Canary Islands. These surveys have detection limits in the range of 18th or 19th magnitude, which corresponds to sizes larger than 10 cm assuming an albedo of 0.175. All of these surveys reveal a substantial population of objects fainter than R = 15th magnitude that are not in the public U.S. Satellite Catalog. To detect objects fainter than 20th magnitude (and presumably smaller than 10 cm) in the visible requires a larger telescope and excellent imaging conditions. This combination is available in Chile. NASA's Orbital Debris Program Office has begun collecting orbital debris observations with the 6.5-m (21.3-ft diameter) "Walter Baade" Magellan telescope at Las Campanas Observatory. The goal is to detect objects as faint as possible from a ground-based observatory and begin to understand the brightness distribution of GEO debris fainter than R = 20th magnitude.

  18. Genetic algorithm for investigating flight MH370 in Indian Ocean using remotely sensed data

    NASA Astrophysics Data System (ADS)

    Marghany, Maged; Mansor, Shattri; Shariff, Abdul Rashid Bin Mohamed

    2016-06-01

    This study utilized a genetic algorithm (GA) for automatic detection and for simulating the trajectory of flight MH370 debris. In doing so, one and a half years of data from the Ocean Surface Topography Mission (OSTM) on the Jason-2 satellite were used to simulate the pattern of flight MH370 debris movements across the southern Indian Ocean. Further, a multi-objective evolutionary algorithm was also used to assess the uncertainty in the imaging and detection of flight MH370 debris. The study shows that the ocean surface current speed is 0.5 m/s. These current patterns develop a large anticlockwise gyre over a water depth of 8,000 m. The multi-objective evolutionary algorithm suggested that the objects present in the satellite data are not flight MH370 debris. In addition, it suggested that it is difficult to determine the exact location of flight MH370 owing to the complicated hydrodynamic movements across the southern Indian Ocean.

  19. Threat captures attention but does not affect learning of contextual regularities.

    PubMed

    Yamaguchi, Motonori; Harwood, Sarah L

    2017-04-01

    Some of the stimulus features that guide visual attention are abstract properties of objects such as potential threat to one's survival, whereas others are complex configurations such as visual contexts that are learned through past experiences. The present study investigated the two functions that guide visual attention, threat detection and learning of contextual regularities, in visual search. Search arrays contained images of threat and non-threat objects, and their locations were fixed on some trials but random on other trials. Although they were irrelevant to the visual search task, threat objects facilitated attention capture and impaired attention disengagement. Search time improved for fixed configurations more than for random configurations, reflecting learning of visual contexts. Nevertheless, threat detection had little influence on learning of the contextual regularities. The results suggest that factors guiding visual attention are different from factors that influence learning to guide visual attention.

  20. Implementation of jump-diffusion algorithms for understanding FLIR scenes

    NASA Astrophysics Data System (ADS)

    Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.

    1995-07-01

    Our pattern theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. An hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as Silicon Graphics hardware. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.

  1. Millisecond Pulsar Companions in SDSS and Pan-Starrs

    NASA Astrophysics Data System (ADS)

    McMann, Natasha; Holley-Bockelmann, Kelly; McLaughlin, Maura; Kaplan, David; NANOGrav

    2018-01-01

    Millisecond pulsars (MSPs) are being timed precisely in hopes of detecting gravitational waves (GWs). In order to detect GWs, pulsars must be studied in great detail. The perturbations in timing caused by binaries must be determined so as not to confuse them with a GW perturbation. This study used a list of published MSPs to determine if any known MSP’s white dwarf companion is located and visible in the Sloan Digital Sky Survey (SDSS) and the Panoramic Survey Telescope and Rapid Response System (Pan-Starrs) footprints. No new possible companions were discovered, but five objects that could be the companion to an MSP were found in SDSS, and 18 (including the same five from SDSS) were found in Pan-Starrs. All objects are less than 1.5 arcseconds from the MSP’s position. In order to verify an object as the companion, its colors and magnitudes must be compared to those previously published.

  2. Trainable Cataloging for Digital Image Libraries with Applications to Volcano Detection

    NASA Technical Reports Server (NTRS)

    Burl, M. C.; Fayyad, U. M.; Perona, P.; Smyth, P.

    1995-01-01

    Users of digital image libraries are often not interested in image data per se but in derived products such as catalogs of objects of interest. Converting an image database into a usable catalog is typically carried out manually at present. For many larger image databases the purely manual approach is completely impractical. In this paper we describe the development of a trainable cataloging system: the user indicates the location of the objects of interest for a number of training images and the system learns to detect and catalog these objects in the rest of the database. In particular we describe the application of this system to the cataloging of small volcanoes in radar images of Venus. The volcano problem is of interest because of the scale (30,000 images, order of 1 million detectable volcanoes), technical difficulty (the variability of the volcanoes in appearance) and the scientific importance of the problem. The problem of uncertain or subjective ground truth is of fundamental importance in cataloging problems of this nature and is discussed in some detail. Experimental results are presented which quantify and compare the detection performance of the system relative to human detection performance. The paper concludes by discussing the limitations of the proposed system and the lessons learned of general relevance to the development of digital image libraries.

  3. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands

    PubMed Central

    Mateo, Carlos M.; Gil, Pablo; Torres, Fernando

    2016-01-01

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102

  4. Accessing long-term memory representations during visual change detection.

    PubMed

    Beck, Melissa R; van Lamsweerde, Amanda E

    2011-04-01

    In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.

  5. Discrimination Report: A Multisensor system for detection and characterization of UXO, ESTCP Project MM-0437

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gasperikova, Erika; Smith, J. Torquil; Morrison, H.Frank

    2008-01-14

    The Berkeley UXO Discriminator (BUD) is an optimally designed active electromagnetic system that not only detects but also characterizes UXO. The performance of the system is governed by a target size-depth curve. BUD was designed to detect UXO in the 20 mm to 155 mm size range for depths between 0 and 1.5 m, and to characterize them in a depth range from 0 to 1.1 m. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. Eight receiver coils are placed horizontally along the two diagonals of the upper and lower planes of the two horizontal transmitter loops. These receiver coil pairs are located on symmetry lines through the center of the system and each pair sees identical fields during the on-time of the pulse in all of the transmitter coils. They are wired in opposition to produce zero output during the on-time of the pulses in the three orthogonal transmitters. Moreover, this configuration dramatically reduces noise in the measurements by canceling the background electromagnetic fields (these fields are uniform over the scale of the receiver array and are consequently nulled by the differencing operation), and by canceling the noise contributed by the tilt motion of the receivers in the Earth's magnetic field, and greatly enhances receiver sensitivity to the gradients of the target response. BUD is mounted on a small cart to assure system mobility. System positioning is provided by a Real Time Kinematic (RTK) GPS receiver. The system has two modes of operation: (1) the search mode, in which BUD moves along a profile and exclusively detects targets in its vicinity, providing target depth and horizontal location, and (2) the discrimination mode, in which BUD is stationary above a target, and determines three discriminating polarizability responses together with the object location and orientation from a single position of the system. The detection performance of the system is governed by a size-depth curve shown in Figure 2. This curve was calculated for BUD assuming that the receiver plane is 0.2 m above the ground. Figure 2 shows that, for example, BUD can detect an object with 0.1 m diameter down to a depth of 0.9 m with a depth uncertainty of 10%. Any objects buried at a depth of more than 1.3 m will have a low probability of detection. The discrimination performance of the system is governed by a size-depth curve shown in Figure 3. Again, this curve was calculated for BUD assuming that the receiver plane is 0.2 m above the ground. Figure 3 shows that, for example, BUD can determine the polarizability of an object with 0.1 m diameter down to a depth of 0.63 m with a polarizability uncertainty of 10%. Any objects buried at depths greater than 0.9 m will have a low discrimination probability. Object orientation estimates and equivalent dipole polarizability estimates used for large and shallow UXO/scrap discrimination are more problematic, as they are affected by higher order (non-dipole) terms induced in objects due to source field gradients along the length of the objects. For example, a vertical 0.4 m object directly below the system needs to be about 0.90 m deep for perturbations due to gradients along the length of the object to be of the order of 20% of the uniform field object response. Similarly, vertical objects 0.5 m and 0.6 m long need to be 1.15 m and 1.42 m, respectively, below the system. For horizontal objects the effect of gradients across the object diameter is much smaller. For example, 155 mm and 105 mm projectiles need to be only 0.30 m and 0.19 m, respectively, below the system. A polarizability index (in cm³), which is an average value of the product of time (in seconds) and polarizability rate (in m³/s) over the 34 sample times logarithmically spaced from 143 to 1300 µs and over the three polarizabilities, can be calculated for any object. We used this polarizability index to decide when the object is in a uniform source field. Objects with a polarizability index smaller than 600 cm³ and deeper than 1.8 m below BUD, or smaller than 200 cm³ and deeper than 1.35 m, or smaller than 80 cm³ and deeper than 0.90 m, or smaller than 9 cm³ and deeper than 0.20 m below BUD are sufficiently deep that the effects of vertical source field gradients should be less than 15%. All other objects are considered large and shallow objects. At the moment, interpretation software is available for a single object only. In the case of multiple objects, the software indicates the possible presence of metallic objects but is unable to provide characteristics of each individual object.

  6. The feasibility test of state-of-the-art face detection algorithms for vehicle occupant detection

    NASA Astrophysics Data System (ADS)

    Makrushin, Andrey; Dittmann, Jana; Vielhauer, Claus; Langnickel, Mirko; Kraetzer, Christian

    2010-01-01

    Vehicle seat occupancy detection systems are designed to prevent the deployment of airbags at unoccupied seats, thus avoiding the considerable cost imposed by the replacement of airbags. Occupancy detection can also improve passenger comfort, e.g. by activating air-conditioning systems. The most promising development perspectives are seen in optical sensing systems, which have become cheaper and smaller in recent years. The most plausible way to check seat occupancy is to detect the presence and location of heads, or more precisely, faces. This paper compares the detection performances of the three most commonly used and widely available face detection algorithms: Viola-Jones, Kienzle et al. and Nilsson et al. The main objective of this work is to identify whether one of these systems is suitable for use in a vehicle environment with variable and mostly non-uniform illumination conditions, and whether any one face detection system can be sufficient for seat occupancy detection. The evaluation of detection performance is based on a large database comprising 53,928 video frames containing proprietary data collected from 39 persons of both sexes and different ages and body heights, as well as different objects such as bags and rearward/forward-facing child restraint systems.
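
    For reference, a minimal Viola-Jones run with OpenCV's bundled Haar cascade is shown below; histogram equalization is one common (partial) mitigation for non-uniform cabin illumination, and the image path is hypothetical.

```python
import cv2

# Viola-Jones detector with OpenCV's bundled frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("seat_camera_frame.png")      # hypothetical in-cabin image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)                    # partial compensation for non-uniform lighting

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(40, 40))
print("seat occupied" if len(faces) else "no face detected")
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```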

  7. Characterizing age-related decline of recognition memory and brain activation profile in mice.

    PubMed

    Belblidia, Hassina; Leger, Marianne; Abdelmalek, Abdelouadoud; Quiedeville, Anne; Calocer, Floriane; Boulouard, Michel; Jozet-Alves, Christelle; Freret, Thomas; Schumann-Bard, Pascale

    2018-06-01

    Episodic memory decline is one of the earlier deficits occurring during normal aging in humans. The question of spatial versus non-spatial sensitivity to age-related memory decline is of importance for a full understanding of these changes. Here, we characterized the effect of normal aging on both non-spatial (object) and spatial (object location) memory performances as well as on associated neuronal activation in mice. Novel-object (NOR) and object-location (OLR) recognition tests, respectively assessing the identity and spatial features of object memory, were examined at different ages. We show that memory performances in both tests were altered by aging as early as 15 months of age: NOR memory was partially impaired, whereas OLR memory was fully disrupted. Brain activation profiles were assessed for both tests using immunohistochemical detection of c-Fos (a neuronal activation marker) in 3- and 15-month-old mice. Normal performance in the NOR task by 3-month-old mice was associated with activation of the hippocampus and a trend towards activation of the perirhinal cortex, in a way that differed significantly from 15-month-old mice. During the OLR task, brain activation took place in the hippocampus in 3-month-old but not significantly in 15-month-old mice, which were fully impaired at this task. These differential alterations of object and object-location recognition memory may be linked to differential alteration of the neuronal networks supporting these tasks. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.

    PubMed

    Tian, Yingli; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2013-04-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech.

  9. Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments

    PubMed Central

    Tian, YingLi; Yang, Xiaodong; Yi, Chucai; Arditi, Aries

    2012-01-01

    Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech. PMID:23630409

  10. Saliency predicts change detection in pictures of natural scenes.

    PubMed

    Wright, Michael J

    2005-01-01

    It has been proposed that the visual system encodes the salience of objects in the visual field in an explicit two-dimensional map that guides visual selective attention. Experiments were conducted to determine whether salience measurements applied to regions of pictures of outdoor scenes could predict the detection of changes in those regions. To obtain a quantitative measure of change detection, observers located changes in pairs of colour pictures presented across an interstimulus interval (ISI). Salience measurements were then obtained from different observers for image change regions using three independent methods, and all were positively correlated with change detection. Factor analysis extracted a single saliency factor that accounted for 62% of the variance contained in the four measures. Finally, estimates of the magnitude of the image change in each picture pair were obtained, using nine separate visual filters representing low-level vision features (luminance, colour, spatial frequency, orientation, edge density). None of the feature outputs was significantly associated with change detection or saliency. On the other hand it was shown that high-level (structural) properties of the changed region were related to saliency and to change detection: objects were more salient than shadows and more detectable when changed.

  11. A Context-sensitive Approach to Anonymizing Spatial Surveillance Data: Impact on Outbreak Detection

    PubMed Central

    Cassa, Christopher A.; Grannis, Shaun J.; Overhage, J. Marc; Mandl, Kenneth D.

    2006-01-01

    Objective: The use of spatially based methods and algorithms in epidemiology and surveillance presents privacy challenges for researchers and public health agencies. We describe a novel method for anonymizing individuals in public health data sets by transposing their spatial locations through a process informed by the underlying population density. Further, we measure the impact of the skew on detection of spatial clustering as measured by a spatial scanning statistic. Design: Cases were emergency department (ED) visits for respiratory illness. Baseline ED visit data were injected with artificially created clusters ranging in magnitude, shape, and location. The geocoded locations were then transformed using a de-identification algorithm that accounts for the local underlying population density. Measurements: A total of 12,600 separate weeks of case data with artificially created clusters were combined with control data and the impact on detection of spatial clustering identified by a spatial scan statistic was measured. Results: The anonymization algorithm produced an expected skew of cases that resulted in high values of data set k-anonymity. De-identification that moves points an average distance of 0.25 km lowers the spatial cluster detection sensitivity by less than 4% and lowers the detection specificity less than 1%. Conclusion: A population-density–based Gaussian spatial blurring markedly decreases the ability to identify individuals in a data set while only slightly decreasing the performance of a standardly used outbreak detection tool. These findings suggest new approaches to anonymizing data for spatial epidemiology and surveillance. PMID:16357353
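
    As a rough illustration of the general idea (not the authors' exact algorithm), the sketch below displaces case coordinates with Gaussian noise whose spread shrinks as local population density grows, so dense urban areas receive smaller offsets; the density values and the scaling constant k are assumptions for the example.

```python
import numpy as np

def anonymize_points(coords_km, pop_density, k=50.0, rng=None):
    """Displace (x, y) case locations (in km) with population-density-scaled
    Gaussian noise: denser areas get smaller average displacements.

    coords_km   : (N, 2) projected case coordinates in km
    pop_density : (N,) local population density (people per km^2)
    k           : assumed tuning constant controlling overall displacement
    """
    rng = np.random.default_rng() if rng is None else rng
    # Standard deviation inversely related to the square root of density,
    # so the expected "crowd" of neighbors within reach stays roughly constant.
    sigma = k / np.sqrt(np.asarray(pop_density, dtype=float))
    noise = rng.normal(0.0, sigma[:, None], size=coords_km.shape)
    return coords_km + noise

# Example with made-up densities (people/km^2):
pts = np.array([[10.0, 20.0], [10.5, 19.8]])
density = np.array([5000.0, 200.0])
print(anonymize_points(pts, density, rng=np.random.default_rng(0)))
```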

  12. The Rapidly Moving Telescope: an Instrument for the Precise Study of Optical Transients

    NASA Technical Reports Server (NTRS)

    Teegarden, B. J.; Vonrosenvinge, T. T.; Cline, T. L.; Kaipa, R.

    1983-01-01

    The development of a small telescope with a very rapid pointing capability is described, whose purpose is to search for and study fast optical transients that may be associated with gamma-ray bursts and other phenomena. The primary motivation for this search is the discovery of the existence of a transient optical event from the known location of a gamma-ray burst. The telescope has the capability of rapidly acquiring any target in the night sky within 0.7 second and locating the object's position with ±1 arcsec accuracy. The initial detection of the event is accomplished by the MIT explosive transient camera, or ETC. This provides rough pointing coordinates to the RMT on average within approximately 1 second after the detection of the event.

  13. Stand-off thermal IR minefield survey: system concept and experimental results

    NASA Astrophysics Data System (ADS)

    Cremer, Frank; Nguyen, Thanh T.; Yang, Lixin; Sahli, Hichem

    2005-06-01

    A detailed description of the CLEARFAST system for thermal IR stand-off minefield survey is given. The system allows (i) a stand-off diurnal observation of hazardous area, (ii) detecting anomalies, i.e. locating and searching for targets which are thermally and spectrally distinct from their surroundings, (iii) estimating the physical parameters, i.e. depth and thermal diffusivity, of the detected anomalies, and (iv) providing panoramic (mosaic) images indicating the locations of suspect objects and known markers. The CLEARFAST demonstrator has been successfully deployed and operated, in November 2004, in a real minefield within the United Nations Buffer Zone in Cyprus. The paper describes the main principles of the system and illustrates the processing chain on a set of real minefield images, together with qualitative and quantitative results.

  14. Robust multiperson tracking from a mobile platform.

    PubMed

    Ess, Andreas; Leibe, Bastian; Schindler, Konrad; van Gool, Luc

    2009-10-01

    In this paper, we address the problem of multiperson tracking in busy pedestrian zones using a stereo rig mounted on a mobile platform. The complexity of the problem calls for an integrated solution that extracts as much visual information as possible and combines it through cognitive feedback cycles. We propose such an approach, which jointly estimates camera position, stereo depth, object detection, and tracking. The interplay between those components is represented by a graphical model. Since the model has to incorporate object-object interactions and temporal links to past frames, direct inference is intractable. We, therefore, propose a two-stage procedure: for each frame, we first solve a simplified version of the model (disregarding interactions and temporal continuity) to estimate the scene geometry and an overcomplete set of object detections. Conditioned on these results, we then address object interactions, tracking, and prediction in a second step. The approach is experimentally evaluated on several long and difficult video sequences from busy inner-city locations. Our results show that the proposed integration makes it possible to deliver robust tracking performance in scenes of realistic complexity.

  15. Growth in the Number of SSN Tracked Orbital Objects

    NASA Technical Reports Server (NTRS)

    Stansbery, Eugene G.

    2004-01-01

    The number of objects in earth orbit tracked by the US Space Surveillance Network (SSN) has experienced unprecedented growth since March, 2003. Approximately 2000 orbiting objects have been added to the "Analyst list" of tracked objects. This growth is primarily due to the resumption of full power/full time operation of the AN/FPS-108 Cobra Dane radar located on Shemya Island, AK. Cobra Dane is an L-band (23-cm wavelength) phased array radar which first became operational in 1977. Cobra Dane was a "Collateral Sensor" in the SSN until 1994 when its communication link with the Space Control Center (SCC) was closed. NASA and the Air Force conducted tests in 1999 using Cobra Dane to detect and track small debris. These tests confirmed that the radar was capable of detecting and maintaining orbits on objects as small as 5-cm diameter. Subsequently, Cobra Dane was reconnected to the SSN and resumed full power/full time space surveillance operations on March 4, 2003. This paper will examine the new data and its implications to the understanding of the orbital debris environment and orbital safety.

  16. The Infrared-Optical Telescope (IRT) of the Exist Observatory

    NASA Technical Reports Server (NTRS)

    Kutyrev, Alexander; Bloom, Joshua; Gehrels, Neil; Golisano, Craig; Gong, Quan; Grindlay, Jonathan; Moseley, Samuel; Woodgate, Bruce

    2010-01-01

    The IRT is a 1.1 m visible and infrared passively cooled telescope, which can locate, identify and obtain spectra of GRB afterglows at redshifts up to z ~ 20. It will also acquire optical-IR imaging and spectroscopy of AGN and transients discovered by EXIST (the Energetic X-ray Imaging Survey Telescope). The IRT imaging and spectroscopic capabilities cover a broad spectral range from 0.3-2.2 μm in four bands. The identical fields of view in the four instrument bands are each split into three subfields: imaging, objective-prism slitless spectroscopy of the field and objective-prism single-object slit low-resolution spectroscopy, and high-resolution long-slit spectroscopy of a single object. This allows the instrument to do simultaneous broadband photometry or spectroscopy of the same object over the full spectral range, thus greatly improving the efficiency of the observatory and its detection limits. A prompt follow-up (within three minutes) of transients discovered by EXIST makes the IRT a unique tool for the detection and study of these events, which is particularly valuable at wavelengths unavailable to ground-based observatories.

  17. MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks

    NASA Astrophysics Data System (ADS)

    Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna

    In recent years, technological development has encouraged several applications based on distributed communication networks without any fixed infrastructure. This work addresses the problem of providing a collaborative early warning system for multiple mobile nodes against a fast moving object. The solution is provided subject to system-level constraints: motion of nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. The second phase consists of broadcasting the warning message with a low-directivity broad antenna beam using a 2×2 antenna array, which in the third phase is detected by receiving nodes using direction of arrival (DOA) estimation. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast arriving object can be estimated using the MUSIC algorithm for warning beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. Simulations are performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.
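
    For readers unfamiliar with the DOA step, a generic MUSIC pseudo-spectrum for a uniform linear array can be sketched as follows; this is a textbook illustration on synthetic data, not the authors' system, and the array size, element spacing, and noise level are assumed.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """MUSIC pseudo-spectrum for a uniform linear array.

    X         : (n_antennas, n_snapshots) complex baseband snapshots
    n_sources : assumed number of impinging signals
    d         : element spacing in wavelengths
    """
    n_ant = X.shape[0]
    R = X @ X.conj().T / X.shape[1]            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvecs[:, : n_ant - n_sources]       # noise subspace
    P = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_ant) * np.sin(theta))
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(P)

# Synthetic example: one source at +20 degrees, 4-element array, light noise.
rng = np.random.default_rng(1)
n_ant, n_snap, theta0 = 4, 200, np.deg2rad(20)
a0 = np.exp(-2j * np.pi * 0.5 * np.arange(n_ant) * np.sin(theta0))
s = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
X = np.outer(a0, s) + 0.1 * (rng.standard_normal((n_ant, n_snap))
                             + 1j * rng.standard_normal((n_ant, n_snap)))
angles, P = music_spectrum(X, n_sources=1)
print("estimated DOA (deg):", angles[np.argmax(P)])
```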

  18. Moiré deflectometry-based position detection for optical tweezers.

    PubMed

    Khorshad, Ali Akbar; Reihani, S Nader S; Tavassoly, Mohammad Taghi

    2017-09-01

    Optical tweezers have proven to be indispensable tools for pico-Newton range force spectroscopy. A quadrant photodiode (QPD) positioned at the back focal plane of an optical tweezers' condenser is commonly used for locating the trapped object. In this Letter, for the first time, to the best of our knowledge, we introduce a moiré pattern-based detection method for optical tweezers. We show, both theoretically and experimentally, that this detection method could provide considerably better position sensitivity compared to the commonly used detection systems. For instance, position sensitivity for a trapped 2.17 μm polystyrene bead is shown to be 71% better than the commonly used QPD-based detection method. Our theoretical and experimental results are in good agreement.

  19. A comparison of earthquake backprojection imaging methods for dense local arrays

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Michaelides, M.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Brown, L. D.; Quiros, D. A.

    2018-03-01

    Backprojection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. While backprojection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed and simplified to overcome imaging challenges. Real data issues include aliased station spacing, inadequate array aperture, inaccurate velocity model, low signal-to-noise ratio, large noise bursts and varying waveform polarity. We compare the performance of backprojection with four previously used data pre-processing methods: raw waveform, envelope, short-term averaging/long-term averaging and kurtosis. Our primary goal is to detect and locate events smaller than noise by stacking prior to detection to improve the signal-to-noise ratio. The objective is to identify an optimized strategy for automated imaging that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and for location, has the best spatial resolution of the source images, preserves magnitude, and considers computational cost. Imaging method performance is assessed using a real aftershock data set recorded by the dense AIDA array following the 2011 Virginia earthquake. Our comparisons show that raw-waveform backprojection provides the best spatial resolution, preserves magnitude and boosts signal to detect events smaller than noise, but is most sensitive to velocity error, polarity error and noise bursts. On the other hand, the other methods avoid polarity error and reduce sensitivity to velocity error, but sacrifice spatial resolution and cannot effectively reduce noise by stacking. Of these, only kurtosis is insensitive to large noise bursts while being as efficient as the raw-waveform method to lower the detection threshold; however, it does not preserve the magnitude information. For automatic detection and location of events in a large data set, we therefore recommend backprojecting kurtosis waveforms, followed by a second pass on the detected events using noise-filtered raw waveforms to achieve the best of all criteria.
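
    As a rough sketch of the kind of pre-processing compared above, the code below forms a sliding-window kurtosis characteristic function for each trace and stacks the traces after shifting by predicted travel times to one candidate source; the window length, sampling interval, and travel times are assumed inputs, and this is not the authors' implementation.

```python
import numpy as np
from scipy.stats import kurtosis

def sliding_kurtosis(trace, win=50):
    """Sliding-window kurtosis characteristic function; impulsive onsets
    produce sharp peaks even when the arrival is buried in noise."""
    cf = np.zeros(len(trace))
    for i in range(win, len(trace)):
        cf[i] = kurtosis(trace[i - win:i])
    return cf

def backproject(traces, travel_times_s, dt):
    """Stack characteristic functions after shifting each trace by its
    predicted travel time to one candidate source location; repeating this
    over a grid of candidates and picking the stack maximum gives the
    detection and location."""
    n = min(len(tr) for tr in traces)
    stack = np.zeros(n)
    for tr, tt in zip(traces, travel_times_s):
        cf = sliding_kurtosis(np.asarray(tr, dtype=float))
        shift = int(round(tt / dt))
        # Align the predicted arrival back to the candidate origin time
        # (wrap-around from np.roll is ignored for brevity).
        stack += np.roll(cf[:n], -shift)
    return stack / len(traces)
```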

  20. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Templeton, D C; Harris, D B

    The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.

  1. Study of moving object detecting and tracking algorithm for video surveillance system

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhang, Rongfu

    2010-10-01

    This paper describes a specific process for detecting and tracking moving targets in video surveillance. Obtaining a high-quality background is the key to differential target detection in video surveillance. The paper builds a clean background using a block segmentation method and detects moving targets by background differencing; after a series of processing steps, a more complete object can be extracted from the original image and then located with its smallest bounding rectangle. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman filter model based on template matching is proposed. Using the predictive and estimation capabilities of the Kalman filter, the center of the smallest bounding rectangle is taken as the predicted value of the position where the target may appear at the next moment; template matching is then performed in a region centered on this position, and the best matching center is determined by computing the cross-correlation similarity between the current image and the reference image. Because this narrows the search area, the searching time is reduced and fast tracking is achieved.
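
    A minimal OpenCV sketch of the detection stage described above (background differencing followed by the smallest bounding rectangle); it assumes a clean background frame is already available and omits the Kalman-filter/template-matching tracking stage.

```python
import cv2
import numpy as np

def detect_moving_object(frame_bgr, background_bgr, thresh=30):
    """Locate a moving object by background differencing and return the
    smallest axis-aligned bounding rectangle around the largest blob."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Morphological opening removes isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    return cv2.boundingRect(largest)  # (x, y, w, h)
```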

  2. Spectral saliency via automatic adaptive amplitude spectrum analysis

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Dai, Jialun; Zhu, Yafei; Zheng, Haiyong; Qiao, Xiaoyan

    2016-03-01

    Suppressing nonsalient patterns by smoothing the amplitude spectrum at an appropriate scale has been shown to effectively detect the visual saliency in the frequency domain. Different filter scales are required for different types of salient objects. We observe that the optimal scale for smoothing amplitude spectrum shares a specific relation with the size of the salient region. Based on this observation and the bottom-up saliency detection characterized by spectrum scale-space analysis for natural images, we propose to detect visual saliency, especially with salient objects of different sizes and locations via automatic adaptive amplitude spectrum analysis. We not only provide a new criterion for automatic optimal scale selection but also reserve the saliency maps corresponding to different salient objects with meaningful saliency information by adaptive weighted combination. The performance of quantitative and qualitative comparisons is evaluated by three different kinds of metrics on the four most widely used datasets and one up-to-date large-scale dataset. The experimental results validate that our method outperforms the existing state-of-the-art saliency models for predicting human eye fixations in terms of accuracy and robustness.
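
    To make the frequency-domain idea concrete, here is a generic single-scale sketch in which a fixed Gaussian smoothing of the log-amplitude spectrum stands in for the paper's automatic adaptive scale selection; the smoothing scales are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spectral_saliency(gray_image, sigma_spectrum=3.0, sigma_map=8.0):
    """Frequency-domain saliency: suppress smooth amplitude-spectrum structure
    (repeated, non-salient patterns) and reconstruct from the residual."""
    img = np.asarray(gray_image, dtype=float)
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-12)
    phase = np.angle(F)
    # Residual between the log-amplitude spectrum and its smoothed version.
    residual = log_amp - gaussian_filter(log_amp, sigma_spectrum)
    recon = np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))
    saliency = np.abs(recon) ** 2
    return gaussian_filter(saliency, sigma_map)  # smoothed saliency map
```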

  3. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

    Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
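
    For reference, the sensitivity measure d′ reported above is the difference between the z-transformed hit and false-alarm rates; a small illustrative calculation with made-up rates:

```python
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Hypothetical rates: a cued condition with more hits than an uncued one.
print(d_prime(0.80, 0.20))  # ~1.68
print(d_prime(0.65, 0.20))  # ~1.23
```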

  4. Gauging the Potential of Socially Critical Environmental Education (EE): Examining Local Environmental Problems through Children's Perspective

    ERIC Educational Resources Information Center

    Tsoubaris, Dimitris; Georgopoulos, Aleksandros

    2013-01-01

    The objective of this qualitative research work is to detect the needs, aspirations and feelings of pupils experiencing local environmental problems and elaborate them through the prism of a socially critical educational approach. Semi-structured focus group interviews are used as a research method applied to four primary schools located near…

  5. Reduced-Order Modeling and Wavelet Analysis of Turbofan Engine Structural Response Due to Foreign Object Damage (FOD) Events

    NASA Technical Reports Server (NTRS)

    Turso, James; Lawrence, Charles; Litt, Jonathan

    2004-01-01

    The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.

  6. Reduced-Order Modeling and Wavelet Analysis of Turbofan Engine Structural Response Due to Foreign Object Damage "FOD" Events

    NASA Technical Reports Server (NTRS)

    Turso, James A.; Lawrence, Charles; Litt, Jonathan S.

    2007-01-01

    The development of a wavelet-based feature extraction technique specifically targeting FOD-event induced vibration signal changes in gas turbine engines is described. The technique performs wavelet analysis of accelerometer signals from specified locations on the engine and is shown to be robust in the presence of significant process and sensor noise. It is envisioned that the technique will be combined with Kalman filter thermal/ health parameter estimation for FOD-event detection via information fusion from these (and perhaps other) sources. Due to the lack of high-frequency FOD-event test data in the open literature, a reduced-order turbofan structural model (ROM) was synthesized from a finite-element model modal analysis to support the investigation. In addition to providing test data for algorithm development, the ROM is used to determine the optimal sensor location for FOD-event detection. In the presence of significant noise, precise location of the FOD event in time was obtained using the developed wavelet-based feature.

  7. IMPROVED CAPABILITIES FOR SITING WIND FARMS AND MITIGATING IMPACTS ON RADAR OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiswell, S.

    2010-01-15

    The development of efficient wind energy production involves challenges in technology and interoperability with other systems critical to the national mission. Wind turbines impact radar measurements as a result of their large reflectivity cross section as well as through the Doppler phase shift of their rotating blades. Wind farms can interfere with operational radar in multiple contexts, with degradation impacts on: weather detection such as tornado location, wind shear, and precipitation monitoring; tracking of airplanes, where air traffic control software can lose the tracks of aircraft; and identification of other low flying targets, where a wind farm located close to a border might create a dead zone for detecting intruding objects. Objects in the path of an electromagnetic wave affect its propagation characteristics. This includes actual blockage of wave propagation by large individual objects and interference in wave continuity due to diffraction of the beam by individual or multiple objects. As an evolving industry, and the fastest growing segment of the energy sector, wind power is poised to make significant contributions to future energy generation requirements. The ability to develop comprehensive strategies for designing wind turbine locations that are mutually beneficial to both the wind industry, which is dependent on production, and the radar sites on which the nation relies, is critical to establishing reliable and secure wind energy. The mission needs of the Department of Homeland Security (DHS), Department of Defense (DOD), Federal Aviation Administration (FAA), and National Oceanographic and Atmospheric Administration (NOAA) dictate that the nation's radar systems remain uninhibited, to the maximum extent possible, by man-made obstructions; however, wind turbines can and do impact the surveillance footprint for monitoring airspace both for national defense and for critical weather conditions which can impact life and property. As a result, a number of potential wind power locations have been contested on the basis of radar line of sight. Radar line of sight is dependent on local topography, and varies with the atmospheric refractive index, which is affected by weather and geographic conditions.

  8. Blindness to a simultaneous change of all elements in a scene, unless there is a change in summary statistics.

    PubMed

    Saiki, Jun; Holcombe, Alex O

    2012-03-06

    Sudden change of every object in a display is typically conspicuous. We find however that in the presence of a secondary task, with a display of moving dots, it can be difficult to detect a sudden change in color of all the dots. A field of 200 dots, half red and half green, half moving rightward and half moving leftward, gave the appearance of two surfaces. When all 200 dots simultaneously switched color between red and green, performance in detecting the switch was very poor. A key display characteristic was that the color proportions on each surface (summary statistics) were not affected by the color switch. When the color switch is accompanied by a change in these summary statistics, people perform well in detecting the switch, suggesting that the secondary task does not disrupt the availability of this statistical information. These findings suggest that when the change is missed, the old and new colors were represented, but the color-location pattern (binding of colors to locations) was not represented or not compared. Even after extended viewing, changes to the individual color-location pattern are not available, suggesting that the feeling of seeing these details is misleading.

  9. The robot's eyes - Stereo vision system for automated scene analysis

    NASA Technical Reports Server (NTRS)

    Williams, D. S.

    1977-01-01

    Attention is given to the robot stereo vision system, which maintains the image produced by solid-state detector television cameras in a dynamic random access memory called RAPID. The imaging hardware consists of sensors (two solid-state image arrays using a charge injection technique), a video-rate analog-to-digital converter, the RAPID memory, various types of computer-controlled displays, and preprocessing equipment (for reflexive actions, processing aids, and object detection). The software is aimed at locating objects and assessing traversability. An object-tracking algorithm is discussed, and it is noted that tracking speed is in the 50-75 pixels/s range.

  10. Object, spatial and social recognition testing in a single test paradigm.

    PubMed

    Lian, Bin; Gao, Jun; Sui, Nan; Feng, Tingyong; Li, Ming

    2018-07-01

    Animals have the ability to process information about an object or a conspecific's physical features and location, and alter its behavior when such information is updated. In the laboratory, the object, spatial and social recognition are often studied in separate tasks, making them unsuitable to study the potential dissociations and interactions among various types of recognition memories. The present study introduced a single paradigm to detect the object and spatial recognition, and social recognition of a familiar and novel conspecific. Specifically, male and female Sprague-Dawley adult (>75 days old) or preadolescent (25-28 days old) rats were tested with two objects and one social partner in an open-field arena for four 10-min sessions with a 20-min inter-session interval. After the first sample session, a new object replaced one of the sampled objects in the second session, and the location of one of the old objects was changed in the third session. Finally, a new social partner was introduced in the fourth session and replaced the familiar one. Exploration time with each stimulus was recorded and measures for the three recognitions were calculated based on the discrimination ratio. Overall results show that adult and preadolescent male and female rats spent more time exploring the social partner than the objects, showing a clear preference for social stimulus over nonsocial one. They also did not differ in their abilities to discriminate a new object, a new location and a new social partner from a familiar one, and to recognize a familiar conspecific. Acute administration of MK-801 (a NMDA receptor antagonist, 0.025 and 0.10 mg/kg, i.p.) after the sample session dose-dependently reduced the total time spent on exploring the social partner and objects in the adult rats, and had a significantly larger effect in the females than in the males. MK-801 also dose-dependently increased motor activity. However, it did not alter the object, spatial and social recognitions. These findings indicate that the new triple recognition paradigm is capable of recording the object, spatial location and social recognition together and revealing potential sex and age differences. This paradigm is also useful for the study of object and social exploration concurrently and can be used to evaluate cognition-altering drugs in various stages of recognition memories. Copyright © 2018. Published by Elsevier Inc.

  11. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments.

    PubMed

    Ramon Soria, Pablo; Arrue, Begoña C; Ollero, Anibal

    2017-01-07

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors.
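
    A generic sketch of the feature-model idea (not the authors' system): ORB descriptors learned offline from a reference image of the object are matched in a live frame, and the object is localized with a RANSAC homography; stereo disparity at the inlier keypoints would then give its 3D position. The thresholds and parameter values are assumptions.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def learn_model(model_img_gray):
    """Offline stage: keypoints and descriptors of the target object."""
    return orb.detectAndCompute(model_img_gray, None)

def locate_object(model, scene_gray, min_inliers=10):
    """Online stage: match model features in the scene and fit a homography."""
    kp_m, des_m = model
    kp_s, des_s = orb.detectAndCompute(scene_gray, None)
    if des_s is None or des_m is None:
        return None
    matches = bf.match(des_m, des_s)
    if len(matches) < min_inliers:
        return None
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or int(inliers.sum()) < min_inliers:
        return None
    return H  # maps model-image coordinates into the scene image
```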

  12. Proto-object categorisation and local gist vision using low-level spatial features.

    PubMed

    Martins, Jaime A; Rodrigues, J M F; du Buf, J M H

    2015-09-01

    Object categorisation is a research area with significant challenges, especially in conditions with bad lighting, occlusions, different poses and similar objects. This makes systems that rely on precise information unable to perform efficiently, like a robotic arm that needs to know which objects it can reach. We propose a biologically inspired object detection and categorisation framework that relies on robust low-level object shape. Using only edge conspicuity and disparity features for scene figure-ground segregation and object categorisation, a trained neural network classifier can quickly categorise broad object families and consequently bootstrap a low-level scene gist system. We argue that similar processing is possibly located in the parietal pathway leading to the LIP cortex and, via areas V5/MT and MST, providing useful information to the superior colliculus for eye and head control. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Is that a belt or a snake? object attentional selection affects the early stages of visual sensory processing

    PubMed Central

    2012-01-01

    Background There is at present growing empirical evidence, deriving from different lines of ERP research, that, contrary to what was previously observed, the earliest sensory visual response, known as the C1 component or P/N80, generated within the striate cortex, might be modulated by selective attention to visual stimulus features. Up to now, evidence of this modulation has been related to spatial location and simple features such as spatial frequency, luminance, and texture. Additionally, neurophysiological conditions, such as emotion, vigilance, the reflexive or voluntary nature of input attentional selection, and workload have also been related to C1 modulations, although at least the workload status has received controversial indications. No information is instead available, at present, for object attentional selection. Methods In this study object- and space-based attention mechanisms were conjointly investigated by presenting complex, familiar shapes of artefacts and animals, intermixed with distracters, in different tasks requiring the selection of a relevant target-category within a relevant spatial location, while ignoring the other shape categories within this location, and, overall, all the categories at an irrelevant location. EEG was recorded from 30 scalp electrode sites in 21 right-handed participants. Results and Conclusions ERP findings showed that visual processing was modulated by both shape- and location-relevance per se, beginning separately at the latency of the early phase of a precocious negativity (60-80 ms) at mesial scalp sites consistent with the C1 component, and a positivity at more lateral sites. The data also showed that the attentional modulation progressed conjointly at the latency of the subsequent P1 (100-120 ms) and N1 (120-180 ms), as well as later-latency components. These findings support the views that (1) V1 may be precociously modulated by direct top-down influences, and participates in object, besides simple features, attentional selection; (2) object spatial and non-spatial feature selection might begin with an early, parallel detection of a target object in the visual field, followed by the progressive focusing of spatial attention onto the location of an actual target for its identification, somehow in line with neural mechanisms reported in the literature as "object-based space selection", or with those proposed for visual search. PMID:22300540

  14. Study of optical microvariability in the blazar 1ES1011+496

    NASA Astrophysics Data System (ADS)

    Sosa, M. S.; von Essen, C.; Cellone, S. A.; Andruchow, I.; Schmitt, J. H. M. M.

    We carried out a study of the photometric variability of the blazar 1ES1011+496 using the 1.20 m Oskar Lühning telescope located at the Hamburger Sternwarte, Germany. This object has been detected at high energies (above 200 GeV), so it is of interest to characterize its behavior in the optical range. We obtained light curves in the B, V and R bands through differential photometry, with a time resolution of 15 minutes over 8 nights. We did not detect intra-night variability, but we detected marginally significant variability on temporal scales of a few days.

  15. The use of geoscience methods for terrestrial forensic searches

    NASA Astrophysics Data System (ADS)

    Pringle, J. K.; Ruffell, A.; Jervis, J. R.; Donnelly, L.; McKinley, J.; Hansen, J.; Morgan, R.; Pirrie, D.; Harrison, M.

    2012-08-01

    Geoscience methods are increasingly being utilised in criminal, environmental and humanitarian forensic investigations, and the use of such methods is supported by a growing body of experimental and theoretical research. Geoscience search techniques can complement traditional methodologies in the search for buried objects, including clandestine graves, weapons, explosives, drugs, illegal weapons, hazardous waste and vehicles. This paper details recent advances in search and detection methods, with case studies and reviews. Relevant examples are given, together with a generalised workflow for search and suggested detection technique(s) table. Forensic geoscience techniques are continuing to rapidly evolve to assist search investigators to detect hitherto difficult to locate forensic targets.

  16. Different effects of color-based and location-based selection on visual working memory.

    PubMed

    Li, Qi; Saiki, Jun

    2015-02-01

    In the present study, we investigated how feature- and location-based selection influences visual working memory (VWM) encoding and maintenance. In Experiment 1, cue type (color, location) and cue timing (precue, retro-cue) were manipulated in a change detection task. The stimuli were color-location conjunction objects, and binding memory was tested. We found a significantly greater effect for color precues than for either color retro-cues or location precues, but no difference between location pre- and retro-cues, consistent with previous studies (e.g., Griffin & Nobre in Journal of Cognitive Neuroscience, 15, 1176-1194, 2003). We also found no difference between location and color retro-cues. Experiment 2 replicated the color precue advantage with more complex color-shape-location conjunction objects. Only one retro-cue effect was different from that in Experiment 1: Color retro-cues were significantly less effective than location retro-cues in Experiment 2, which may relate to a structural property of multidimensional VWM representations. In Experiment 3, a visual search task was used, and the result of a greater location than color precue effect suggests that the color precue advantage in a memory task is related to the modulation of VWM encoding rather than of sensation and perception. Experiment 4, using a task that required only memory for individual features but not for feature bindings, further confirmed that the color precue advantage is specific to binding memory. Together, these findings reveal new aspects of the interaction between attention and VWM and provide potentially important implications for the structural properties of VWM representations.

  17. Chemical Sensing for Buried Landmines - Fundamental Processes Influencing Trace Chemical Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    PHELAN, JAMES M.

    2002-05-01

    Mine detection dogs have a demonstrated capability to locate hidden objects by trace chemical detection. Because of this capability, demining activities frequently employ mine detection dogs to locate individual buried landmines or for area reduction. The conditions appropriate for use of mine detection dogs are only beginning to emerge through diligent research that combines dog selection/training, the environmental conditions that impact landmine signature chemical vapors, and vapor sensing performance capability and reliability. This report seeks to address the fundamental soil-chemical interactions, driven by local weather history, that influence the availability of chemical for trace chemical detection. The processes evaluated include: landmine chemical emissions to the soil, chemical distribution in soils, chemical degradation in soils, and weather and chemical transport in soils. Simulation modeling is presented as a method to evaluate the complex interdependencies among these various processes and to establish conditions appropriate for trace chemical detection. Results from chemical analyses on soil samples obtained adjacent to landmines are presented and demonstrate the ultra-trace nature of these residues. Lastly, initial measurements of the vapor sensing performance of mine detection dogs demonstrate the extreme sensitivity of dogs in sensing landmine signature chemicals; however, reliability at these ultra-trace vapor concentrations still needs to be determined. Through this compilation, additional work is suggested that will fill in data gaps to improve the utility of trace chemical detection.

  18. Water Detection Based on Object Reflections

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo L.; Matthies, Larry H.

    2012-01-01

    Water bodies are challenging terrain hazards for terrestrial unmanned ground vehicles (UGVs) for several reasons. Traversing through deep water bodies could cause costly damage to the electronics of UGVs. Additionally, a UGV that is either broken down due to water damage or becomes stuck in a water body during an autonomous operation will require rescue, potentially drawing critical resources away from the primary operation and increasing the operation cost. Thus, robust water detection is a critical perception requirement for UGV autonomous navigation. One of the properties useful for detecting still water bodies is that their surface acts as a horizontal mirror at high incidence angles. Still water bodies in wide-open areas can be detected by geometrically locating the exact pixels in the sky that are reflecting on candidate water pixels on the ground, predicting if ground pixels are water based on color similarity to the sky and local terrain features. But in cluttered areas where reflections of objects in the background dominate the appearance of the surface of still water bodies, detection based on sky reflections is of marginal value. Specifically, this software attempts to solve the problem of detecting still water bodies on cross-country terrain in cluttered areas at low cost.

  19. A Calibrated H-alpha Index to Monitor Emission Line Objects

    NASA Astrophysics Data System (ADS)

    Hintz, Eric G.; Joner, M. D.

    2013-06-01

    Over an 8 year period we have developed a calibrated H-alpha index, similar to the more traditional H-beta index, based on spectrophotometric observations (Joner & Hintz, 2013) from the DAO 1.2-m Telescope. While developing the calibration for this filter set we also obtained spectra of a number of emission line systems such as high mass x-ray binaries (HMXB), Be stars, and young stellar objects. From this work we find that the main sequence stars follow a very tight relation in the H-alpha/H-beta plane and that the emission line objects are easily detected. We will present the overall locations of these emission line objects. We will also present the changes experienced by these objects over the years of the project.

  20. Multidisciplinary unmanned technology teammate (MUTT)

    NASA Astrophysics Data System (ADS)

    Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark

    2013-01-01

    The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, including clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator who moved from the starting location to a new location. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close to natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant and relevant to real world applications.

  1. Micro-crack detection in CFRP laminates using coda wave NDE

    NASA Astrophysics Data System (ADS)

    Dayal, Vinay; Barnard, Dan; Livings, Richard

    2018-04-01

    Coda waves, or the diffuse field, have been touted as an NDE method that does not require the damage to be in the path of the ultrasound. The object is insonified with ultrasound and, instead of capturing the first or second arrival, the waves are allowed to bounce multiple times. This aspect is very important in structural health monitoring (SHM), where the potential damage development location is unknown. Researchers have used coda waves in the interrogation of seismic damage and metallic materials. In this work we have applied the technique to composite materials, and present the results herein. The coda wave and acoustic emission signals are recorded simultaneously and corroborated. The development of small incipient damage in the form of micro-cracks, and their detection, is the objective of this work.

  2. Leakage detection in galvanized iron pipelines using ensemble empirical mode decomposition analysis

    NASA Astrophysics Data System (ADS)

    Amin, Makeen; Ghazali, M. Fairusham

    2015-05-01

    There are a number of possible approaches to detecting leaks. Some leaks are simply noticeable when the liquid or water appears on the surface. However, many leaks do not find their way to the surface, and their existence has to be checked by analysis of fluid flow in the pipeline. The first step is to determine the approximate position of the leak. This can be done by isolating sections of the mains in turn and noting which section causes a drop in the flow. The next approach is to use sensors to locate leaks, involving strain gauge pressure transducers and piezoelectric sensors; the occurrence of a leak and its exact location in the pipeline can be determined using specific methods, namely the acoustic leak detection method and the transient method. The objective is to utilize signal processing techniques to analyse leaks in the pipeline. To this end, an ensemble empirical mode decomposition (EEMD) method is applied as the analysis method to collect and analyse the data.
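
    A sketch of the analysis step, assuming the third-party PyEMD package provides the EEMD implementation; the sampling rate and the synthetic pressure trace are made up, and inspection of the resulting IMFs for the leak-induced transient is left to the reader.

```python
import numpy as np
from PyEMD import EEMD  # assumes the PyEMD package is installed

def decompose_pressure_signal(signal, fs):
    """Ensemble empirical mode decomposition of a pipeline pressure trace.

    Returns the intrinsic mode functions (IMFs); leak-induced transients
    typically show up in the higher-frequency (low-index) IMFs.
    """
    t = np.arange(len(signal)) / fs
    eemd = EEMD(trials=100)  # number of noise-assisted realizations
    return eemd.eemd(np.asarray(signal, dtype=float), t)

# Example with a synthetic transient riding on a slow oscillation.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
pressure = np.sin(2 * np.pi * 2 * t) + (t > 1.0) * np.exp(-(t - 1.0) * 20) * 0.5
imfs = decompose_pressure_signal(pressure, fs)
print("number of IMFs:", imfs.shape[0])
```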

  3. Virtual landmarks

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.

    2017-03-01

    Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along the pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
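
    The recursive idea can be sketched compactly: take the centroid of the object's points as a landmark, split the points along the first principal axis, and recurse on the two halves. The recursion depth and the synthetic point cloud below are assumptions; this illustrates the principle only and is not the authors' code.

```python
import numpy as np

def virtual_landmarks(points, depth=2):
    """Recursively subdivide an object's point set with PCA and collect the
    region centroids as (virtual) landmarks.

    points : (N, 3) coordinates of object voxels
    depth  : levels of recursion (depth 0 -> just the global centroid)
    """
    centroid = points.mean(axis=0)
    landmarks = [centroid]
    if depth == 0 or len(points) < 2:
        return landmarks
    # First principal axis of the region via SVD of the centered points.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    proj = (points - centroid) @ vt[0]
    for half in (points[proj < 0], points[proj >= 0]):
        if len(half) > 0:
            landmarks.extend(virtual_landmarks(half, depth - 1))
    return landmarks

# Example: landmarks of a noisy ellipsoidal blob.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3)) * np.array([30.0, 10.0, 5.0])
print(np.round(virtual_landmarks(pts, depth=1), 2))
```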

  4. Intelligent hypertext manual development for the Space Shuttle hazardous gas detection system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Hoyt, W. Andes

    1989-01-01

    This research is designed to utilize artificial intelligence (AI) technology to increase the efficiency of personnel involved with monitoring the space shuttle hazardous gas detection systems at the Marshall Space Flight Center. The objective is to create a computerized service manual in the form of a hypertext and expert system which stores experts' knowledge and experience. The resulting Intelligent Manual will assist the user in interpreting data in a timely manner, in identifying possible faults, in locating the applicable documentation efficiently, in training inexperienced personnel effectively, and in updating the manual frequently as required.

  5. Hypervelocity Impact (HVI). Volume 7; WLE High Fidelity Specimen RCC16R

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Target RCC16R was to study hypervelocity impacts through the reinforced carbon-carbon (RCC) panels of the Wing Leading Edge. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  6. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes reach within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system including area LIP is modeled by graph-based visual saliency, and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location subtending approximately 9° corresponding to the receptive fields of IT neurons is then passed through a four layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances, allowing the model to reach approximately 90% correct object recognition for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyses the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619

  7. Forgetting What Was Where: The Fragility of Object-Location Binding

    PubMed Central

    Pertzov, Yoni; Dong, Mia Yuan; Peich, Muy-Cheng; Husain, Masud

    2012-01-01

    Although we frequently take advantage of memory for objects' locations in everyday life, how an object's identity is bound correctly to its location remains unclear. Here we examine how information about object identity, location and, crucially, object-location associations are differentially susceptible to forgetting over variable retention intervals and memory load. In our task, participants relocated objects to their remembered locations using a touchscreen. When participants mislocalized objects, their reports were clustered around the locations of other objects in the array, rather than occurring randomly. These 'swap' errors could not be attributed to simple failure to remember either the identity or location of the objects, but rather appeared to arise from failure to bind object identity and location in memory. Moreover, such binding failures significantly contributed to the decline in localization performance over retention time. We conclude that when objects are forgotten they do not disappear completely from memory; rather, it is the links between identity and location that are prone to be broken over time. PMID:23118956

  8. Design and Implementation of a C++ Multithreaded Operational Tool for the Generation of Detection Time Grids in 2D for P- and S-waves taking into Consideration Seismic Network Topology and Data Latency

    NASA Astrophysics Data System (ADS)

    Sardina, V.

    2017-12-01

    The Pacific Tsunami Warning Center's round-the-clock operations rely on the rapid determination of the source parameters of earthquakes occurring around the world. To rapidly estimate source parameters such as earthquake location and magnitude, the PTWC analyzes data streams ingested in near-real time from a global network of more than 700 seismic stations. Both the density of this network and the data latency of its member stations at any given time have a direct impact on the speed at which the PTWC scientists on duty can locate an earthquake and estimate its magnitude. In this context, it is operationally advantageous to be able to assess how quickly the PTWC operational system can reasonably detect and locate an earthquake, estimate its magnitude, and send the corresponding tsunami message whenever appropriate. For this purpose, we designed and implemented a multithreaded C++ software package to generate detection time grids for both P- and S-waves after taking into consideration the seismic network topology and the data latency of its member stations. We first encapsulate all the parameters of interest at a given geographic point, such as geographic coordinates, P- and S-wave detection times in at least a minimum number of stations, and maximum allowed azimuth gap, into a DetectionTimePoint class. Then we apply composition and inheritance to define a DetectionTimeLine class that handles a vector of DetectionTimePoint objects along a given latitude. A DetectionTimesGrid class in turn handles the dynamic allocation of new TravelTimeLine objects and assigns the calculation of the corresponding P- and S-wave detection times to new threads. Finally, we added a GUI that allows the user to interactively set all initial calculation parameters and output options. Initial testing on an eight-core system shows that generation of a global 2D grid at 1 degree resolution, with detection on at least 5 stations and no azimuth gap restriction, takes under 25 seconds. Under the same initial conditions, generation of a 2D grid at 0.1 degree resolution (2.6 million grid points) takes no more than 22 minutes. These preliminary results show a significant gain in grid generation speed when compared to other implementations via either scripts or previous versions of the C++ code that did not implement multithreading.
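
    A much-simplified, single-threaded Python sketch of the grid computation (the paper describes a multithreaded C++ tool): for each grid point, the detection time is the travel time plus data latency of the Nth-fastest station. A constant apparent P velocity and made-up station data stand in for real travel-time tables.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometres."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat = p2 - p1
    dlon = np.radians(lon2 - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def detection_time(lat, lon, sta_lat, sta_lon, latency_s,
                   n_stations=5, v_p_km_s=8.0):
    """Time until n_stations have both received and delivered the P wave.

    A constant apparent P velocity stands in for real travel-time tables;
    latency_s is each station's data latency in seconds.
    """
    dist = great_circle_km(lat, lon, sta_lat, sta_lon)
    arrival = dist / v_p_km_s + latency_s
    return np.sort(arrival)[n_stations - 1]

# Made-up network: 20 stations with random locations and latencies.
rng = np.random.default_rng(42)
sta_lat = rng.uniform(-60, 60, 20)
sta_lon = rng.uniform(-180, 180, 20)
latency = rng.uniform(2, 30, 20)
print(detection_time(19.7, -155.1, sta_lat, sta_lon, latency))
```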

  9. Sex and spatial position effects on object location memory following intentional learning of object identities.

    PubMed

    Alexander, Gerianne M; Packard, Mark G; Peterson, Bradley S

    2002-01-01

    Memory for object location relative both to veridical center (left versus right visual hemispace) and to eccentricity (central versus peripheral objects) was measured in 26 males and 25 females using the Silverman and Eals Location Memory Task. A subset of participants (17 males and 13 females) also completed a measure of implicit learning, the mirror-tracing task. No sex differences were observed in memory for object identities. Further, in both sexes, memory for object locations was better for peripherally located objects than for centrally located objects. In contrast to these similarities in female and male task performance, females but not males showed better recovery of object locations in the right compared to the left visual hemispace. Moreover, memory for object locations in the right hemispace was associated with mirror-tracing performance in women but not in men. Together, these data suggest that the processing of object features and object identification in the left cerebral hemisphere may include processing of spatial information that may contribute to superior object location memory in females relative to males.

  10. Detection and mapping of mountain pine beetle red attack: Matching information needs with appropriate remotely sensed data

    Treesearch

    M. A. Wulder; J. C. White; B. J. Bentz

    2005-01-01

    Estimates of the location and extent of the red attack stage of mountain pine beetle (Dendroctonus ponderosae Hopkins) infestations are critical for forest management. The degree of spatial and temporal precision required for these estimates varies according to the management objectives and the nature of the infestation. This paper outlines a hierarchy of information...

  11. Survey of Collision Avoidance and Ranging Sensors for Mobile Robots. Revision 1

    DTIC Science & Technology

    1992-12-01

    (Excerpted figure captions and text) Diagram of the Hamamatsu Range-Finder Chip Set, which applies the principle of triangulation (Hamamatsu Corporation, 1990). The Sensus 300 configured for 360-degree coverage (Courtesy Transitions Research Company). ... applied to the detection of metal objects located at short range; typical inductive sensors generate an oscillatory radio-frequency (RF) field around a ...

  12. Tracking Objects with Networked Scattered Directional Sensors

    NASA Astrophysics Data System (ADS)

    Plarre, Kurt; Kumar, P. R.

    2007-12-01

    We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when the object crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call the "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call the "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.
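
    For intuition, each sensor's only measurement is the instant at which an object's straight-line path crosses its line of sight. A minimal sketch of that crossing-time geometry follows; all coordinates and names are invented for the example.

      // Sketch: detection time of a constant-velocity object crossing a
      // sensor's line of sight. Numbers are illustrative only.
      #include <iostream>

      struct Vec2 { double x, y; };
      double cross(Vec2 a, Vec2 b) { return a.x * b.y - a.y * b.x; }
      Vec2 sub(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }

      // Object: p(t) = p0 + t*v.  Sensor line: through q with direction d.
      // Crossing when the component of (p(t) - q) perpendicular to d vanishes:
      //   cross(p0 - q, d) + t * cross(v, d) = 0
      double crossingTime(Vec2 p0, Vec2 v, Vec2 q, Vec2 d) {
          double denom = cross(v, d);
          if (denom == 0.0) return -1.0;        // moving parallel to the line
          return -cross(sub(p0, q), d) / denom;
      }

      int main() {
          Vec2 p0{0.0, 5.0}, v{1.0, 0.0};       // object starts at (0,5), moves +x
          Vec2 q{10.0, 0.0}, d{0.0, 1.0};       // vertical sensor line at x = 10
          std::cout << "detected at t = " << crossingTime(p0, v, q, d) << "\n";
          return 0;                             // prints t = 10
      }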

  13. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    It is desirable to aid the pilot, improving safety and reducing pilot workload, by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.

  14. Acoustic Sensor Planning for Gunshot Location in National Parks: A Pareto Front Approach

    PubMed Central

    González-Castaño, Francisco Javier; Alonso, Javier Vales; Costa-Montenegro, Enrique; López-Matencio, Pablo; Vicente-Carrasco, Francisco; Parrado-García, Francisco J.; Gil-Castiñeira, Felipe; Costas-Rodríguez, Sergio

    2009-01-01

    In this paper, we propose a solution for gunshot location in national parks. In Spain there are agencies such as SEPRONA that fight against poaching with considerable success. The DiANa project, which is endorsed by Cabaneros National Park and the SEPRONA service, proposes a system to automatically detect and locate gunshots. This work presents its technical aspects related to network design and planning. The system consists of a network of acoustic sensors that locate gunshots by hyperbolic multi-lateration estimation. The differences in sound time arrivals allow the computation of a low error estimator of gunshot location. The accuracy of this method depends on tight sensor clock synchronization, which an ad-hoc time synchronization protocol provides. On the other hand, since the areas under surveillance are wide, and electric power is scarce, it is necessary to maximize detection coverage and minimize system cost at the same time. Therefore, sensor network planning has two targets, i.e., coverage and cost. We model planning as an unconstrained problem with two objective functions. We determine a set of candidate solutions of interest by combining a derivative-free descent method we have recently proposed with a Pareto front approach. The results are clearly superior to random seeding in a realistic simulation scenario. PMID:22303135
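
    A minimal sketch of the hyperbolic multilateration idea follows: given time differences of arrival (TDOA) relative to a reference sensor, a grid search picks the source position whose predicted TDOAs best match the observations. The sensor layout, grid spacing, and speed of sound are illustrative assumptions, not the DiANa planning code.

      // Sketch of gunshot location by hyperbolic multilateration: grid search
      // minimizing the mismatch between observed and predicted TDOAs.
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      struct Pt { double x, y; };
      const double c = 343.0;                      // speed of sound, m/s

      double dist(Pt a, Pt b) { return std::hypot(a.x - b.x, a.y - b.y); }

      // Sum of squared residuals between observed TDOAs (relative to sensor 0)
      // and the TDOAs predicted for a candidate source position.
      double residual(Pt src, const std::vector<Pt>& sensors,
                      const std::vector<double>& tdoa) {
          double r = 0.0, t0 = dist(src, sensors[0]) / c;
          for (std::size_t i = 1; i < sensors.size(); ++i) {
              double predicted = dist(src, sensors[i]) / c - t0;
              r += (predicted - tdoa[i]) * (predicted - tdoa[i]);
          }
          return r;
      }

      int main() {
          std::vector<Pt> sensors = {{0, 0}, {1000, 0}, {0, 1000}, {1000, 1000}};
          Pt trueSrc{320, 740};                    // ground truth for the demo
          std::vector<double> tdoa(sensors.size(), 0.0);
          for (std::size_t i = 1; i < sensors.size(); ++i)
              tdoa[i] = (dist(trueSrc, sensors[i]) - dist(trueSrc, sensors[0])) / c;

          Pt best{0, 0};
          double bestR = 1e300;
          for (double x = 0; x <= 1000; x += 5)    // 5 m search grid
              for (double y = 0; y <= 1000; y += 5) {
                  double r = residual({x, y}, sensors, tdoa);
                  if (r < bestR) { bestR = r; best = {x, y}; }
              }
          std::cout << "estimated source: (" << best.x << ", " << best.y << ")\n";
          return 0;
      }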

  15. Acoustic sensor planning for gunshot location in national parks: a pareto front approach.

    PubMed

    González-Castaño, Francisco Javier; Alonso, Javier Vales; Costa-Montenegro, Enrique; López-Matencio, Pablo; Vicente-Carrasco, Francisco; Parrado-García, Francisco J; Gil-Castiñeira, Felipe; Costas-Rodríguez, Sergio

    2009-01-01

    In this paper, we propose a solution for gunshot location in national parks. In Spain there are agencies such as SEPRONA that fight against poaching with considerable success. The DiANa project, which is endorsed by Cabaneros National Park and the SEPRONA service, proposes a system to automatically detect and locate gunshots. This work presents its technical aspects related to network design and planning. The system consists of a network of acoustic sensors that locate gunshots by hyperbolic multi-lateration estimation. The differences in sound time arrivals allow the computation of a low error estimator of gunshot location. The accuracy of this method depends on tight sensor clock synchronization, which an ad-hoc time synchronization protocol provides. On the other hand, since the areas under surveillance are wide, and electric power is scarce, it is necessary to maximize detection coverage and minimize system cost at the same time. Therefore, sensor network planning has two targets, i.e., coverage and cost. We model planning as an unconstrained problem with two objective functions. We determine a set of candidate solutions of interest by combining a derivative-free descent method we have recently proposed with a Pareto front approach. The results are clearly superior to random seeding in a realistic simulation scenario.

  16. Spatial autocorrelation of West Nile virus vector mosquito abundance in a seasonally wet suburban environment

    NASA Astrophysics Data System (ADS)

    Trawinski, P. R.; Mackay, D. S.

    2009-03-01

    The objective of this study is to quantify and model spatial dependence in mosquito vector populations and develop predictions for unsampled locations using geostatistics. Mosquito control program trap sites are often located too far apart to detect spatial dependence but the results show that integration of spatial data over time for Cx. pipiens-restuans and according to meteorological conditions for Ae. vexans enables spatial analysis of sparse sample data. This study shows that mosquito abundance is spatially correlated and that spatial dependence differs between Cx. pipiens-restuans and Ae. vexans mosquitoes.
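
    Spatial dependence of this kind is commonly summarized with an empirical semivariogram; the short sketch below computes one from made-up trap locations and counts. The bin width and data are illustrative, not the study's dataset.

      // Sketch: empirical semivariogram of point samples (e.g., mosquito
      // counts at trap locations), binned by separation distance.
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      struct Sample { double x, y, z; };           // location and observed value

      int main() {
          std::vector<Sample> traps = {
              {0, 0, 12}, {1, 0, 15}, {0, 1, 14}, {2, 2, 30},
              {3, 2, 28}, {2, 3, 33}, {5, 5, 7},  {6, 5, 9}};
          const double binWidth = 1.5;
          const int nBins = 5;
          std::vector<double> sumSq(nBins, 0.0);
          std::vector<int> count(nBins, 0);

          for (std::size_t i = 0; i < traps.size(); ++i)
              for (std::size_t j = i + 1; j < traps.size(); ++j) {
                  double h = std::hypot(traps[i].x - traps[j].x,
                                        traps[i].y - traps[j].y);
                  int bin = static_cast<int>(h / binWidth);
                  if (bin < nBins) {
                      double d = traps[i].z - traps[j].z;
                      sumSq[bin] += d * d;
                      ++count[bin];
                  }
              }
          // gamma(h) = sum of squared differences / (2 * number of pairs in bin)
          for (int b = 0; b < nBins; ++b)
              if (count[b] > 0)
                  std::cout << "lag ~" << (b + 0.5) * binWidth
                            << ": gamma = " << sumSq[b] / (2.0 * count[b])
                            << " (" << count[b] << " pairs)\n";
          return 0;
      }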

  17. Detection and Characterization of Stress Symptoms in Forest Vegetation

    NASA Technical Reports Server (NTRS)

    Heller, R. C.

    1971-01-01

    Techniques used at the Pacific Southwest Forest and Range Experiment Station to detect advanced and previsual symptoms of vegetative stress are discussed. Stresses caused by bark beetles in coniferous stands of timber are emphasized because beetles induce stress more rapidly than most other destructive agents. Bark beetles are also the most damaging forest insects in the United States. In the work on stress symptoms, there are two primary objectives: (1) to learn the best combination of films, scales, and filters to detect and locate injured trees from aircraft and spacecraft, and (2) to learn if stressed trees can be detected before visual symptoms of decline occur. Equipment and techniques used in a study of the epidemic of the Black Hills bark beetle are described.

  18. Detection of Delamination in Composite Beams Using Broadband Acoustic Emission Signatures

    NASA Technical Reports Server (NTRS)

    Okafor, A. C.; Chandrashekhara, K.; Jiang, Y. P.

    1996-01-01

    Delamination in composite structures may be caused by imperfections introduced during the manufacturing process or by impact loads from foreign objects during the operational life. There are some nondestructive evaluation methods to detect delamination in composite structures, such as x-radiography, ultrasonic testing, and thermal/infrared inspection. These methods are expensive and hard to use for on-line detection. Acoustic emission testing can monitor the material under test even in the presence of noise generated under load. It has been used extensively in proof-testing of fiberglass pressure vessels and beams. In the present work, experimental studies are conducted to investigate the use of broadband acoustic emission signatures to detect delaminations in composite beams. Glass/epoxy beam specimens with full-width, prescribed delamination sizes of 2 inches and 4 inches are investigated. The prescribed delamination is produced by inserting Teflon film between laminae during the fabrication of the composite laminate. The objective of this research is to develop a method for predicting delamination size and location in laminated composite beams by combining the smart materials concept with broadband AE analysis techniques. More specifically, a piezoceramic (PZT) patch is bonded on the surface of the composite beams and used as a pulser. The piezoceramic patch simulates the AE wave source as a 3-cycle, 50 kHz burst sine wave. One broadband AE sensor is fixed near the PZT patch to measure the AE wave near the AE source. A second broadband AE sensor, which is used as a receiver, is scanned along the composite beams in 0.25 inch steps to measure propagation of the AE wave along the composite beams. The acquired AE waveform is digitized and processed. Signal strength, signal energy, cross-correlation of AE waveforms, and tracking of a specific cycle of the AE waveforms are used to detect delamination size and location.
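
    The delay-estimation step can be illustrated with a plain cross-correlation between two sensor waveforms. In the sketch below the synthetic burst, sampling rate, and delay are invented for the example; the lag at the correlation peak stands in for the arrival-time difference used, together with the wave speed, to infer the delamination location.

      // Sketch: estimating the arrival-time difference between two AE sensors
      // from the lag that maximizes their cross-correlation. Signals synthetic.
      #include <cmath>
      #include <iostream>
      #include <vector>

      // Cross-correlation over integer lags in [-maxLag, maxLag]; best lag.
      int bestLag(const std::vector<double>& a, const std::vector<double>& b,
                  int maxLag) {
          int best = 0;
          double bestVal = -1e300;
          for (int lag = -maxLag; lag <= maxLag; ++lag) {
              double sum = 0.0;
              for (int i = 0; i < static_cast<int>(a.size()); ++i) {
                  int j = i + lag;
                  if (j >= 0 && j < static_cast<int>(b.size())) sum += a[i] * b[j];
              }
              if (sum > bestVal) { bestVal = sum; best = lag; }
          }
          return best;
      }

      int main() {
          const double kPi = 3.14159265358979323846;
          const double fs = 1.0e6;                 // 1 MHz sampling (assumed)
          const int n = 512, trueDelay = 37;       // delay in samples
          std::vector<double> s1(n, 0.0), s2(n, 0.0);
          for (int i = 0; i < n; ++i) {            // 3-cycle 50 kHz burst
              double t = i / fs;
              double burst =
                  (t < 3.0 / 50.0e3) ? std::sin(2 * kPi * 50.0e3 * t) : 0.0;
              s1[i] = burst;
              if (i + trueDelay < n) s2[i + trueDelay] = burst;
          }
          int lag = bestLag(s1, s2, 100);          // expect 37 samples
          std::cout << "estimated delay of sensor 2 relative to sensor 1: "
                    << lag / fs * 1e6 << " microseconds\n";
          return 0;
      }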

  19. Localization of interictal epileptic spikes with MEG: optimization of an automated beamformer screening method (SAMepi) in a diverse epilepsy population

    PubMed Central

    Scott, Jonathan M.; Robinson, Stephen E.; Holroyd, Tom; Coppola, Richard; Sato, Susumu; Inati, Sara K.

    2016-01-01

    OBJECTIVE To describe and optimize an automated beamforming technique followed by identification of locations with excess kurtosis (g2) for efficient detection and localization of interictal spikes in medically refractory epilepsy patients. METHODS Synthetic Aperture Magnetometry with g2 averaged over a sliding time window (SAMepi) was performed in 7 focal epilepsy patients and 5 healthy volunteers. The effect of varied window lengths on detection of spiking activity was evaluated. RESULTS Sliding window lengths of 0.5–10 seconds performed similarly, with 0.5 and 1 second windows detecting spiking activity in one of the 3 virtual sensor locations with highest kurtosis. These locations were concordant with the region of eventual surgical resection in these 7 patients who remained seizure free at one year. Average g2 values increased with increasing sliding window length in all subjects. In healthy volunteers kurtosis values stabilized in datasets longer than two minutes. CONCLUSIONS SAMepi using g2 averaged over 1 second sliding time windows in datasets of at least 2 minutes duration reliably identified interictal spiking and the presumed seizure focus in these 7 patients. Screening the 5 locations with highest kurtosis values for spiking activity is an efficient and accurate technique for localizing interictal activity using MEG. SIGNIFICANCE SAMepi should be applied using the parameter values and procedure described for optimal detection and localization of interictal spikes. Use of this screening procedure could significantly improve the efficiency of MEG analysis if clinically validated. PMID:27760068
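
    A minimal sketch of the screening statistic follows: excess kurtosis (g2) computed over sliding windows of a virtual-sensor trace, which rises sharply when spiky activity is present. The synthetic trace, window length, and sampling rate are assumptions for the example, not the SAMepi parameters.

      // Sketch: excess kurtosis (g2) of a trace, averaged over sliding
      // windows, as a screen for spiky activity. Synthetic data only.
      #include <cmath>
      #include <cstddef>
      #include <cstdlib>
      #include <iostream>
      #include <vector>

      double excessKurtosis(const std::vector<double>& x,
                            std::size_t beg, std::size_t len) {
          double mean = 0.0;
          for (std::size_t i = beg; i < beg + len; ++i) mean += x[i];
          mean /= len;
          double m2 = 0.0, m4 = 0.0;
          for (std::size_t i = beg; i < beg + len; ++i) {
              double d = x[i] - mean;
              m2 += d * d;
              m4 += d * d * d * d;
          }
          m2 /= len; m4 /= len;
          return m4 / (m2 * m2) - 3.0;             // g2 = 0 for a Gaussian trace
      }

      int main() {
          const std::size_t n = 6000, window = 600;  // e.g., 1 s windows at 600 Hz
          std::vector<double> trace(n);
          for (auto& v : trace)                      // roughly Gaussian background
              v = (std::rand() / static_cast<double>(RAND_MAX) - 0.5) +
                  (std::rand() / static_cast<double>(RAND_MAX) - 0.5) +
                  (std::rand() / static_cast<double>(RAND_MAX) - 0.5);
          trace[3210] += 15.0;                       // one interictal-like spike

          double avgG2 = 0.0;
          std::size_t count = 0;
          for (std::size_t beg = 0; beg + window <= n; beg += window / 2, ++count)
              avgG2 += excessKurtosis(trace, beg, window);
          std::cout << "mean g2 over sliding windows: " << avgG2 / count << "\n";
          return 0;
      }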

  20. Effect of space exposure of some epoxy matrix composites on their thermal expansion and mechanical properties (A0138-8)

    NASA Technical Reports Server (NTRS)

    Elberg, R.

    1984-01-01

    This experiment has three objectives. The first and main objective is to detect a possible variation in the coefficient of thermal expansion of composite samples during a 1-year exposure to the near-Earth orbital environment. A second objective is to detect a possible change in the mechanical integrity of composite products, both simple elements and honeycomb sandwich assemblies. A third objective is to compare the behavior of two epoxy resins commonly used in space structural production. The experimental approach is to passively expose samples of epoxy matrix composite materials to the space environment and to compare preflight and postflight measurements of mechanical properties. The experiment will be located in one of the three FRECOPA (French cooperative payload) boxes in a 12-in.-deep peripheral tray that contains nine other experiments from France. The FRECOPA box will protect the samples from contamination during the launch and reentry phases of the mission. The coefficients of thermal expansion are measured on Earth before and after space exposure.

  1. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    NASA Astrophysics Data System (ADS)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based, or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offers a means to identify specific satellites, to note changes in orientation and operational mode, and to queue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field-of-view , <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field-of-view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
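
    The background-removal idea can be sketched with a single leading principal component obtained by power iteration: with frames registered to the celestial frame, that component is dominated by the static star field, and subtracting its projection leaves the movers. This is a deliberately simplified stand-in (1-D toy frames, one component only), not the WASSS processing chain.

      // Sketch of PCA-style background removal: the leading (uncentered)
      // principal component of a registered frame stack captures the fixed
      // star field; its projection is removed, leaving the moving object.
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      using Frame = std::vector<double>;

      int main() {
          // Five registered toy frames of 8 pixels: a bright fixed star at
          // pixel 2 and a fainter object drifting from pixel 4 to pixel 6.
          std::vector<Frame> frames = {
              {0, 0, 9, 0, 3, 0, 0, 0},
              {0, 0, 9, 0, 0, 3, 0, 0},
              {0, 0, 9, 0, 0, 3, 0, 0},
              {0, 0, 9, 0, 0, 0, 3, 0},
              {0, 0, 9, 0, 0, 0, 3, 0}};
          const std::size_t F = frames.size(), P = frames[0].size();

          // Power iteration for the leading spatial eigenvector of the stack.
          Frame v(P, 1.0 / std::sqrt(static_cast<double>(P)));
          for (int it = 0; it < 100; ++it) {
              Frame next(P, 0.0);
              for (const Frame& f : frames) {
                  double proj = 0.0;
                  for (std::size_t p = 0; p < P; ++p) proj += f[p] * v[p];
                  for (std::size_t p = 0; p < P; ++p) next[p] += proj * f[p];
              }
              double norm = 0.0;
              for (double x : next) norm += x * x;
              norm = std::sqrt(norm);
              for (std::size_t p = 0; p < P; ++p) v[p] = next[p] / norm;
          }

          // Subtract each frame's projection onto the leading component; the
          // residual highlights the object moving relative to the stars.
          std::cout.precision(2);
          for (std::size_t k = 0; k < F; ++k) {
              double proj = 0.0;
              for (std::size_t p = 0; p < P; ++p) proj += frames[k][p] * v[p];
              std::cout << "frame " << k << " residual:";
              for (std::size_t p = 0; p < P; ++p)
                  std::cout << " " << frames[k][p] - proj * v[p];
              std::cout << "\n";
          }
          return 0;
      }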

  2. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams and detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is that it overcomes the effects of occlusions that leave an object partially or fully occluded in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in its direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, which is applicable, for example, in wireless sensor networks for surveillance or navigation.

  3. Role of Computer Aided Diagnosis (CAD) in the detection of pulmonary nodules on 64 row multi detector computed tomography

    PubMed Central

    Prakashini, K; Babu, Satish; Rajgopal, KV; Kokila, K Raja

    2016-01-01

    Aims and Objectives: To determine the overall performance of an existing CAD algorithm with thin-section computed tomography (CT) in the detection of pulmonary nodules and to evaluate detection sensitivity over a varying range of nodule density, size, and location. Materials and Methods: A cross-sectional prospective study was conducted on 20 patients with 322 suspected nodules who underwent diagnostic chest imaging using 64-row multi-detector CT. The examinations were evaluated on reconstructed images of 1.4 mm thickness and 0.7 mm interval. Detection of pulmonary nodules, initially by a radiologist with 2 years of experience (RAD) and later by CAD lung nodule software, was assessed. Then, CAD nodule candidates were accepted or rejected accordingly. Detected nodules were classified based on their size, density, and location. The performance of the RAD and the CAD system was compared with the gold standard, that is, true nodules confirmed by consensus of the senior RAD and CAD together. The overall sensitivity and false-positive (FP) rate of the CAD software were calculated. Observations and Results: Of the 322 suspected nodules, 221 were classified as true nodules by the consensus of the senior RAD and CAD together. Of the true nodules, 206 (93.2%) were detected by the RAD and 202 (91.4%) by the CAD. CAD and RAD together picked up more nodules than either CAD or RAD alone. Overall sensitivity for nodule detection with the CAD program was 91.4%, and FP detection per patient was 5.5%. The CAD showed comparatively higher sensitivity for nodules of size 4–10 mm (93.4%) and nodules in hilar (100%) and central (96.5%) locations when compared to the RAD's performance. Conclusion: CAD performance was high in detecting pulmonary nodules, including small and low-density nodules. CAD, even with a relatively high FP rate, assists and improves the RAD's performance as a second reader, especially for nodules located in the central and hilar regions and for small nodules, while saving the RAD's time. PMID:27578931

  4. Automatic detection of Floating Ice at Antarctic Continental Margin from Remotely Sensed Image with Object-oriented Matching

    NASA Astrophysics Data System (ADS)

    Zhao, Z.

    2011-12-01

    Changes in the ice sheet and the floating ice around it have great significance for global change research. In the context of global warming, rapid changes of the Antarctic continental margin, calving of ice shelves, and movement of icebergs are all closely related to climate change and ocean circulation. Using automatic change detection technology to rapidly locate melting regions of the polar ice sheet and the positions of drifting ice would not only strongly support global change research but also lay the foundation for establishing an early warning mechanism for polar ice melt and ice displacement. This paper proposes an automatic change detection method using object-based segmentation technology. The process includes three parts: ice extraction using image segmentation, object-based ice tracking, and change detection based on similarity matching. An approach based on similarity matching of eigenvectors is proposed, which uses the area, perimeter, Hausdorff distance, contour, shape and other information of each ice object. LANDSAT ETM+ data from different dates, Chinese environment and disaster monitoring satellite HJ-1B data, and MODIS 1B data are used to detect changes of floating ice at the Antarctic continental margin. As a sample, we selected ETM+ data from two dates (January 7, 2003 and January 16, 2003) covering the area around the Antarctic continental margin near Lazarev Bay, extending from 70.27454853 degrees south latitude, longitude 12.38573410 degrees to 71.44474167 degrees south latitude, longitude 10.39252222 degrees, and including 11628 sq km of the Antarctic continental margin. We found that the area of floating ice decreased by 371 km2 and the number of floating ice objects decreased by 402 during this period. In addition, the changes of all floating ice within 1200 km of the Antarctic margin region were detected using MODIS 1B data. From January 1, 2008 to January 7, 2008, the floating ice area decreased by 21644732 km2 and the number of objects decreased by 83080. The results show that the object-based information extraction algorithm can obtain more precise details of a single object, while the change detection method based on similarity matching can effectively track the changes of floating ice.

  5. Effective method for detecting regions of given colors and the features of the region surfaces

    NASA Astrophysics Data System (ADS)

    Gong, Yihong; Zhang, HongJiang

    1994-03-01

    Color can be used as a very important cue for image recognition. In industrial and commercial areas, color is widely used as a trademark or identifying feature in objects, such as packaged goods, advertising signs, etc. In image database systems, one may retrieve an image of interest by specifying prominent colors and their locations in the image (image retrieval by contents). These facts enable us to detect or identify a target object using colors. However, this task depends mainly on how effectively we can identify a color and detect regions of the given color under possibly non-uniform illumination conditions such as shade, highlight, and strong contrast. In this paper, we present an effective method to detect regions matching given colors, along with the features of the region surfaces. We adopt the HVC color coordinates in the method because of their ability to completely separate the luminance and chromatic components of colors. Three basis functions, functionally serving as low-pass, high-pass, and band-pass filters, respectively, are introduced.

  6. Salience from the decision perspective: You know where it is before you know it is there.

    PubMed

    Zehetleitner, Michael; Müller, Hermann J

    2010-12-31

    In visual search for feature contrast ("odd-one-out") singletons, identical manipulations of salience, whether by varying target-distractor similarity or dimensional redundancy of target definition, had smaller effects on reaction times (RTs) for binary localization decisions than for yes/no detection decisions. According to formal models of binary decisions, identical differences in drift rates would yield larger RT differences for slow than for fast decisions. From this principle and the present findings, it follows that decisions on the presence of feature contrast singletons are slower than decisions on their location. This is at variance with two classes of standard models of visual search and object recognition that assume a serial cascade of first detection, then localization and identification of a target object, but also inconsistent with models assuming that as soon as a target is detected all its properties, spatial as well as non-spatial (e.g., its category), are available immediately. As an alternative, we propose a model of detection and localization tasks based on random walk processes, which can account for the present findings.
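
    The inference from drift rates to RT differences can be illustrated with a small random-walk simulation: with the same drift-rate difference, mean RTs differ more when the decision threshold is high (slow decisions) than when it is low (fast decisions). All parameters below are invented for the illustration.

      // Sketch: in a random-walk (drift-diffusion) decision model, the same
      // drift-rate difference yields a larger mean-RT difference for slow
      // (high-threshold) decisions than for fast (low-threshold) ones.
      #include <iostream>
      #include <random>

      double meanRT(double drift, double threshold, int trials, std::mt19937& rng) {
          std::normal_distribution<double> noise(0.0, 1.0);
          const double dt = 1.0;                   // one step = 1 ms (assumed)
          double total = 0.0;
          for (int t = 0; t < trials; ++t) {
              double x = 0.0;
              int steps = 0;
              while (x < threshold && x > -threshold) {  // walk until a boundary
                  x += drift * dt + noise(rng);
                  ++steps;
              }
              total += steps;
          }
          return total / trials;
      }

      int main() {
          std::mt19937 rng(42);
          const int trials = 20000;
          const double highDrift = 0.20, lowDrift = 0.15;  // same drift difference
          for (double threshold : {10.0, 30.0}) {          // fast vs. slow decisions
              double dRT = meanRT(lowDrift, threshold, trials, rng) -
                           meanRT(highDrift, threshold, trials, rng);
              std::cout << "threshold " << threshold
                        << ": RT(low drift) - RT(high drift) = " << dRT << " ms\n";
          }
          return 0;
      }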

  7. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    The technology of vision-based probe-and-drogue autonomous aerial refueling is an amazing task in modern aviation for both manned and unmanned aircraft. A key issue is to determine the relative orientation and position of the drogue and the probe accurately for relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to disorderly motion of drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low rank and sparse decomposition with local multiple features is proposed. The global and local information of drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.

  8. Object-Based Classification and Change Detection of Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Park, J. G.; Harada, I.; Kwak, Y.

    2016-06-01

    Topography and geology are factors to characterize the distribution of natural vegetation. Topographic contour is particularly influential on the living conditions of plants such as soil moisture, sunlight, and windiness. Vegetation associations having similar characteristics are present in locations having similar topographic conditions unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in such conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results found that the object-based classification is more effective to produce a vegetation map than the pixel-based classification.

  9. Real-time edge tracking using a tactile sensor

    NASA Technical Reports Server (NTRS)

    Berger, Alan D.; Volpe, Richard; Khosla, Pradeep K.

    1989-01-01

    Object recognition through the use of input from multiple sensors is an important aspect of an autonomous manipulation system. In tactile object recognition, it is necessary to determine the location and orientation of object edges and surfaces. A controller is proposed that utilizes a tactile sensor in the feedback loop of a manipulator to track along edges. In the control system, the data from the tactile sensor is first processed to find edges. The parameters of these edges are then used to generate a control signal to a hybrid controller. Theory is presented for tactile edge detection and an edge tracking controller. In addition, experimental verification of the edge tracking controller is presented.

  10. Automatic trajectory measurement of large numbers of crowded objects

    NASA Astrophysics Data System (ADS)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare mainly due to the challenges of detection and tracking of large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, cost matrix of assignment between consecutive frames is trainable via a random forest classifier with many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.

  11. Ultralow-dose, feedback imaging with laser-Compton X-ray and laser-Compton gamma ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barty, Christopher P. J.

    Ultralow-dose, x-ray or gamma-ray imaging is based on fast, electronic control of the output of a laser-Compton x-ray or gamma-ray source (LCXS or LCGS). X-ray or gamma-ray shadowgraphs are constructed one (or a few) pixel(s) at a time by monitoring the LCXS or LCGS beam energy required at each pixel of the object to achieve a threshold level of detectability at the detector. In one example, once the threshold for detection is reached, an electronic or optical signal is sent to the LCXS/LCGS that enables a fast optical switch that diverts, either in space or time, the laser pulses used to create Compton photons. In this way, one prevents the object from being exposed to any further Compton x-rays or gamma-rays until either the laser-Compton beam or the object is moved so that a new pixel location may be illuminated.

  12. Micro-orbits in a many-brane model and deviations from Newton's 1/r^2 law

    NASA Astrophysics Data System (ADS)

    Donini, A.; Marimón, S. G.

    2016-12-01

    We consider a five-dimensional model with geometry M = M_4 × S_1, with compactification radius R. The Standard Model particles are localized on a brane located at y=0, with identical branes localized at different points in the extra dimension. Objects located on our brane can orbit around objects located on a brane at a distance d=y/R, with an orbit and a period significantly different from the standard Newtonian ones. We study the kinematical properties of the orbits, finding that it is possible to distinguish one motion from the other in a large region of the initial conditions parameter space. This is a warm-up to studying whether a SM-like mass distribution on one (or more) distant brane(s) may represent a possible dark matter candidate. After applying the same technique to the study of orbits of objects lying on the same brane (d=0), we use this method for the detection of generic deviations from the inverse-square Newton law. We propose a possible experimental setup to look for departures from Newtonian motion in the micro-world, finding that an order of magnitude improvement on present bounds can be attained at the 95% CL under reasonable assumptions.

  13. Eliminating Inhibition of Return by Changing Salient Non-spatial Attributes in a Complex Environment

    PubMed Central

    Hu, Frank K; Samuel, Arthur G.; Chan, Agnes S.

    2010-01-01

    Inhibition of Return (IOR) occurs when a target is preceded by an irrelevant stimulus (cue) at the same location: Target detection is slowed, relative to uncued locations. In the present study, we used relatively complex displays to examine the effect of repetition of nonspatial attributes. For both color and shape, attribute repetition produced a robust inhibitory effect that followed a time course similar to that for location-based IOR. However, the effect only occurred when the target shared both the feature (i.e., color or shape) and location with the cue; this constraint implicates a primary role for location. The data are consistent with the idea that the system integrates consecutive stimuli into a single object file when attributes repeat, hindering detection of the second stimulus. The results are also consistent with an interpretation of IOR as a form of habituation, with greater habituation occurring with increasing featural overlap of a repeated stimulus. Critically, both of these interpretations bring the IOR effect within more general approaches to attention and perception, rather than requiring a specialized process with a limited function. In this view, there is no process specifically designed to inhibit return, suggesting that “IOR” may be the wrong framing of inhibitory repetition effects. Instead, we suggest that repetition of stimulus properties can interfere with the ability to focus attention on the aspects of a complex display that are needed to detect the occurrence of the target stimulus; this is a failure of activation, not an inhibition of processing. PMID:21171801

  14. Three dimensional time reversal optical tomography

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Cai, W.; Alrubaiee, M.; Xu, M.; Gayen, S. K.

    2011-03-01

    Time reversal optical tomography (TROT) approach is used to detect and locate absorptive targets embedded in a highly scattering turbid medium to assess its potential in breast cancer detection. TROT experimental arrangement uses multi-source probing and multi-detector signal acquisition and Multiple-Signal-Classification (MUSIC) algorithm for target location retrieval. Light transport from multiple sources through the intervening medium with embedded targets to the detectors is represented by a response matrix constructed using experimental data. A TR matrix is formed by multiplying the response matrix by its transpose. The eigenvectors with leading non-zero eigenvalues of the TR matrix correspond to embedded objects. The approach was used to: (a) obtain the location and spatial resolution of an absorptive target as a function of its axial position between the source and detector planes; and (b) study variation in spatial resolution of two targets at the same axial position but different lateral positions. The target(s) were glass sphere(s) of diameter ~9 mm filled with ink (absorber) embedded in a 60 mm-thick slab of Intralipid-20% suspension in water with an absorption coefficient μa ~ 0.003 mm-1 and a transport mean free path lt ~ 1 mm at 790 nm, which emulate the average values of those parameters for human breast tissue. The spatial resolution and accuracy of target location depended on axial position, and target contrast relative to the background. Both the targets could be resolved and located even when they were only 4-mm apart. The TROT approach is fast, accurate, and has the potential to be useful in breast cancer detection and localization.

  15. Foreign Object Damage Identification in Turbine Engines

    NASA Technical Reports Server (NTRS)

    Strack, William; Zhang, Desheng; Turso, James; Pavlik, William; Lopez, Isaac

    2005-01-01

    This report summarizes the collective work of a five-person team from different organizations examining the problem of detecting foreign object damage (FOD) events in turbofan engines from gas path thermodynamic and bearing accelerometer sensors, and determining the severity of damage to each component (diagnosis). Several detection and diagnostic approaches were investigated and a software tool (FODID) was developed to assist researchers in detecting and diagnosing FOD events. These approaches include (1) fan efficiency deviation computed from upstream and downstream temperature/pressure measurements, (2) gas path weighted least squares estimation of component health parameter deficiencies, (3) Kalman filter estimation of component health parameters, and (4) use of structural vibration signal processing to detect both large and small FOD events. The last three of these approaches require a significant amount of computation in conjunction with a physics-based analytic model of the underlying phenomenon: the NPSS thermodynamic cycle code for approaches 1 to 3, and the DyRoBeS reduced-order rotor dynamics code for approach 4. A potential application of the FODID software tool, in addition to its detection/diagnosis role, is using its sensitivity results to help identify the best types of sensors and their optimum locations within the gas path, and similarly for bearing accelerometers.

  16. Novel images and novel locations of familiar images as sensitive translational cognitive tests in humans.

    PubMed

    Raber, Jacob

    2015-05-15

    Object recognition is a sensitive cognitive test for detecting effects of genetic and environmental factors on cognition in rodents. There are various versions of object recognition that have been used since the original test was reported by Ennaceur and Delacour in 1988. There are nonhuman primate and human primate versions of object recognition as well, allowing cross-species comparisons. As no language is required for test performance, object recognition is a very valuable test for human research studies in distinct parts of the world, including areas where there might be fewer years of formal education. The main focus of this review is to illustrate how object recognition can be used to assess cognition in humans under normal physiological and neurological conditions. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. Two applications of time reversal mirrors: seismic radio and seismic radar.

    PubMed

    Hanafy, Sherif M; Schuster, Gerard T

    2011-10-01

    Two seismic applications of time reversal mirrors (TRMs) are introduced and tested with field experiments. The first one is sending, receiving, and decoding coded messages, similar to a radio except that seismic waves are used. The second one is, similar to radar surveillance, detecting and tracking one or more moving objects in a remote area, including the determination of an object's speed of movement. Both applications require the prior recording of calibration Green's functions in the area of interest. These reference Green's functions are used as a codebook to decrypt the coded message in the first application and as a moving sensor for the second application. Field tests show that seismic radar can detect the moving coordinates (x(t), y(t), z(t)) of a person running through a calibration site. This information also allows for a calculation of his velocity as a function of location. Results with the seismic radio are successful in seismically detecting and decoding coded pulses produced by a hammer. Both seismic radio and radar are highly robust to signals in high noise environments due to the super-stacking property of TRMs. © 2011 Acoustical Society of America
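
    Both applications reduce to matching a new recording against the previously recorded calibration Green's functions. A minimal sketch of that matching step, using normalized correlation against a small synthetic library, is given below; the traces and the library size are made up.

      // Sketch: locating a source by correlating a new recording against a
      // library of calibration Green's functions and picking the best match.
      #include <cmath>
      #include <cstddef>
      #include <iostream>
      #include <vector>

      double correlation(const std::vector<double>& a, const std::vector<double>& b) {
          double s = 0.0, na = 0.0, nb = 0.0;
          for (std::size_t i = 0; i < a.size(); ++i) {
              s += a[i] * b[i];
              na += a[i] * a[i];
              nb += b[i] * b[i];
          }
          return s / std::sqrt(na * nb + 1e-30);   // normalized correlation
      }

      int main() {
          const int n = 256, nLocations = 5;
          // Calibration library: one reference trace per surveyed grid point
          // (shifted, decaying wavelets stand in for real recordings).
          std::vector<std::vector<double>> greens(nLocations,
                                                  std::vector<double>(n, 0.0));
          for (int k = 0; k < nLocations; ++k)
              for (int i = 0; i < n; ++i) {
                  double t = i - 40 - 15 * k;      // later arrival, farther point
                  greens[k][i] = std::exp(-t * t / 50.0) * std::cos(0.6 * t);
              }

          // New recording: source at calibration point 3, plus a little noise.
          std::vector<double> recorded = greens[3];
          for (int i = 0; i < n; ++i) recorded[i] += 0.05 * std::sin(0.9 * i);

          int best = 0;
          double bestC = -1.0;
          for (int k = 0; k < nLocations; ++k) {
              double c = correlation(recorded, greens[k]);
              if (c > bestC) { bestC = c; best = k; }
          }
          std::cout << "best-matching calibration point: " << best
                    << " (correlation " << bestC << ")\n";
          return 0;
      }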

  18. Gender differences in memory for objects and their locations: a study on automatic versus controlled encoding and retrieval contexts.

    PubMed

    De Goede, Maartje; Postma, Albert

    2008-04-01

    Object-location memory is the only spatial task where female subjects have been shown to outperform males. This result is not consistent across all studies, and may be due to the combination of the multi-component structure of object location memory with the conditions under which different studies were done. Possible gender differences in object location memory and its component object identity memory were assessed in the present study. In order to disentangle these two components, an object location memory task (in which objects had to be relocated in daily environments), and a separate object identity recognition task were carried out. This study also focused on the conditions under which object locations were encoded and retrieved. Only half of the participants were aware of the fact that object locations had to be retrieved later on. Moreover, by applying the 'process dissociation procedure' to the object location memory assessments and the 'remember-know' paradigm to the object identity measure, the amount of explicit (conscious) and implicit (unconscious) retrieval was estimated for each component. In general, females performed better than males on the object location memory task. However, when controlled for object identity memory, females no longer outperformed males, whereas they did not obtain a higher general object identity memory score, nor did they have more explicit or implicit recollection of the object identities. These complicated effects might stem from a difference between males and females, in the way locations or associations between objects and locations are retrieved. In general, participants had more explicit (conscious) recollection than implicit (unconscious) recollection. No effect of encoding context was found, nor any interaction effect of gender, encoding and retrieval context.

  19. Positron emission mammography (PEM): Effect of activity concentration, object size, and object contrast on phantom lesion detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MacDonald, Lawrence R.; Wang, Carolyn L.; Eissa, Marna

    2012-10-15

    Purpose: To characterize the relationship between lesion detection sensitivity and injected activity as a function of lesion size and contrast on the PEM (positron emission mammography) Flex Solo II scanner using phantom experiments. Methods: Phantom lesions (spheres 4, 8, 12, 16, and 20 mm diameter) were randomly located in uniform background. Sphere activity concentrations were 3 to 21 times the background activity concentration (BGc). BGc was a surrogate for injected activity; BGc ranged from 0.44-4.1 kBq/mL, corresponding to 46-400 MBq injections. Seven radiologists read 108 images containing zero, one, or two spheres. Readers used a 5-point confidence scale to score the presence of spheres. Results: Sensitivity was 100% for lesions ≥12 mm under all conditions except for one 12 mm sphere with the lowest contrast and lowest BGc (60% sensitivity). Sensitivity was 100% for 8 mm spheres when either contrast or BGc was high, and 100% for 4 mm spheres only when both contrast and BGc were highest. Sphere contrast recovery coefficients (CRC) were 49%, 34%, 26%, 14%, and 2.8% for the largest to smallest spheres. Cumulative specificity was 98%. Conclusions: Phantom lesion detection sensitivity depends more on sphere size and contrast than on BGc. Detection sensitivity remained ≥90% for injected activities as low as 100 MBq, for lesions ≥8 mm. Low CRC in 4 mm objects results in moderate detection sensitivity even for 400 MBq injected activity, making it impractical to optimize injected activity for such lesions. Low CRC indicates that when lesions <8 mm are observed on PEM images they are highly tracer avid with greater potential of clinical significance. High specificity (98%) suggests that image statistical noise does not lead to false positive findings. These results apply to the 85 mm thick object used to obtain them; lesion detectability should be better (worse) for thinner (thicker) objects based on the reduced (increased) influence of photon attenuation.

  20. Detection of Unknown Crypts under the Floor in the Holy Trinity Church (Dominican Monastery) in Krakow, Poland

    NASA Astrophysics Data System (ADS)

    Strzępowicz, Anna; Łyskowski, Mikołaj; Ziętek, Jerzy; Tomecka-Suchoń, Sylwia

    2018-03-01

    The GPR surveying method is a non-invasive and quick geophysical method, also applied in archaeological prospection. It allows archaeological artefacts buried under historical layers to be detected, including those found within buildings of historical value. Most commonly, just as in this particular case, it is used in churches, where other non-invasive localisation methods cannot be applied. In a majority of cases, surveys yield highly positive results, enabling the site and size of a specific object to be indicated. A good example is provided by the results obtained from the measurements carried out in the Basilica of the Holy Trinity, belonging to the Dominican Monastery in Krakow. They allowed the location of the already existing crypts to be confirmed and so-far unidentified objects to be indicated.

  1. Automatic detection and classification of damage zone(s) for incorporating in digital image correlation technique

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Sudipta; Deb, Debasis

    2016-07-01

    Digital image correlation (DIC) is a technique developed for monitoring the surface deformation/displacement of an object under loading conditions. This method is further refined to make it capable of handling discontinuities on the surface of the sample. A damage zone refers to a surface area that fractures and opens in the course of loading. In this study, an algorithm is presented to automatically detect multiple damage zones in the deformed image. The algorithm identifies the pixels located inside these zones and eliminates them from the FEM-DIC processes. The proposed algorithm is successfully implemented on several damaged samples to estimate the displacement fields of an object under loading conditions. This study shows that the resulting displacement fields represent the damage conditions reasonably well compared to the regular FEM-DIC technique that does not consider the damage zones.

  2. Integrating visual learning within a model-based ATR system

    NASA Astrophysics Data System (ADS)

    Carlotto, Mark; Nebrich, Mark

    2017-05-01

    Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying manmade objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery such as target direction, and to assess the performance of the visual learning process itself.

  3. Coronary artery stenosis detection with holographic display of 3D angiograms

    NASA Astrophysics Data System (ADS)

    Ritman, Erik L.; Schwanke, Todd D.; Simari, Robert D.; Schwartz, Robert S.; Thomas, Paul J.

    1995-05-01

    The objective of this study was to establish the accuracy of an holographic display approach for detection of stenoses in coronary arteries. The rationale for using an holographic display approach is that multiple angles of view of the coronary arteriogram are provided by a single 'x-ray'-like film, backlit by a special light box. This should be more convenient in that the viewer does not have to page back and forth through a cine angiogram to obtain the multiple angles of view. The method used to test this technique involved viewing 100 3D coronary angiograms. These images were generated from the 3D angiographic images of nine normal coronary arterial trees generated with the Dynamic Spatial Reconstructor (DSR) fast CT scanner. Using our image processing programs, the image of the coronary artery lumen was locally 'narrowed' by an amount and length and at a location determined by a random look-up table. Two independent, blinded, experienced angiographers viewed the holographic displays of these angiograms and recorded their confidence about the presence, location, and severity of the stenoses. This procedure evaluates the sensitivity and specificity of the detection of coronary artery stenoses as a function of the severity, size, and location along the arteries.

  4. Two visual systems in monitoring of dynamic traffic: effects of visual disruption.

    PubMed

    Zheng, Xianjun Sam; McConkie, George W

    2010-05-01

    Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, ventral and dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experiment results show that without visual disruption, all changes were detected very well; yet, these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also bigger for the parked vehicle compared to the moving ones. The findings support the different roles for two visual systems in monitoring the dynamic traffic: the "where", dorsal system, tracks vehicle spatiotemporal information on perceptual level, encoding information in a coarse and transient manner; whereas the "what", ventral system, monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  5. Detection, Location and Grasping Objects Using a Stereo Sensor on UAV in Outdoor Environments

    PubMed Central

    Ramon Soria, Pablo; Arrue, Begoña C.; Ollero, Anibal

    2017-01-01

    The article presents a vision system for the autonomous grasping of objects with Unmanned Aerial Vehicles (UAVs) in real time. Giving UAVs the capability to manipulate objects vastly extends their applications, as they are capable of accessing places that are difficult to reach or even unreachable for human beings. This work is focused on the grasping of known objects based on feature models. The system runs in an on-board computer on a UAV equipped with a stereo camera and a robotic arm. The algorithm learns a feature-based model in an offline stage, then it is used online for detection of the targeted object and estimation of its position. This feature-based model was proved to be robust to both occlusions and the presence of outliers. The use of stereo cameras improves the learning stage, providing 3D information and helping to filter features in the online stage. An experimental system was derived using a rotary-wing UAV and a small manipulator for final proof of concept. The robotic arm is designed with three degrees of freedom and is lightweight due to payload limitations of the UAV. The system has been validated with different objects, both indoors and outdoors. PMID:28067851

  6. The first prototype of chromatic pupillometer for objective perimetry in retinal degeneration patients

    NASA Astrophysics Data System (ADS)

    Rotenstreich, Ygal; Chibel, Ron; Haj Yahia, Soad; Achiron, Asaf; Mahajna, Mohamad; Belkin, Michael; Sher, Ifat

    2015-03-01

    We recently demonstrated the feasibility of quantifying pupil responses (PR) to multifocal chromatic light stimuli for objectively assessing the visual field (VF). Here we assessed a second-generation chromatic multifocal pupillometer device with 76 LEDs covering an 18 degree visual field and a smaller spot size (2 mm diameter), aimed at achieving better perimetric resolution. A computerized infrared pupillometer was used to record PR to short- and long-wavelength stimuli (peaks at 485 nm and 640 nm, respectively) presented by 76 LEDs, 1.8 mm spot size, at light intensities of 10-1000 cd/m2 at different points of the 18 degree VF. PR amplitude was measured in 11 retinitis pigmentosa (RP) patients and 20 normal age-matched controls. RP patients demonstrated statistically significantly reduced pupil contraction amplitude in the majority of perimetric locations under testing conditions that emphasized rod contribution (short-wavelength stimuli at 200 cd/m2) in peripheral locations (p<0.05). By contrast, the amplitude of pupillary responses under testing conditions that emphasized cone cell contribution (long-wavelength stimuli at 1000 cd/m2) was not significantly different between the groups in the majority of perimetric locations, particularly in central locations. Minimal pupil contraction was recorded in areas that were not detected by chromatic Goldmann perimetry. This study demonstrates the feasibility of using pupillometer-based chromatic perimetry for objectively assessing VF defects and retinal function in patients with retinal degeneration. This method may be used to distinguish between the damaged cells underlying the VF defect.

  7. Small Business Innovation Research (SBIR) Program. FY 1991 Program Solicitation 91.2

    DTIC Science & Technology

    1991-07-01

    (Excerpted solicitation topics and text) A91-034 Passive Sensor Self-Interference Cancellation; A91-035 High Performance Propelling Charges; A91-036 ... A91-034 TITLE: Passive Sensor Self-Interference Cancellation. CATEGORY: Exploratory Development. OBJECTIVE: Develop practical and effective ... acoustic sensor to detect, classify, identify, and locate targets is degraded by own-platform noise and local interference. ...

  8. Measurement of volatile organic compounds emitted in libraries and archives: an inferential indicator of paper decay?

    PubMed Central

    2012-01-01

    Background A sampling campaign of indoor air was conducted to assess the typical concentration of indoor air pollutants in 8 National Libraries and Archives across the U.K. and Ireland. At each site, two locations were chosen that contained various objects in the collection (paper, parchment, microfilm, photographic material etc.) and one location was chosen to act as a sampling reference location (placed in a corridor or entrance hallway). Results Of the locations surveyed, no measurable levels of sulfur dioxide were detected and low formaldehyde vapour (< 18 μg m-3) was measured throughout. Acetic and formic acids were measured in all locations with, for the most part, higher acetic acid levels in areas with objects compared to reference locations. A large variety of volatile organic compounds (VOCs) was measured in all locations, in variable concentrations, however furfural was the only VOC to be identified consistently at higher concentration in locations with paper-based collections, compared to those locations without objects. To cross-reference the sampling data with VOCs emitted directly from books, further studies were conducted to assess emissions from paper using solid phase microextraction (SPME) fibres and a newly developed method of analysis; collection of VOCs onto a polydimethylsiloxane (PDMS) elastomer strip. Conclusions In this study acetic acid and furfural levels were consistently higher in concentration when measured in locations which contained paper-based items. It is therefore suggested that both acetic acid and furfural (possibly also trimethylbenzenes, ethyltoluene, decane and camphor) may be present in the indoor atmosphere as a result of cellulose degradation and together may act as an inferential non-invasive marker for the deterioration of paper. Direct VOC sampling was successfully achieved using SPME fibres and analytes found in the indoor air were also identified as emissive by-products from paper. Finally a new non-invasive, method of VOC collection using PDMS strips was shown to be an effective, economical and efficient way of examining VOC emissions directly from the pages of a book and confirmed that toluene, furfural, benzaldehyde, ethylhexanol, nonanal and decanal were the most concentrated VOCs emitted directly from paper measured in this study. PMID:22587759

  9. An object-based visual attention model for robotic applications.

    PubMed

    Yu, Yuanlong; Mann, George K I; Gosine, Raymond G

    2010-10-01

    By extending the integrated competition hypothesis, this paper presents an object-based visual attention model, which selects one object of interest using low-dimensional features, so that visual perception starts with a fast attentional selection procedure. The proposed attention model involves seven modules: learning of object representations stored in a long-term memory (LTM), preattentive processing, top-down biasing, bottom-up competition, mediation between top-down and bottom-up ways, generation of saliency maps, and perceptual completion processing. It works in two phases: a learning phase and an attending phase. In the learning phase, the corresponding object representation is trained statistically when one object is attended. A dual-coding object representation consisting of local and global codings is proposed. Intensity, color, and orientation features are used to build the local coding, and a contour feature is employed to constitute the global coding. In the attending phase, the model first preattentively segments the visual field into discrete proto-objects using Gestalt rules. If a task-specific object is given, the model recalls the corresponding representation from LTM and deduces the task-relevant feature(s) to evaluate top-down biases. The mediation between automatic bottom-up competition and conscious top-down biasing is then performed to yield a location-based saliency map. By combining location-based saliency within each proto-object, the proto-object-based saliency is evaluated. The most salient proto-object is selected for attention, and it is finally put into the perceptual completion processing module to yield a complete object region. This model has been applied to distinct robotic tasks: detection of task-specific stationary and moving objects. Experimental results under different conditions are shown to validate this model.
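
    The weighted combination of feature maps and the pooling of location-based saliency into proto-object saliency described above can be sketched as follows. This is only an illustrative outline under assumed inputs (pre-computed, normalized feature maps and a proto-object label image), not the authors' implementation; the function and parameter names are hypothetical.

```python
import numpy as np

def proto_object_saliency(feature_maps, weights, proto_labels):
    """Combine weighted feature maps into a location-based saliency map,
    then pool saliency within each proto-object and return the winner.

    feature_maps : dict of name -> 2-D array (assumed normalized to [0, 1])
    weights      : dict of name -> float (top-down biases; uniform weights = bottom-up)
    proto_labels : 2-D int array, one label per pre-attentively segmented proto-object
    """
    # location-based saliency: weighted sum of the feature conspicuity maps
    location_saliency = sum(weights[name] * fmap for name, fmap in feature_maps.items())

    # proto-object-based saliency: mean location saliency inside each proto-object
    labels = np.unique(proto_labels)
    labels = labels[labels > 0]                       # label 0 is treated as background
    scores = {lab: location_saliency[proto_labels == lab].mean() for lab in labels}

    winner = max(scores, key=scores.get)              # most salient proto-object
    return winner, scores, location_saliency
```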

  10. Airplane detection based on fusion framework by combining saliency model with Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen

    2018-03-01

    Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: (1) how to extract the high-level features of aircraft; (2) locating objects within such a large image is difficult and time consuming; (3) the common problem of multiple resolutions of satellite images still exists. In this paper, inspired by the biological visual mechanism, a fusion detection framework is proposed, which fuses the top-down visual mechanism (a deep CNN model) and the bottom-up visual mechanism (GBVS) to detect aircraft. Besides, we use a multi-scale training method for the deep CNN model to solve the problem of multiple resolutions. Experimental results demonstrate that our method can achieve a better detection result than the other methods.
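
    A minimal sketch of the kind of fusion described, combining a bottom-up saliency map (e.g., from GBVS) with a CNN-derived class-probability map registered to the same grid; the linear weighting, the threshold and all names are assumptions for illustration, not the paper's exact fusion rule.

```python
import numpy as np

def fuse_saliency_and_cnn(saliency, cnn_prob, alpha=0.5, threshold=0.5):
    """Fuse a bottom-up saliency map with a CNN probability map into a binary
    aircraft mask. Both maps are assumed registered to the same image grid;
    `alpha` trades off the two cues (assumed weighting, not from the paper)."""
    saliency = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-9)  # rescale to [0, 1]
    fused = alpha * saliency + (1.0 - alpha) * cnn_prob
    return fused > threshold
```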

  11. Object-location memory in adults with autism spectrum disorder.

    PubMed

    Ring, Melanie; Gaigg, Sebastian B; Bowler, Dermot M

    2015-10-01

    This study tested implicit and explicit spatial relational memory in Autism Spectrum Disorder (ASD). Participants were asked to study pictures of rooms and pictures of daily objects for which locations were highlighted in the rooms. Participants were later tested for their memory of the object locations either by being asked to place objects back into their original locations or into new locations. Proportions of times when participants chose the previously studied locations for the objects, irrespective of the instruction, were used to derive indices of explicit and implicit memory [process-dissociation procedure, Jacoby, 1991, 1998]. In addition, participants performed object and location recognition and source memory tasks where they were asked which locations belonged to the objects and which objects to the locations. The data revealed difficulty for ASD individuals in actively retrieving object locations (explicit memory) but not in subconsciously remembering them (implicit memory). These difficulties cannot be explained by difficulties in memory for objects or locations per se (i.e., the difficulty pertains to object-location relations). Together these observations lend further support to the idea that ASD is characterised by relatively circumscribed difficulties in relational rather than item-specific memory processes and show that these difficulties extend to the domain of spatial information. They also lend further support to the idea that memory difficulties in ASD can be reduced when support is provided at test. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  12. Spatial discrimination deficits as a function of mnemonic interference in aged adults with and without memory impairment.

    PubMed

    Reagh, Zachariah M; Roberts, Jared M; Ly, Maria; DiProspero, Natalie; Murray, Elizabeth; Yassa, Michael A

    2014-03-01

    It is well established that aging is associated with declines in episodic memory. In recent years, an emphasis has emerged on the development of behavioral tasks and the identification of biomarkers that are predictive of cognitive decline in healthy as well as pathological aging. Here, we describe a memory task designed to assess the accuracy of discrimination ability for the locations of objects. Object locations were initially encoded incidentally, and appeared in a single space against a 5 × 7 grid. During retrieval, subjects viewed repeated object-location pairings, displacements of 1, 2, 3, or 4 grid spaces, and maximal corner-to-opposite-corner displacements. Subjects were tasked with judging objects in this second viewing as having retained their original location, or having moved. Performance on a task such as this is thought to rely on the capacity of the individual to perform hippocampus-mediated pattern separation. We report a performance deficit associated with a physically healthy aged group compared to young adults specific to trials with low mnemonic interference. Additionally, for aged adults, performance on the task was correlated with performance on the delayed recall portion of the Rey Auditory Verbal Learning Test (RAVLT), a neuropsychological test sensitive to hippocampal dysfunction. In line with prior work, dividing the aged group into unimpaired and impaired subgroups based on RAVLT Delayed Recall scores yielded clearly distinguishable patterns of performance, with the former subgroup performing comparably to young adults, and the latter subgroup showing generally impaired memory performance even with minimal interference. This study builds on existing tasks used in the field, and contributes a novel paradigm for differentiation of healthy from possible pathological aging, and may thus provide an avenue for early detection of age-related cognitive decline. Copyright © 2013 Wiley Periodicals, Inc.

  13. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real world application an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images such that the output from the algorithm could be compared with the artificially added stabilization errors.

  14. Language and memory for object location.

    PubMed

    Gudde, Harmen B; Coventry, Kenny R; Engelhardt, Paul E

    2016-08-01

    In three experiments, we investigated the influence of two types of language on memory for object location: demonstratives (this, that) and possessives (my, your). Participants first read instructions containing demonstratives/possessives to place objects at different locations, and then had to recall those object locations (following object removal). Experiments 1 and 2 tested contrasting predictions of two possible accounts of language on object location memory: the Expectation Model (Coventry, Griffiths, & Hamilton, 2014) and the congruence account (Bonfiglioli, Finocchiaro, Gesierich, Rositani, & Vescovi, 2009). In Experiment 3, the role of attention allocation as a possible mechanism was investigated. Results across all three experiments show striking effects of language on object location memory, with the pattern of data supporting the Expectation Model. In this model, the expected location cued by language and the actual location are concatenated leading to (mis)memory for object location, consistent with models of predictive coding (Bar, 2009; Friston, 2003). Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. A fast Bayesian approach to discrete object detection in astronomical data sets - PowellSnakes I

    NASA Astrophysics Data System (ADS)

    Carvalho, Pedro; Rocha, Graça; Hobson, M. P.

    2009-03-01

    A new fast Bayesian approach is introduced for the detection of discrete objects immersed in a diffuse background. This new method, called PowellSnakes, speeds up traditional Bayesian techniques by (i) replacing the standard form of the likelihood for the parameters characterizing the discrete objects by an alternative exact form that is much quicker to evaluate; (ii) using a simultaneous multiple minimization code based on Powell's direction set algorithm to locate rapidly the local maxima in the posterior and (iii) deciding whether each located posterior peak corresponds to a real object by performing a Bayesian model selection using an approximate evidence value based on a local Gaussian approximation to the peak. The construction of this Gaussian approximation also provides the covariance matrix of the uncertainties in the derived parameter values for the object in question. This new approach provides a speed up in performance by a factor of 100 as compared to existing Bayesian source extraction methods that use Markov chain Monte Carlo to explore the parameter space, such as that presented by Hobson & McLachlan. The method can be implemented in either real or Fourier space. In the case of objects embedded in a homogeneous random field, working in Fourier space provides a further speed up that takes advantage of the fact that the correlation matrix of the background is circulant. We illustrate the capabilities of the method by applying it to some simplified toy models. Furthermore, PowellSnakes has the advantage of consistently defining the threshold for acceptance/rejection based on priors, which cannot be said of frequentist methods. We present here the first implementation of this technique (version I). Further improvements to this implementation are currently under investigation and will be published shortly. The application of the method to realistic simulated Planck observations will be presented in a forthcoming publication.
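
    The core loop of such a detector (repeated Powell minimizations of the negative log-posterior, followed by a local Gaussian, i.e. Laplace, approximation to the evidence for the accept/reject decision) can be sketched as below. This is a generic illustration of the technique, not the PowellSnakes code; the finite-difference Hessian, the acceptance threshold and all names are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_log_evidence(neg_log_post, theta_peak, h=1e-4):
    """Laplace approximation around a posterior peak:
    log Z ~= -L(theta*) + (d/2) log(2*pi) - 0.5 log det H,
    with H the Hessian of the negative log-posterior at the peak
    (estimated here by central finite differences)."""
    d = len(theta_peak)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (neg_log_post(theta_peak + ei + ej)
                       - neg_log_post(theta_peak + ei - ej)
                       - neg_log_post(theta_peak - ei + ej)
                       + neg_log_post(theta_peak - ei - ej)) / (4.0 * h * h)
    _, logdet = np.linalg.slogdet(H)
    return -neg_log_post(theta_peak) + 0.5 * d * np.log(2.0 * np.pi) - 0.5 * logdet

def detect_objects(neg_log_post, starting_points, log_evidence_threshold=0.0):
    """Powell minimization from several starting points, then Bayesian model
    selection on each located peak via the Laplace-approximated evidence."""
    accepted = []
    for x0 in starting_points:
        res = minimize(neg_log_post, x0, method="Powell")
        logz = laplace_log_evidence(neg_log_post, res.x)
        if logz > log_evidence_threshold:          # accept the peak as a real object
            accepted.append((res.x, logz))
    return accepted
```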

  16. Meteorological and Environmental Inputs to Aviation Systems

    NASA Technical Reports Server (NTRS)

    Camp, Dennis W. (Editor); Frost, Walter (Editor)

    1988-01-01

    Reports on aviation meteorology, most of them informal, are presented by representatives of the National Weather Service, the Bracknell (England) Meteorological Office, the NOAA Wave Propagation Lab., the Fleet Numerical Oceanography Center, and the Aircraft Owners and Pilots Association. Additional presentations are included on aircraft/lidar turbulence comparison, lightning detection and locating systems, objective detection and forecasting of clear air turbulence, comparative verification between the Generalized Exponential Markov (GEM) Model and official aviation terminal forecasts, the evaluation of the Prototype Regional Observation and Forecast System (PROFS) mesoscale weather products, and the FAA/MIT Lincoln Lab. Doppler Weather Radar Program.

  17. Arcsec source location measurements in gamma-ray astronomy from a lunar observatory

    NASA Astrophysics Data System (ADS)

    Koch, D. G.; Hughes, B. E.

    1990-03-01

    The physical processes typically used in the detection of high energy gamma-rays do not permit good angular resolution, which makes difficult the unambiguous association of discrete gamma-ray sources with objects emitting at other wavelengths. This problem can be overcome by placing gamma-ray detectors on the moon and using the horizon as an occulting edge to achieve arcsec resolution. For the purpose of discussion, this concept is examined for gamma rays above about 20 MeV for which pair production dominates the detection process and locally-generated nuclear gamma rays do not contribute to the background.

  18. A cloud shadow detection method combined with cloud height iteration and spectral analysis for Landsat 8 OLI data

    NASA Astrophysics Data System (ADS)

    Sun, Lin; Liu, Xinyan; Yang, Yikun; Chen, TingTing; Wang, Quan; Zhou, Xueying

    2018-04-01

    Although enhanced over prior Landsat instruments, Landsat 8 OLI achieves very high cloud detection precision, but the detection of cloud shadows still faces great challenges. Geometry-based cloud shadow detection methods are considered the most effective and are being improved constantly. The Function of Mask (Fmask) cloud shadow detection method is one of the most representative geometry-based methods and has been used for cloud shadow detection with Landsat 8 OLI. However, the Fmask method estimates cloud height employing fixed temperature rates, which are highly uncertain, and errors in large-area cloud shadow detection can be caused by errors in the estimation of cloud height. This article improves the geometry-based cloud shadow detection method for Landsat OLI in the following two respects. (1) Cloud height no longer depends on the brightness temperature of the thermal infrared band but uses a possible dynamic range from 200 m to 12,000 m. In this case, the cloud shadow is not a specific location but a possible range. Further analysis is carried out within this possible range based on the spectrum to determine the cloud shadow location. This effectively avoids the cloud shadow leakage caused by errors in the determination of cloud height. (2) Object-based and pixel spectral analyses are combined to detect cloud shadows, which realizes cloud shadow detection at both the target scale and the pixel scale. Based on the analysis of the spectral differences between cloud shadows and typical ground objects, the best cloud shadow detection bands of Landsat 8 OLI were determined. The combined use of spectrum and shape can effectively improve the detection precision of cloud shadows produced by thin clouds. Several cloud shadow detection experiments were carried out, and the results were verified against artificial recognition. These experiments indicated that the method can identify cloud shadows in different regions with correct accuracy exceeding 80%; approximately 5% of the areas were wrongly identified, and approximately 10% of the cloud shadow areas were missed. The accuracy of this method is clearly higher than that of Fmask, whose correct accuracy is lower than 60% with missed recognition of approximately 40%.
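
    The geometric part of the search, projecting a cloud pixel to candidate shadow positions over the 200 m to 12,000 m height range before applying the spectral test, might look like the following sketch. Flat terrain, a nadir view and the sign conventions are simplifying assumptions; the pixel size, step and function names are illustrative only.

```python
import numpy as np

def candidate_shadow_offsets(sun_zenith_deg, sun_azimuth_deg,
                             heights_m=np.arange(200.0, 12000.0, 200.0),
                             pixel_size_m=30.0):
    """For each candidate cloud height, compute the (row, col) pixel offset from
    a cloud pixel to its projected shadow on flat terrain.

    The shadow is displaced away from the sun by h * tan(zenith) along the
    solar azimuth; the spectral cloud-shadow test is then restricted to the
    pixels covered by this range of offsets."""
    zen = np.radians(sun_zenith_deg)
    azi = np.radians(sun_azimuth_deg)
    ground_dist = heights_m * np.tan(zen)            # cloud-to-shadow distance on the ground
    d_col = -ground_dist * np.sin(azi) / pixel_size_m  # east-west offset (sign convention assumed)
    d_row = ground_dist * np.cos(azi) / pixel_size_m   # north-south offset (sign convention assumed)
    return np.round(np.column_stack([d_row, d_col])).astype(int)
```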

  19. The fundamentals of average local variance--Part I: Detecting regular patterns.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects because the pixels on the object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the size of pixels increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that, inexplicably, peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis as originally proposed is not adequate. A new hypothesis is proposed: the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and distances in the image produce multiple peaks in the ALV function and that some of these structures are not implicitly recognized as such from our perspective. However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is, thus, more complex than generally reported in the literature.
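
    A compact sketch of the ALV computation as described (the 3 x 3 local standard deviation averaged over the image, repeated on successively pixel-doubled images). The number of levels, edge handling and aggregation by 2 x 2 block averaging are assumptions for illustration.

```python
import numpy as np

def average_local_variance(image, n_levels=6):
    """Return the ALV curve: mean 3x3 local standard deviation, computed on
    successively coarsened (pixel-size-doubled) versions of the image."""
    curve = []
    img = np.asarray(image, dtype=float)
    for _ in range(n_levels):
        # mean of the local standard deviation over all 3x3 windows
        padded = np.pad(img, 1, mode="edge")
        windows = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
        curve.append(windows.std(axis=(-1, -2)).mean())
        # coarsen: double the pixel size by 2x2 block averaging
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return curve  # one ALV value per spatial resolution, plotted against pixel size
```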

  20. Health Monitoring of Composite Material Structures using a Vibrometry Technique

    NASA Technical Reports Server (NTRS)

    Schulz, Mark J.

    1997-01-01

    Large composite material structures such as aircraft and Reusable Launch Vehicles (RLVs) operate in severe environments comprised of vehicle dynamic loads, aerodynamic loads, engine vibration, foreign object impact, lightning strikes, corrosion, and moisture absorption. These structures are susceptible to damage such as delamination, fiber breaking/pullout, matrix cracking, and hygrothermal strain. To ensure human safety and load-bearing integrity, these structures must be inspected to detect and locate often invisible damage and faults before they become catastrophic. Moreover, nearly all future structures will need some type of in-service inspection technique to increase their useful life and reduce maintenance and overall costs. Possible techniques for monitoring the health and indicating damage on composite structures include: c-scan, thermography, acoustic emissions using piezoceramic actuators or fiber-optic wires with gratings, laser ultrasound, shearography, holography, x-ray, and others. These techniques have limitations in detecting damage that is beneath the surface of the structure, far away from a sensor location, or during operation of the vehicle. The objective of this project is to develop a more global method for damage detection that is based on structural dynamics principles, and can inspect for damage when the structure is subjected to vibratory loads to expose faults that may not be evident by static inspection. A Transmittance Function Monitoring (TFM) method is being developed in this project for ground-based inspection and operational health monitoring of large composite structures such as an RLV. A comparison of the features of existing health monitoring approaches and the proposed TFM method is given.

  1. Particle detection, number estimation, and feature measurement in gene transfer studies: optical fractionator stereology integrated with digital image processing and analysis.

    PubMed

    King, Michael A; Scotty, Nicole; Klein, Ronald L; Meyer, Edwin M

    2002-10-01

    Assessing the efficacy of in vivo gene transfer often requires a quantitative determination of the number, size, shape, or histological visualization characteristics of biological objects. The optical fractionator has become a choice stereological method for estimating the number of objects, such as neurons, in a structure, such as a brain subregion. Digital image processing and analytic methods can increase detection sensitivity and quantify structural and/or spectral features located in histological specimens. We describe a hardware and software system that we have developed for conducting the optical fractionator process. A microscope equipped with a video camera and motorized stage and focus controls is interfaced with a desktop computer. The computer contains a combination live video/computer graphics adapter with a video frame grabber and controls the stage, focus, and video via a commercial imaging software package. Specialized macro programs have been constructed with this software to execute command sequences requisite to the optical fractionator method: defining regions of interest, positioning specimens in a systematic uniform random manner, and stepping through known volumes of tissue for interactive object identification (optical dissectors). The system affords the flexibility to work with count regions that exceed the microscope image field size at low magnifications and to adjust the parameters of the fractionator sampling to best match the demands of particular specimens and object types. Digital image processing can be used to facilitate object detection and identification, and objects that meet criteria for counting can be analyzed for a variety of morphometric and optical properties. Copyright 2002 Elsevier Science (USA)
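
    The number estimate underlying the optical fractionator is the standard fractionator formula, N = sum(Q-) x (1/ssf) x (1/asf) x (1/hsf), where ssf, asf and hsf are the section, area and height (disector) sampling fractions. A one-line helper is shown below; the names are illustrative, and this is the textbook formula rather than the authors' macro code.

```python
def optical_fractionator_estimate(counted_objects,
                                  section_sampling_fraction,
                                  area_sampling_fraction,
                                  height_sampling_fraction):
    """Optical fractionator estimate of total object number:
    N = sum(Q-) * (1/ssf) * (1/asf) * (1/hsf)."""
    return (counted_objects
            / section_sampling_fraction
            / area_sampling_fraction
            / height_sampling_fraction)
```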

  2. Response phase mapping of nonlinear joint dynamics using continuous scanning LDV measurement method

    NASA Astrophysics Data System (ADS)

    Di Maio, D.; Bozzo, A.; Peyret, Nicolas

    2016-06-01

    This study presents novel work aimed at locating discrete nonlinearities in mechanical assemblies. The long-term objective is to develop a new metric for detecting and locating nonlinearities using Scanning LDV systems (SLDV). This new metric will help to improve the modal updating, or validation, of mechanical assemblies presenting discrete and sparse nonlinearities. It is well established that SLDV systems can scan vibrating structures with a high density of measurement points and produce highly defined Operational Deflection Shapes (ODSs). This paper will present some insights on how to use response phase mapping for locating nonlinearities of a bolted flange. This type of structure presents two types of nonlinearities, which are geometrical and frictional joints. The interest is focussed on the frictional joints and, therefore, the ability to locate which joints are responsible for nonlinearity is seen as highly valuable for the model validation activities.

  3. Detection of on-surface objects with an underground radiography detector system using cosmic-ray muons

    NASA Astrophysics Data System (ADS)

    Fujii, Hirofumi; Hara, Kazuhiko; Hayashi, Kohei; Kakuno, Hidekazu; Kodama, Hideyo; Nagamine, Kanetada; Sato, Kazuyuki; Sato, Kotaro; Kim, Shin-Hong; Suzuki, Atsuto; Takahashi, Kazuki; Takasaki, Fumihiko

    2017-05-01

    We have developed a compact muon radiography detector to investigate the status of the nuclear debris in the Fukushima Daiichi Reactors. Our previous observation showed that a large portion of the Unit-1 Reactor fuel had fallen to floor level. The detector must be located underground to further investigate the status of the fallen debris. To investigate the performance of muon radiography in such a situation, we observed 2 m cubic iron blocks located on the surface of the ground through different lengths of ground soil. The iron blocks were imaged and their corresponding iron density was derived successfully.

  4. Photonic Paint Developed with Metallic Three-Dimensional Photonic Crystals

    PubMed Central

    Sun, Po; Williams, John D.

    2012-01-01

    This work details the design and simulation of an inconspicuous photonic paint that can be applied onto an object for anticounterfeit and tag, track, and locate (TTL) applications. The paint consists of three-dimensional metallic tilted woodpile photonic crystals embedded into a visible and infrared transparent polymer film, which can be applied to almost any surface. The tilted woodpile photonic crystals are designed with a specific pass band detectable at nearly all incident angles of light. When painted onto a surface, these crystals provide a unique reflective infra-red optical signature that can be easily observed and recorded to verify the location or contents of a package.

  5. Shallow water imaging sonar system for environmental surveying. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-05-01

    The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use high frequency wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. The specific technical objective of this research was to develop and test a prototype system that is capable of (1) scanning at high speeds (up to 10 m/s), even in shallow water (depths to ten meters), without motion blurring or loss of resolution; (2) producing images of the bottom structure that are detailed enough for unambiguous detection of objects as small as 15 cm, even if they are buried up to 30 cm deep in silt or sand. The critical technology uses a linear FM (LFM) or similar complex waveform, which has a high bandwidth for good range resolution combined with a long pulse length for similar Doppler resolution. The long duration signal deposits more energy on target than a narrower pulse, which increases the signal-to-noise ratio and signal-to-clutter ratio. This in turn allows the use of cheap, lightweight, low power, piezoelectric transducers in the 30--500 kHz range.
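
    The pulse-compression idea behind the LFM waveform (a long chirp transmitted, then a matched filter on receive so that range resolution is set by bandwidth rather than pulse length) can be illustrated with a short sketch. The sample rate, sweep limits and function names here are arbitrary illustrative choices, not the prototype's parameters.

```python
import numpy as np

def lfm_chirp(f0, f1, duration, fs):
    """Linear FM (LFM) chirp sweeping f0 -> f1 Hz over `duration` seconds
    at sample rate fs (Hz)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    k = (f1 - f0) / duration                      # sweep rate, Hz/s
    return np.cos(2.0 * np.pi * (f0 * t + 0.5 * k * t * t))

def pulse_compress(echo, replica):
    """Matched-filter (pulse-compression) output: correlate the received echo
    with the transmitted replica; correlation peaks mark target ranges."""
    return np.correlate(echo, replica, mode="full")
```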

  6. A Discovery of a Candidate Companion to a Transiting System KOI-94: A Direct Imaging Study for a Possibility of a False Positive

    NASA Technical Reports Server (NTRS)

    Takahashi, Yasuhiro; Narita, Norio; Hirano, Teruyuki; Kuzuhara, Masayuki; Tamura, Motohide; Kudo, Tomoyuki; Kusakabe, Nobuhiko; Hashimoto, Jun; Sato, Bun'ei; Abe, Lyu; et al.

    2013-01-01

    We report the discovery of a companion candidate around one of the Kepler Objects of Interest (KOIs), KOI-94, and the results of our quantitative investigation of the possibility that the planetary candidates around KOI-94 are false positives. KOI-94 has a planetary system in which four planetary detections have been reported by Kepler, making this system intriguing for studying the dynamical evolution of planets. However, while two of those detections (KOI-94.01 and 03) have been made robust by previous observations, the others (KOI-94.02 and 04) are marginal detections, for which future confirmations with various techniques are required. We have conducted high-contrast direct imaging observations with Subaru/HiCIAO in the H band and detected a faint object located at a separation of approximately 0.6 arcsec from KOI-94. The object has a contrast of approximately 1 × 10^-3 in the H band and corresponds to an M type star on the assumption that the object is at the same distance as KOI-94. Based on our analysis, KOI-94.02 is likely to be a real planet because of its transit depth, while KOI-94.04 can be a false positive due to the companion candidate. The success in detecting the companion candidate suggests that high-contrast direct imaging observations are important keys to examining false positives of KOIs. On the other hand, our transit light curve reanalyses lead to a better period estimate of KOI-94.04 than that in the KOI catalogue and show that the planetary candidate has the same limb darkening parameter value as the other planetary candidates in the KOI-94 system, suggesting that KOI-94.04 is also a real planet in the system.

  7. Damage assessment in composite laminates via broadband Lamb wave.

    PubMed

    Gao, Fei; Zeng, Liang; Lin, Jing; Shao, Yongsheng

    2018-05-01

    Time of flight (ToF) based methods for damage detection using Lamb waves are widely used. However, due to the energy dissipation of Lamb waves and the non-negligible size of damage in composite structures, the performance of damage detection is restricted. The objective of this research is to establish an improved method to locate and assess damage in composite structures. To choose appropriate excitation parameters, the propagation characteristics of Lamb waves in quasi-isotropic composite laminates are first studied and a broadband excitation is designed. Subsequently, the pulse compression technique is adopted for energy concentration and high-accuracy distance estimation. On this basis, the gravity center of the intersections of path loci is employed for damage localization, and the convex envelope of the identified damage edge points is taken for damage contour estimation. As a result, both damage location and size can be evaluated, thereby providing the information for quantitative damage detection. An experiment comprising five different sizes of damage was carried out for method verification, and the identified results show the efficiency of the proposed method. Copyright © 2018 Elsevier B.V. All rights reserved.
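
    A rough sketch of the signal-processing steps named above: pulse compression for ToF estimation, conversion of each ToF to a path-length (ellipse) locus per actuator-sensor pair, and a gravity-center estimate over the locus intersections. The names and the peak-picking rule are assumptions, and the computation of the ellipse intersections themselves is omitted.

```python
import numpy as np

def time_of_flight(received, excitation, fs):
    """Pulse compression: cross-correlate the received signal with the broadband
    excitation and take the strongest lag as the scatter time of flight (s)."""
    xcorr = np.correlate(received, excitation, mode="full")
    lag = np.argmax(np.abs(xcorr)) - (len(excitation) - 1)
    return lag / fs

def damage_path_length(tof, group_velocity):
    """v * ToF gives the total actuator-damage-sensor path length, i.e. an
    ellipse locus with the actuator and sensor positions as foci."""
    return group_velocity * tof

def gravity_center(locus_intersections):
    """Damage location estimate: gravity center of the path-locus intersections."""
    return np.asarray(locus_intersections, dtype=float).mean(axis=0)
```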

  8. Multiple cueing dissociates location- and feature-based repetition effects

    PubMed Central

    Hu, Kesong; Zhan, Junya; Li, Bingzhao; He, Shuchang; Samuel, Arthur G.

    2014-01-01

    There is an extensive literature on the phenomenon of inhibition of return (IOR): When attention is drawn to a peripheral location and then removed, response time is delayed if a target appears in the previously inspected location. Recent research suggests that non-spatial attribute repetition (i.e., if a target shares a feature like color with the earlier, cueing, stimulus) can have a similar inhibitory effect, at least when the target appears in the previously cued location. What remains unknown is whether location- and feature-based inhibitory effects can be dissociated. In the present study, we used a multiple cueing approach to investigate the properties of location- and feature-based repetition effects. In two experiments (detection, and discrimination), location-based IOR was absent but feature-based inhibition was consistently observed. Thus, the present results indicate that feature- and location-based inhibitory effects are dissociable. The results also provide support for the view that the attentional consequences of multiple cues reflect the overall center of gravity of the cues. We suggest that the repetition costs associated with feature and location repetition may be best understood as a consequence of the pattern of activation for object files associated with the stimuli present in the displays. PMID:24907677

  9. Tracking target objects orbiting earth using satellite-based telescopes

    DOEpatents

    De Vries, Willem H; Olivier, Scot S; Pertica, Alexander J

    2014-10-14

    A system for tracking objects that are in earth orbit via a constellation or network of satellites having imaging devices is provided. An object tracking system includes a ground controller and, for each satellite in the constellation, an onboard controller. The ground controller receives ephemeris information for a target object and directs that ephemeris information be transmitted to the satellites. Each onboard controller receives ephemeris information for a target object, collects images of the target object based on the expected location of the target object at an expected time, identifies actual locations of the target object from the collected images, and identifies a next expected location at a next expected time based on the identified actual locations of the target object. The onboard controller processes the collected image to identify the actual location of the target object and transmits the actual location information to the ground controller.
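
    One simple way to realize the "next expected location from identified actual locations" step is a low-order polynomial extrapolation of the recent image-derived positions in time. The patent record does not specify the predictor, so this sketch is only an illustration under that assumption; all names are hypothetical.

```python
import numpy as np

def next_expected_location(times, locations, next_time):
    """Predict the next expected location of the target object by fitting the
    recent actual locations (identified from collected images) with a low-order
    polynomial in time, per coordinate, and evaluating at the next expected time.

    times     : sequence of observation times
    locations : array of shape (n, 3), e.g. Cartesian coordinates per observation
    """
    times = np.asarray(times, dtype=float)
    locations = np.asarray(locations, dtype=float)
    degree = min(2, len(times) - 1)                  # fall back to linear/constant with few points
    coeffs = [np.polyfit(times, locations[:, k], deg=degree)
              for k in range(locations.shape[1])]
    return np.array([np.polyval(c, next_time) for c in coeffs])
```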

  10. Remote sensing and spatial analysis based study for detecting deforestation and the associated drivers

    NASA Astrophysics Data System (ADS)

    El-Abbas, Mustafa M.; Csaplovics, Elmar; Deafalla, Taisser H.

    2013-10-01

    Nowadays, remote-sensing technologies are becoming increasingly interlinked with the issue of deforestation. They offer a systematized and objective strategy to document, understand and simulate the deforestation process and its associated causes. In this context, the main goal of this study, conducted in the Blue Nile region of Sudan, where most of the natural habitats have been dramatically destroyed, was to develop spatial methodologies to assess the deforestation dynamics and the associated factors. To achieve this, optical multispectral satellite scenes (i.e., ASTER and LANDSAT) integrated with field surveys, in addition to multiple data sources, were used for the analyses. Spatiotemporal Object Based Image Analysis (STOBIA) was applied to assess the change dynamics within the period of study. Broadly, the above-mentioned analyses include object-based (OB) classifications, post-classification change detection, data fusion, information extraction and spatial analysis. Hierarchical multi-scale segmentation thresholds were applied and each class was delimited with semantic meanings by a set of rules associated with membership functions. Consequently, the fused multi-temporal data were introduced to create detailed objects of change classes from the input LU/LC classes. The dynamic changes were quantified and spatially located, and the spatial and contextual relations with adjacent areas were analyzed. The main finding of the present study is that the forest areas drastically decreased, while the conversion of forest into agricultural fields and grassland, driven by the agrarian structure, was the main force of deforestation. In contrast, the capability of the area to recover was clearly observed. The study concludes with a brief assessment of an 'oriented' framework, focused on the alarming areas where serious dynamics are located and where urgent plans and interventions are most critical, guided by potential solutions based on the identified driving forces.

  11. Detection and Localization of Subsurface Two-Dimensional Metallic Objects

    NASA Astrophysics Data System (ADS)

    Meschino, S.; Pajewski, L.; Schettini, G.

    2009-04-01

    Non-invasive identification of buried objects in the near-field of a receiver array is a subject of great interest, due to its application to remote sensing of the earth's subsurface, to the detection of landmines, pipes and conduits, to archaeological site characterization, and more. In this work, we present a Sub-Array Processing (SAP) approach for the detection and localization of subsurface perfectly-conducting circular cylinders. We consider a plane wave illuminating the region of interest, which is assumed to be a homogeneous, lossless medium of unknown permittivity containing one or more targets. In a first step, we partition the receiver array so that the field scattered from the targets turns out to be locally plane at each sub-array. Then, we apply a Direction of Arrival (DOA) technique to obtain a set of angles for each locally plane wave, and triangulate these directions, obtaining a collection of crossings crowding in the expected object locations [1]. We compare several DOA algorithms such as traditional Bartlett and Capon beamforming, Pisarenko Harmonic Decomposition (PHD), the Minimum-Norm method, Multiple Signal Classification (MUSIC) and Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) [2]. In a second stage, we develop a statistical Poisson-based model to manage the crossing pattern in order to extract the probable target centre position. In particular, if the crossings are Poisson distributed, it is possible to characterize two different distribution parameters [3]. These two parameters describe two density rates for the crossings, so we can first divide the crossing pattern into a certain number of equal-size windows, collect the windows with low rate parameters (which are probably background windows) and remove them. In this way we can consider only the high-rate-parameter windows (which most probably locate the target) and extract the centre position of the object. We also consider some other localization-connected aspects, for example how to obtain a likely estimate of the soil permittivity and of the cylinder radius. Finally, when multiple objects are present, we refine our localization procedure by performing a clustering analysis of the crossing pattern. In particular, we apply the K-means algorithm to extract the coordinates of the object centroids and the cluster extensions. A sketch of these two steps is given after this abstract. References [1] Şahin A., Miller L., "Object Detection Using High Resolution Near-Field Array Processing", IEEE Trans. on Geoscience and Remote Sensing, vol. 39, no. 1, Jan. 2001, pp. 136-141. [2] Gross F.B., "Smart Antennas for Wireless Communications", McGraw-Hill, 2005. [3] Hoaglin D.C., "A Poissonness Plot", The American Statistician, vol. 34, no. 3, August 1980, pp. 146-149.

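    The triangulation of sub-array DOA estimates into a crossing pattern, and the final K-means clustering of the crossings into object centres, can be sketched as below. The 2-D geometry, the use of scikit-learn's KMeans and all names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans

def bearing_crossings(subarray_positions, doa_angles_deg):
    """Intersect the bearing lines from each sub-array (position + DOA estimate).
    Each line is p + t * [cos(a), sin(a)]; the pairwise intersections form the
    crossing pattern that crowds around the buried-object locations."""
    lines = list(zip(np.asarray(subarray_positions, dtype=float),
                     np.radians(doa_angles_deg)))
    crossings = []
    for (p1, a1), (p2, a2) in combinations(lines, 2):
        d1 = np.array([np.cos(a1), np.sin(a1)])
        d2 = np.array([np.cos(a2), np.sin(a2)])
        A = np.column_stack([d1, -d2])
        if abs(np.linalg.det(A)) < 1e-9:            # (nearly) parallel bearings: no crossing
            continue
        t, _ = np.linalg.solve(A, p2 - p1)
        crossings.append(p1 + t * d1)
    return np.array(crossings)

def estimate_object_centres(crossings, n_objects):
    """Cluster the (background-filtered) crossings with K-means; the cluster
    centroids estimate the object centre positions."""
    km = KMeans(n_clusters=n_objects, n_init=10).fit(crossings)
    return km.cluster_centers_
```
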
  12. Differential effects of spaced vs. massed training in long-term object-identity and object-location recognition memory.

    PubMed

    Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor

    2013-08-01

    Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Design and development of electrical impedance tomography system with 32 electrodes and microcontroller

    NASA Astrophysics Data System (ADS)

    Ansory, Achmad; Prajitno, Prawito; Wijaya, Sastra Kusuma

    2018-02-01

    Electrical Impedance Tomography (EIT) is an imaging method that is able to estimate the electrical impedance distribution inside an object. This EIT system was developed using 32 electrodes and a microcontroller-based module. From a pair of electrodes, a sinusoidal current of 3 mA is injected and the voltage differences between other pairs of electrodes are measured. The voltage measurement data are then sent to MATLAB and the EIDORS software, where they are used to reconstruct a two-dimensional image. The system can detect and determine the position of a phantom in the tank. The object's position is accurately reconstructed and determined with an average shift of 0.69 cm, but the object's area cannot be accurately reconstructed. The object's image is more accurately reconstructed when the object is located near the electrodes, has a larger size, and when the current injected into the system has a frequency of 100 kHz or 200 kHz.

  14. SoundView: an auditory guidance system based on environment understanding for the visually impaired people.

    PubMed

    Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao

    2009-01-01

    Without visual information, blind people face various hardships with shopping, reading, finding objects, and so on. Therefore, we developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from a camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals for the blind user through an earphone. The user is able to recognize the type, motion state and location of the objects of interest with the help of SoundView. Compared with other visual assistant techniques, SoundView is object-oriented and has the advantages of low cost, smaller size, light weight, low power consumption and easy customization.

  15. Object locating system

    DOEpatents

    Novak, J.L.; Petterson, B.

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions. 12 figs.

  16. Detection of thoracic vascular structures by electrical impedance tomography: a systematic assessment of prominence peak analysis of impedance changes.

    PubMed

    Wodack, K H; Buehler, S; Nishimoto, S A; Graessler, M F; Behem, C R; Waldmann, A D; Mueller, B; Böhm, S H; Kaniusas, E; Thürk, F; Maerz, A; Trepte, C J C; Reuter, D A

    2018-02-28

    Electrical impedance tomography (EIT) is a non-invasive and radiation-free bedside monitoring technology, primarily used to monitor lung function. First experimental data show that the descending aorta can be detected at different thoracic heights and might allow the assessment of central hemodynamics, i.e. stroke volume and pulse transit time. First, the feasibility of localizing small non-conductive objects within a saline phantom model was evaluated. Second, this result was utilized for the detection of the aorta by EIT in ten anesthetized pigs with comparison to thoracic computed tomography (CT). Two EIT belts were placed at different thoracic positions and a bolus of hypertonic saline (10 ml, 20%) was administered into the ascending aorta while EIT data were recorded. EIT images were reconstructed using the GREIT model, based on the individual's thoracic contours. The resulting EIT images were analyzed pixel by pixel to identify the aortic pixel, in which the bolus caused the highest transient impedance peak in time. In the phantom, small objects could be located at each position with a maximal deviation of 0.71 cm. In vivo, no significant differences between the aorta position measured by EIT and the anatomical aorta location were obtained for both measurement planes when the search was restricted to the dorsal thoracic region of interest (ROI). It is possible to detect the descending aorta at different thoracic levels by EIT using an intra-aortic bolus of hypertonic saline. No significant differences in the position of the descending aorta on EIT images compared to CT images were obtained for either EIT belt.
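
    The pixel-by-pixel peak analysis can be sketched as follows: for each pixel in the dorsal ROI, find the most prominent peak of the reconstructed impedance-change time series around the bolus and keep the pixel with the largest prominence. This is an illustrative reconstruction of the described analysis, not the authors' code; the sign of the bolus-induced change depends on the reconstruction convention, and the names are hypothetical.

```python
import numpy as np
from scipy.signal import find_peaks

def locate_aorta_pixel(eit_frames, dorsal_roi_mask):
    """Peak-prominence analysis of reconstructed impedance-change time series.

    eit_frames      : array (time, rows, cols) of reconstructed impedance changes
    dorsal_roi_mask : boolean (rows, cols) mask restricting the search to the
                      dorsal region of interest
    """
    best_pixel, best_prominence = None, 0.0
    for r, c in zip(*np.where(dorsal_roi_mask)):
        series = eit_frames[:, r, c]         # flip the sign here if the bolus lowers the reconstructed values
        peaks, props = find_peaks(series, prominence=0.0)
        if peaks.size and props["prominences"].max() > best_prominence:
            best_prominence = props["prominences"].max()
            best_pixel = (r, c)
    return best_pixel, best_prominence
```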

  17. Identification of simple objects in image sequences

    NASA Astrophysics Data System (ADS)

    Geiselmann, Christoph; Hahn, Michael

    1994-08-01

    We present an investigation into the identification and location of simple objects in color image sequences. As an example, the identification of traffic signs is discussed. Three aspects are of special interest. First, regions have to be detected which may contain the object. The separation of those regions from the background can be based on color, motion, and contours. In the experiments all three possibilities are investigated. The second aspect focuses on the extraction of suitable features for the identification of the objects. For that purpose the border line of the region of interest is used. For planar objects a sufficient approximation of perspective projection is affine mapping. In consequence, it is natural to extract affine-invariant features from the border line. The investigation includes invariant features based on Fourier descriptors and moments. Finally, the object is identified by maximum likelihood classification. In the experiments all three basic object types are correctly identified. The probabilities of misclassification have been found to be below 1%.
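
    For the border-line features, a common starting point is Fourier descriptors of the complex boundary, normalized for translation, scale, rotation and starting point. The paper uses affine-invariant variants; the sketch below shows only the simpler similarity-invariant normalization under assumed inputs, with hypothetical names.

```python
import numpy as np

def fourier_descriptors(boundary_xy, n_coeffs=16):
    """Fourier descriptors of a closed boundary given as an (n, 2) array of
    (x, y) points, normalized for translation, scale, rotation and starting point."""
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]   # complex boundary representation
    Z = np.fft.fft(z)
    Z[0] = 0.0                                       # drop the DC term -> translation invariance
    mags = np.abs(Z)                                 # magnitudes discard rotation and start point
    mags = mags / (mags[1] + 1e-12)                  # divide by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]
```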

  18. Spatially Resolved Sensitivity of Single-Particle Plasmon Sensors

    PubMed Central

    2018-01-01

    The high sensitivity of localized surface plasmon resonance sensors to the local refractive index allows for the detection of single-molecule binding events. Though binding events of single objects can be detected by their induced plasmon shift, the broad distribution of observed shifts remains poorly understood. Here, we perform a single-particle study wherein single nanospheres bind to a gold nanorod, and relate the observed plasmon shift to the binding location using correlative microscopy. To achieve this we combine atomic force microscopy to determine the binding location, and single-particle spectroscopy to determine the corresponding plasmon shift. As expected, we find a larger plasmon shift for nanospheres binding at the tip of a rod compared to its sides, in good agreement with numerical calculations. However, we also find a broad distribution of shifts even for spheres that were bound at a similar location to the nanorod. Our correlative approach allows us to disentangle effects of nanoparticle dimensions and binding location, and by comparison to numerical calculations we find that the biggest contributor to this observed spread is the dispersion in nanosphere diameter. These experiments provide insight into the spatial sensitivity and signal-heterogeneity of single-particle plasmon sensors and provides a framework for signal interpretation in sensing applications. PMID:29520315

  19. Data Fusion for a Vision-Radiological System for Source Tracking and Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enqvist, Andreas; Koppal, Sanjeev

    2015-07-01

    A multidisciplinary approach to allow the tracking of the movement of radioactive sources by fusing data from multiple radiological and visual sensors is under development. The goal is to improve the ability to detect, locate, track and identify nuclear/radiological threats. The key concept is that such widely available visual and depth sensors can impact radiological detection, since the intensity fall-off in the count rate can be correlated to movement in three dimensions. To enable this, we pose an important question: what is the right combination of sensing modalities and vision algorithms that can best complement a radiological sensor, for the purpose of detection and tracking of radioactive material? Similarly, what are the best radiation detection methods and unfolding algorithms suited for data fusion with tracking data? Data fusion of multi-sensor data for radiation detection has seen some interesting developments lately. Significant examples include intelligent radiation sensor systems (IRSS), which are based on larger numbers of distributed similar or identical radiation sensors coupled with position data to form a network capable of detecting and locating radiation sources. Other developments are gamma-ray imaging systems based on Compton scatter in segmented detector arrays. Similar developments using coded apertures or scatter cameras for neutrons have recently occurred. The main limitation of such systems is not so much in their capability but rather in their complexity and cost, which is prohibitive for large scale deployment. Presented here is a fusion system based on simple, low-cost computer vision and radiological sensors for tracking of multiple objects and identifying potential radiological materials being transported or shipped. The main focus of this work is the development of two separate calibration algorithms for characterizing the fused sensor system. The deviation from a simple inverse-square fall-off of radiation intensity is explored and accounted for. In particular, the computer vision system enables a map of the distance-dependence of the sources being tracked. Infrared, laser or stereoscopic vision sensors are all options for computer-vision implementation depending on interior vs exterior deployment, resolution desired and other factors. Similarly, the radiation sensors will be focused on gamma-ray or neutron detection due to the long travel length and ability to penetrate even moderate shielding. There is a significant difference between the vision sensors and radiation sensors in the way the 'source' or signals are generated. A vision sensor needs an external light source to illuminate the object and then detects the re-emitted illumination (or lack thereof). However, for a radiation detector, the radioactive material is the source itself. The only exception to this is the field of active interrogation, where radiation is beamed into a material to induce new/additional radiation emission beyond what the material would emit spontaneously. The fact that the nuclear material is the source itself means that all other objects in the environment are 'illuminated' or irradiated by the source. Most radiation will readily penetrate regular material, scatter in new directions or be absorbed. Thus, if a radiation source is located near a larger object, that object will in turn scatter some radiation that was initially emitted in a direction other than the direction of the radiation detector, and this can add to the observed count rate.
The effect of this scatter is a deviation from the traditional distance dependence of the radiation signal, and this is a key challenge that requires a combined system calibration solution and algorithms. Thus both an algebraic approach and a statistical approach have been developed and independently evaluated to investigate the sensitivity to this deviation from the simplified radiation fall-off as a function of distance. The resulting calibrated system algorithms are used and demonstrated in various laboratory scenarios, and later in realistic tracking scenarios. The selection and testing of radiological and computer-vision sensors for the additional specific scenarios will be the subject of ongoing and future work. (authors)
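
    One way to set up such a calibration is to fit the fused distance/count-rate pairs with an inverse-square term plus empirical scatter and background terms. The 1/r scatter term, the starting values and all names below are assumptions for illustration, not the algebraic or statistical algorithm developed in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def count_rate_model(r, source_strength, scatter, background):
    """Count rate vs. source-detector distance r: inverse-square term plus an
    empirical room-scatter term and a constant background (assumed model form)."""
    return source_strength / r**2 + scatter / r + background

def calibrate(distances_m, count_rates):
    """Fit the distance-dependence model to distance / count-rate pairs, where
    the distances come from the computer-vision tracking of the source."""
    distances_m = np.asarray(distances_m, dtype=float)
    count_rates = np.asarray(count_rates, dtype=float)
    p0 = [count_rates[0] * distances_m[0]**2, 0.0, count_rates.min()]  # rough initial guess
    params, _ = curve_fit(count_rate_model, distances_m, count_rates, p0=p0)
    return params   # (source_strength, scatter, background)
```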

  20. A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection

    NASA Technical Reports Server (NTRS)

    Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald

    2003-01-01

    Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives and its accomplishments.

  1. Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images

    PubMed Central

    Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro; Aoki, Hiroshi; Takeuchi, Ken; Suzuki, Yasuo

    2017-01-01

    Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis. Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve cases similar to the diagnostic target image from past cases in which diagnosed images with various symptoms of the colonic mucosa are stored. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy. PMID:28255295

  2. Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images.

    PubMed

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro; Aoki, Hiroshi; Takeuchi, Ken; Suzuki, Yasuo

    2017-01-01

    Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis. Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve cases similar to the diagnostic target image from past cases in which diagnosed images with various symptoms of the colonic mucosa are stored. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy.

  3. Implementation of an algorithm for cylindrical object identification using range data

    NASA Technical Reports Server (NTRS)

    Bozeman, Sylvia T.; Martin, Benjamin J.

    1989-01-01

    One of the problems in 3-D object identification and localization is addressed. In robotic and navigation applications the vision system must be able to distinguish cylindrical or spherical objects as well as those of other geometric shapes. An algorithm was developed to identify cylindrical objects in an image when range data is used. The algorithm incorporates the Hough transform for line detection using edge points which emerge from a Sobel mask. Slices of the data are examined to locate arcs of circles using the normal equations of an over-determined linear system. Current efforts are devoted to testing the computer implementation of the algorithm. Refinements are expected to continue in order to accommodate cylinders in various positions. A technique is sought which is robust in the presence of noise and partial occlusions.
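
    As a concrete illustration of the arc-fitting step mentioned above, the normal equations of the over-determined system x^2 + y^2 + Dx + Ey + F = 0 can be solved in a least-squares sense to recover a circle from a slice of range data. The sketch below (NumPy, synthetic data, illustrative names) is only an interpretation of that step, not the authors' implementation.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle fit to 2-D points by solving the normal equations
    of the over-determined system x^2 + y^2 + D*x + E*y + F = 0.
    Returns (center_x, center_y, radius)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    # Solve A @ [D, E, F] = b in the least-squares sense
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Example: a noisy arc sampled from a horizontal slice of a cylinder
theta = np.linspace(0.3, 1.2, 40)
arc = np.column_stack([5 + 2 * np.cos(theta), 1 + 2 * np.sin(theta)])
arc += np.random.normal(scale=0.01, size=arc.shape)
print(fit_circle(arc))   # should be close to (5, 1, 2)
```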

  4. Wireless LAN security management with location detection capability in hospitals.

    PubMed

    Tanaka, K; Atarashi, H; Yamaguchi, I; Watanabe, H; Yamamoto, R; Ohe, K

    2012-01-01

    In medical institutions, unauthorized access points and terminals obstruct the stable operation of a large-scale wireless local area network (LAN) system. By establishing a real-time monitoring method to detect such unauthorized wireless devices, we can improve the efficiency of security management. We detected unauthorized wireless devices by using a centralized wireless LAN system and a location detection system at 370 access points at the University of Tokyo Hospital. By storing the detected radio signal strength and location information in a database, we evaluated the risk level from the detection history. We also evaluated the location detection performance in our hospital ward using Wi-Fi tags. The detection results confirmed the presence of radio signals originating outside the hospital as well as signals emitted by portable game machines with wireless communication capability. The location detection performance showed an error margin of approximately 4 m in detection accuracy and approximately 5% in false detections. Therefore, it was effective to treat the radio signal strength both as an index of the likelihood of the detected location and as an index of the level of risk. We determined the location of wireless devices with high accuracy by filtering the detection results on the basis of radio signal strength and detection history. Results of this study showed that it would be effective to use the developed location database containing radio signal strength and detection history for security management of wireless LAN systems and for more general-purpose location detection applications.
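
    The filtering idea described above (classify a detected device from its radio signal strength and its detection history) can be sketched roughly as follows. The thresholds, field names, and risk labels are illustrative assumptions, not values or categories from the study.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Detection:
    device_mac: str
    access_point: str
    rssi_dbm: float     # radio signal strength seen by the detecting AP
    timestamp: float

def assess_device(history, rssi_threshold=-70.0, min_hits=5):
    """Toy risk assessment in the spirit of the paper: a device seen repeatedly
    with strong signal is flagged as high risk and localized to the access point
    that saw it most strongly; weak or sporadic detections are treated as likely
    external interference.  Thresholds are illustrative only."""
    strong = [d for d in history if d.rssi_dbm >= rssi_threshold]
    if len(strong) >= min_hits:
        best_ap = max({d.access_point for d in strong},
                      key=lambda ap: mean(d.rssi_dbm for d in strong
                                          if d.access_point == ap))
        return "high risk", best_ap
    return "low risk (probably external or transient)", None
```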

  5. Object locating system

    DOEpatents

    Novak, James L.; Petterson, Ben

    1998-06-09

    A sensing system locates an object by sensing the object's effect on electric fields. The object's effect on the mutual capacitance of electrode pairs varies according to the distance between the object and the electrodes. A single electrode pair can sense the distance from the object to the electrodes. Multiple electrode pairs can more precisely locate the object in one or more dimensions.

  6. Attention modulates perception of visual space

    PubMed Central

    Zhou, Liu; Deng, Chenglong; Ooi, Teng Leng; He, Zijiang J.

    2017-01-01

    Attention readily facilitates the detection and discrimination of objects, but it is not known whether it helps to form the vast volume of visual space that contains the objects and where actions are implemented. Conventional wisdom suggests not, given the effortless ease with which we perceive three-dimensional (3D) scenes on opening our eyes. Here, we show evidence to the contrary. In Experiment 1, the observer judged the location of a briefly presented target, placed either on the textured ground or ceiling surface. Judged location was more accurate for a target on the ground, provided that the ground was visible and that the observer directed attention to the lower visual field, not the upper field. This reveals that attention facilitates space perception with reference to the ground. Experiment 2 showed that judged location of a target in mid-air, with both ground and ceiling surfaces present, was more accurate when the observer directed their attention to the lower visual field; this indicates that the attention effect extends to visual space above the ground. These findings underscore the role of attention in anchoring visual orientation in space, which is arguably a primal event that enhances one’s ability to interact with objects and surface layouts within the visual space. The fact that the effect of attention was contingent on the ground being visible suggests that our terrestrial visual system is best served by its ecological niche. PMID:29177198

  7. Detecting continuity violations in infancy: a new account and new evidence from covering and tube events.

    PubMed

    Wang, Su-hua; Baillargeon, Renée; Paterson, Sarah

    2005-03-01

    Recent research on infants' responses to occlusion and containment events indicates that, although some violations of the continuity principle are detected at an early age [e.g., Aguiar, A., & Baillargeon, R. (1999). 2.5-month-old infants' reasoning about when objects should and should not be occluded. Cognitive Psychology 39, 116-157; Hespos, S. J., & Baillargeon, R. (2001). Knowledge about containment events in very young infants. Cognition 78, 207-245; Luo, Y., & Baillargeon, R. (in press). When the ordinary seems unexpected: Evidence for rule-based reasoning in young infants. Cognition; Wilcox, T., Nadel, L., & Rosser, R. (1996). Location memory in healthy preterm and full-term infants. Infant Behavior & Development 19, 309-323], others are not detected until much later [e.g., Baillargeon, R., & DeVos, J. (1991). Object permanence in young infants: Further evidence. Child Development 62, 1227-1246; Hespos, S. J., & Baillargeon, R. (2001). Infants' knowledge about occlusion and containment events: A surprising discrepancy. Psychological Science 12, 140-147; Luo, Y., & Baillargeon, R. (2004). Infants' reasoning about events involving transparent occluders and containers. Manuscript in preparation; Wilcox, T. (1999). Object individuation: Infants' use of shape, size, pattern, and color. Cognition 72, 125-166]. The present research focused on events involving covers or tubes, and brought to light additional examples of early and late successes in infants' ability to detect continuity violations. In Experiment 1, 2.5- to 3-month-old infants were surprised (1) when a cover was lowered over an object, slid to the right, and lifted to reveal no object; and (2) when a cover was lowered over an object, slid behind the left half of a screen, lifted above the screen, moved to the right, lowered behind the right half of the screen, slid past the screen, and finally lifted to reveal the object. In Experiments 2 and 3, 9- and 11-month-old infants were not surprised when a short cover was lowered over a tall object until it became fully hidden; only 12-month-old infants detected this violation. Finally, in Experiment 4, 9-, 12-, and 13-month-old infants were not surprised when a tall object was lowered inside a short tube until it became fully hidden; only 14-month-old infants detected this violation. A new account of infants' physical reasoning attempts to make sense of all of these results. New research directions suggested by the account are also discussed.

  8. Objective assessment of the aesthetic outcomes of breast cancer treatment: toward automatic localization of fiducial points on digital photographs

    NASA Astrophysics Data System (ADS)

    Udpa, Nitin; Sampat, Mehul P.; Kim, Min Soon; Reece, Gregory P.; Markey, Mia K.

    2007-03-01

    The contemporary goals of breast cancer treatment are not limited to cure but include maximizing quality of life. All breast cancer treatment can adversely affect breast appearance. Developing objective, quantifiable methods to assess breast appearance is important to understand the impact of deformity on patient quality of life, guide selection of current treatments, and make rational treatment advances. A few measures of aesthetic properties such as symmetry have been developed. They are computed from the distances between manually identified fiducial points on digital photographs. However, this is time-consuming and subject to intra- and inter-observer variability. The purpose of this study is to investigate methods for automatic localization of fiducial points on anterior-posterior digital photographs taken to document the outcomes of breast reconstruction. Particular emphasis is placed on automatic localization of the nipple complex since the most widely used aesthetic measure, the Breast Retraction Assessment, quantifies the symmetry of nipple locations. The nipple complexes are automatically localized using normalized cross-correlation with a template bank of variants of Gaussian and Laplacian of Gaussian filters. A probability map of likely nipple locations determined from the image database is used to reduce the number of false positive detections from the matched filter operation. The accuracy of the nipple detection was evaluated relative to markings made by three human observers. The impact of using the fiducial point locations as identified by the automatic method, as opposed to the manual method, on the calculation of the Breast Retraction Assessment was also evaluated.
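
    A rough sketch of the matched-filter step described above: normalized cross-correlation against a small Laplacian-of-Gaussian template bank, with the response weighted by a prior probability map of likely nipple locations. It assumes scikit-image's match_template; kernel sizes, sigmas, and function names are illustrative, not the study's parameters.

```python
import numpy as np
from skimage.feature import match_template

def log_kernel(size=21, sigma=4.0):
    """Laplacian-of-Gaussian template (one member of the template bank)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = np.exp(-r2 / (2 * sigma**2))
    return (r2 - 2 * sigma**2) / sigma**4 * g

def localize_fiducial(image, prior_map, sigmas=(3.0, 4.0, 5.0)):
    """Normalized cross-correlation against a small LoG template bank, weighted
    by a prior probability map of likely fiducial locations (same shape as the
    image).  Returns the (row, col) of the best response."""
    best = None
    for s in sigmas:
        ncc = match_template(image, log_kernel(sigma=s), pad_input=True)
        score = ncc * prior_map          # suppress unlikely locations
        peak = np.unravel_index(np.argmax(score), score.shape)
        if best is None or score[peak] > best[0]:
            best = (score[peak], peak)
    return best[1]
```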

  9. Hybrid optimization and Bayesian inference techniques for a non-smooth radiation detection problem

    DOE PAGES

    Stefanescu, Razvan; Schmidt, Kathleen; Hite, Jason; ...

    2016-12-12

    In this paper, we propose several algorithms to recover the location and intensity of a radiation source located in a simulated 250 × 180 m block of an urban center based on synthetic measurements. Radioactive decay and detection are Poisson random processes, so we employ likelihood functions based on this distribution. Owing to the domain geometry and the proposed response model, the negative logarithm of the likelihood is only piecewise continuously differentiable, and it has multiple local minima. To address these difficulties, we investigate three hybrid algorithms composed of mixed optimization techniques. For global optimization, we consider simulated annealing, particle swarm, and genetic algorithms, which rely solely on objective function evaluations; that is, they do not evaluate the gradient of the objective function. By employing early stopping criteria for the global optimization methods, a pseudo-optimum point is obtained. This is subsequently utilized as the initial value by the deterministic implicit filtering method, which is able to find local extrema of non-smooth functions, to finish the search in a narrow domain. These new hybrid techniques, combining global optimization and implicit filtering, address difficulties associated with the non-smooth response, and their performance is shown to significantly decrease the computational time compared with the global optimization methods alone. To quantify uncertainties associated with the source location and intensity, we employ the delayed rejection adaptive Metropolis and DiffeRential Evolution Adaptive Metropolis algorithms. Finally, marginal densities of the source properties are obtained, and the means of the chains compare accurately with the estimates produced by the hybrid algorithms.
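
    The two-stage idea (a gradient-free global search with an early-stopping budget, followed by a derivative-free local refinement of a Poisson negative log-likelihood) might look roughly like the sketch below. It uses SciPy's dual_annealing for the global stage and Nelder-Mead as a stand-in for implicit filtering; the detector layout, counts, and 1/r^2 response model are invented for illustration and are much simpler than the paper's non-smooth response.

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

# Hypothetical detector positions (m) and observed counts; the study used a
# simulated 250 x 180 m urban domain with a piecewise-smooth response model.
detectors = np.array([[50.0, 40.0], [120.0, 90.0], [200.0, 150.0]])
counts = np.array([310, 95, 40])
background = 20.0

def expected_counts(params):
    """Toy 1/r^2 response; the real model also handles building shielding,
    which is what makes the likelihood non-smooth."""
    x, y, intensity = params
    r2 = np.sum((detectors - [x, y])**2, axis=1) + 1.0
    return background + intensity / r2

def neg_log_likelihood(params):
    lam = expected_counts(params)
    # Poisson negative log-likelihood (dropping the constant log k! term)
    return np.sum(lam - counts * np.log(lam))

bounds = [(0, 250), (0, 180), (1e2, 1e7)]
# Stage 1: global, gradient-free search with an early-stopping budget
coarse = dual_annealing(neg_log_likelihood, bounds, maxiter=50)
# Stage 2: derivative-free local polish (the paper uses implicit filtering;
# Nelder-Mead stands in for it here)
refined = minimize(neg_log_likelihood, coarse.x, method="Nelder-Mead")
print(refined.x)   # estimated (x, y, intensity)
```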

  10. Design and Deployment of a Pediatric Cardiac Arrest Surveillance System

    PubMed Central

    Newton, Heather Marie; McNamara, Leann; Engorn, Branden Michael; Jones, Kareen; Bernier, Meghan; Dodge, Pamela; Salamone, Cheryl; Bhalala, Utpal; Jeffers, Justin M.; Engineer, Lilly; Diener-West, Marie; Hunt, Elizabeth Anne

    2018-01-01

    Objective We aimed to increase detection of pediatric cardiopulmonary resuscitation (CPR) events and collection of physiologic and performance data for use in quality improvement (QI) efforts. Materials and Methods We developed a workflow-driven surveillance system that leveraged organizational information technology systems to trigger CPR detection and analysis processes. We characterized detection by notification source, type, location, and year, and compared it to previous methods of detection. Results From 1/1/2013 through 12/31/2015, there were 2,986 unique notifications associated with 2,145 events, 317 requiring CPR. PICU and PEDS-ED accounted for 65% of CPR events, whereas floor care areas were responsible for only 3% of events. 100% of PEDS-OR and >70% of PICU CPR events would not have been included in QI efforts. Performance data from both the defibrillator and the bedside monitor increased annually (2013: 1%; 2014: 18%; 2015: 27%). Discussion After deployment of this system, detection has increased ∼9-fold and performance data collection increased annually. Had the system not been deployed, 100% of PEDS-OR and 50–70% of PICU, NICU, and PEDS-ED events would have been missed. Conclusion By leveraging hospital information technology and medical device data, identification of pediatric cardiac arrests is possible, with an associated increase in the proportion of objective performance data captured. PMID:29854451

  11. Deep Neural Network for Structural Prediction and Lane Detection in Traffic Scene.

    PubMed

    Li, Jun; Mei, Xue; Prokhorov, Danil; Tao, Dacheng

    2017-03-01

    Hierarchical neural networks have been shown to be effective in learning representative image features and recognizing object classes. However, most existing networks combine the low/middle level cues for classification without accounting for any spatial structures. For applications such as understanding a scene, how the visual cues are spatially distributed in an image becomes essential for successful analysis. This paper extends the framework of deep neural networks by accounting for the structural cues in the visual signals. In particular, two kinds of neural networks have been proposed. First, we develop a multitask deep convolutional network, which simultaneously detects the presence of the target and the geometric attributes (location and orientation) of the target with respect to the region of interest. Second, a recurrent neuron layer is adopted for structured visual detection. The recurrent neurons can deal with the spatial distribution of visible cues belonging to an object whose shape or structure is difficult to explicitly define. Both the networks are demonstrated by the practical task of detecting lane boundaries in traffic scenes. The multitask convolutional neural network provides auxiliary geometric information to help the subsequent modeling of the given lane structures. The recurrent neural network automatically detects lane boundaries, including those areas containing no marks, without any explicit prior knowledge or secondary modeling.
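
    A minimal sketch of the multitask arrangement described above: a shared convolutional trunk with one head for target presence and one head for the geometric attributes (location and orientation) of the target within the region of interest. Layer sizes, losses, and names are illustrative PyTorch choices, not the architecture reported in the paper.

```python
import torch
import torch.nn as nn

class MultiTaskLaneNet(nn.Module):
    """Shared convolutional trunk, a presence head, and a geometry head."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
        )
        self.presence = nn.Linear(32 * 8 * 8, 1)   # is a lane mark present?
        self.geometry = nn.Linear(32 * 8 * 8, 2)   # e.g. offset and orientation

    def forward(self, x):
        feats = self.trunk(x)
        return torch.sigmoid(self.presence(feats)), self.geometry(feats)

# Joint loss: classification for presence plus regression for geometry
model = MultiTaskLaneNet()
pred_p, pred_g = model(torch.randn(4, 3, 64, 64))
loss = nn.functional.binary_cross_entropy(pred_p, torch.ones(4, 1)) \
     + nn.functional.mse_loss(pred_g, torch.zeros(4, 2))
```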

  12. Planck 2015 results. XXVI. The Second Planck Catalogue of Compact Sources

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Argüeso, F.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaner, E.; Beichman, C.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Böhringer, H.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Boulanger, F.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Carvalho, P.; Catalano, A.; Challinor, A.; Chamballu, A.; Chary, R.-R.; Chiang, H. C.; Christensen, P. R.; Clemens, M.; Clements, D. L.; Colombi, S.; Colombo, L. P. L.; Combet, C.; Couchot, F.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Falgarone, E.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Helou, G.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; León-Tavares, J.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Negrello, M.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Noviello, F.; Novikov, D.; Novikov, I.; Oxborrow, C. A.; Paci, F.; Pagano, L.; Pajot, F.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reach, W. T.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rowan-Robinson, M.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Sanghera, H. S.; Santos, D.; Savelainen, M.; Savini, G.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sudiwala, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tornikoski, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Walter, B.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission and supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. The improved data-processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  13. Associative Symmetry versus Independent Associations in the Memory for Object-Location Associations

    ERIC Educational Resources Information Center

    Sommer, Tobias; Rose, Michael; Buchel, Christian

    2007-01-01

    The formation of associations between objects and locations is a vital aspect of episodic memory. More specifically, remembering the location where one experienced an object and, vice versa, the object one encountered at a specific location are both important elements for the memory of an event. Whether episodic associations are holistic…

  14. On the Detectability of Planet X with LSST

    NASA Astrophysics Data System (ADS)

    Trilling, David E.; Bellm, Eric C.; Malhotra, Renu

    2018-06-01

    Two planetary mass objects in the far outer solar system—collectively referred to here as Planet X— have recently been hypothesized to explain the orbital distribution of distant Kuiper Belt Objects. Neither planet is thought to be exceptionally faint, but the sky locations of these putative planets are poorly constrained. Therefore, a wide area survey is needed to detect these possible planets. The Large Synoptic Survey Telescope (LSST) will carry out an unbiased, large area (around 18000 deg2), deep (limiting magnitude of individual frames of 24.5) survey (the “wide-fast-deep (WFD)” survey) of the southern sky beginning in 2022, and it will therefore be an important tool in searching for these hypothesized planets. Here, we explore the effectiveness of LSST as a search platform for these possible planets. Assuming the current baseline cadence (which includes the WFD survey plus additional coverage), we estimate that LSST will confidently detect or rule out the existence of Planet X in 61% of the entire sky. At orbital distances up to ∼75 au, Planet X could simply be found in the normal nightly moving object processing; at larger distances, it will require custom data processing. We also discuss the implications of a nondetection of Planet X in LSST data.

  15. Modeling global scene factors in attention

    NASA Astrophysics Data System (ADS)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. © 2003 Optical Society of America.

  16. Searching for Variables in one of the WHAT Fields

    NASA Astrophysics Data System (ADS)

    Shporer, A.; Mazeh, T.; Moran, A.; Bakos, G.; Kovacs, G.

    2007-07-01

    We present preliminary results on a single field observed by WHAT, a small-aperture, short focal length automated telescope with an 8.2° × 8.2° field of view, located at the Wise Observatory. The system is similar to the members of HATNet (http://cfa-www.harvard.edu/~gbakos/HAT/) and is aimed at searching for transiting extrasolar planets and variable objects. With 5 min integration time, the telescope achieved a precision of a few mmag for the brightest objects. We detect variables with amplitudes less than 0.01 mag. All 152 periodic variables are presented at http://wise-obs.tau.ac.il/~amit/236/.

  17. Contact sensing from force measurements

    NASA Technical Reports Server (NTRS)

    Bicchi, Antonio; Salisbury, J. K.; Brock, David L.

    1993-01-01

    This article addresses contact sensing (i.e., the problem of resolving the location of a contact, the force at the interface, and the moment about the contact normals). Called 'intrinsic' contact sensing because it uses internal force and torque measurements, this method allows for practical devices that provide simple, relevant contact information in robotic applications. Such sensors have been used in conjunction with robot hands to identify objects, determine surface friction, detect slip, augment grasp stability, measure object mass, probe surfaces, control collisions, and perform a variety of other useful tasks. This article describes the theoretical basis for their operation and provides a framework for future device design.

  18. Role of Vision and Mechanoreception in Bed Bug, Cimex lectularius L. Behavior

    PubMed Central

    Singh, Narinderpal; Wang, Changlu; Cooper, Richard

    2015-01-01

    The role of olfactory cues such as carbon dioxide, pheromones, and kairomones in bed bug, Cimex lectularius L. behavior has been demonstrated. However, the role of vision and mechanoreception in bed bug behavior is poorly understood. We investigated bed bug vision by determining their responses to different colors, vertical objects, and their ability to detect colors and vertical objects under low and complete dark conditions. Results show black and red paper harborages are preferred compared to yellow, green, blue, and white harborages. A bed bug trapping device with a black or red exterior surface was significantly more attractive to bed bugs than that with a white exterior surface. Bed bugs exhibited strong orientation behavior toward vertical objects. The height (15 vs. 30 cm tall) and color (brown vs. black) of the vertical object had no significant effect on orientation behavior of bed bugs. Bed bugs could differentiate color and detect vertical objects at very low background light conditions, but not in complete darkness. Bed bug preference to different substrate textures (mechanoreception) was also explored. Bed bugs preferred dyed tape compared to painted tape, textured painted plastic, and felt. These results revealed that substrate color, presence of vertical objects, and substrate texture affect host-seeking and harborage-searching behavior of bed bugs. Bed bugs may use a combination of vision, mechanoreception, and chemoreception to locate hosts and seek harborages. PMID:25748041

  19. Role of vision and mechanoreception in bed bug, Cimex lectularius L. behavior.

    PubMed

    Singh, Narinderpal; Wang, Changlu; Cooper, Richard

    2015-01-01

    The role of olfactory cues such as carbon dioxide, pheromones, and kairomones in bed bug, Cimex lectularius L. behavior has been demonstrated. However, the role of vision and mechanoreception in bed bug behavior is poorly understood. We investigated bed bug vision by determining their responses to different colors, vertical objects, and their ability to detect colors and vertical objects under low and complete dark conditions. Results show black and red paper harborages are preferred compared to yellow, green, blue, and white harborages. A bed bug trapping device with a black or red exterior surface was significantly more attractive to bed bugs than that with a white exterior surface. Bed bugs exhibited strong orientation behavior toward vertical objects. The height (15 vs. 30 cm tall) and color (brown vs. black) of the vertical object had no significant effect on orientation behavior of bed bugs. Bed bugs could differentiate color and detect vertical objects at very low background light conditions, but not in complete darkness. Bed bug preference to different substrate textures (mechanoreception) was also explored. Bed bugs preferred dyed tape compared to painted tape, textured painted plastic, and felt. These results revealed that substrate color, presence of vertical objects, and substrate texture affect host-seeking and harborage-searching behavior of bed bugs. Bed bugs may use a combination of vision, mechanoreception, and chemoreception to locate hosts and seek harborages.

  20. How to Decide? Multi-Objective Early-Warning Monitoring Networks for Water Suppliers

    NASA Astrophysics Data System (ADS)

    Bode, Felix; Loschko, Matthias; Nowak, Wolfgang

    2015-04-01

    Groundwater is a resource for drinking water and hence needs to be protected from contaminations. However, many well catchments include an inventory of known and unknown risk sources, which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts could help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. The objectives we consider are to maximize the probability of detecting all contaminations, to maximize the early-warning time before detected contaminations reach the drinking-water well, and to minimize the installation and operating costs of the monitoring network. Using multi-objective optimization, we avoid having to weight these objectives into a single objective function. These objectives clearly compete, and their mutual trade-offs cannot be known beforehand: each catchment differs in many respects, and knowledge can hardly be transferred between geological formations and risk inventories. To make our optimization results more specific to the type of risk inventory in different catchments, we prioritize all known risk sources. Because the required data are lacking, quantitative risk ranking is impossible; instead, we use a qualitative risk ranking to prioritize the known risk sources for monitoring. Additionally, we allow for the existence of unknown risk sources that are totally uncertain in location and in their inherent risk. Therefore, they can neither be located nor ranked. Instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four categories: severe, medium, and tolerable for known risk sources, plus an extra category for the unknown ones. With that, early-warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks valid for controlling the top risk sources, and evaluate the capabilities (or search for least-cost upgrades) to also cover moderate, tolerable, and unknown risk sources; a sketch of how such networks can be compared is given after this abstract. Monitoring networks that are valid for the remaining risk also cover all other risk sources, but only with a relatively poor early-warning time. The data provided to the optimization algorithm are calculated in a preprocessing step by a flow and transport model. It simulates which potential contaminant plumes from the risk sources would be detectable, where and when, by all possible candidate positions for monitoring wells. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations. These include uncertainty in the ambient flow direction of the groundwater, uncertainty of the conductivity field, and different scenarios for the pumping rates of the production wells. To avoid numerical dispersion, we use particle-tracking random-walk methods when simulating transport.
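
    Because the objectives (detection probability, early-warning time, cost) are kept separate rather than weighted into one function, candidate monitoring networks can be compared by Pareto dominance. A toy sketch of that comparison follows; the example designs and their objective values are invented for illustration.

```python
def dominates(a, b):
    """a and b are tuples (detection_prob, warning_time, -cost); larger is
    better for every objective.  a dominates b if it is no worse in all
    objectives and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated monitoring-network designs.
    `candidates` maps a design name to its objective tuple."""
    return {name: obj for name, obj in candidates.items()
            if not any(dominates(other, obj) for other in candidates.values())}

# Illustrative designs: (detection probability, early-warning days, -cost)
designs = {
    "3 wells, upstream":   (0.70, 120, -3),
    "5 wells, ring":       (0.85, 90, -5),
    "8 wells, dense":      (0.95, 60, -8),
    "2 wells, cheap":      (0.40, 40, -2),
    "4 wells, poor siting": (0.60, 80, -4),   # dominated by "3 wells, upstream"
}
print(pareto_front(designs))
```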

  1. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation

    PubMed Central

    Gonzalez, Luis F.; Montes, Glen A.; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J.

    2016-01-01

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification. PMID:26784196

  2. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    PubMed

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath the cuttlefish (within a circle whose diameter was one body length, BL); at 0 BL (directly beside, but not beneath, the cuttlefish); at 1 BL; and at 2 BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for choice of camouflage. © 2015 Marine Biological Laboratory.

  3. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence Revolutionizing Wildlife Monitoring and Conservation.

    PubMed

    Gonzalez, Luis F; Montes, Glen A; Puig, Eduard; Johnson, Sandra; Mengersen, Kerrie; Gaston, Kevin J

    2016-01-14

    Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAV), artificial intelligence and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline to perform object detection, classification and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground based and test flight footage, and is found to be able to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.

  4. Homodyne impulse radar hidden object locator

    DOEpatents

    McEwan, T.E.

    1996-04-30

    An electromagnetic detector is designed to locate an object hidden behind a separator or a cavity within a solid object. The detector includes a PRF generator for generating 2 MHz pulses and a homodyne oscillator for generating a 2 kHz square wave and for modulating the pulses from the PRF generator. A transmit antenna transmits the modulated pulses through the separator, and a receive antenna receives the signals reflected off the object. The receiver path of the detector includes a sample and hold circuit, an AC coupled amplifier which filters out DC bias level shifts in the sample and hold circuit, and a rectifier circuit connected to the homodyne oscillator and to the AC coupled amplifier, for synchronously rectifying the modulated pulses transmitted over the transmit antenna. The homodyne oscillator modulates the signal from the PRF generator with a continuous wave (CW) signal, and the AC coupled amplifier operates with a passband centered on that CW signal. The present detector can be used in several applications, including the detection of metallic and non-metallic objects, such as pipes, studs, joists, nails, rebars, conduits and electrical wiring, behind wood walls, ceilings, plywood, particle board, dense hardwood, masonry, and cement structures. The detector is portable, lightweight, simple to use, and inexpensive, and its low power emission facilitates compliance with Part 15 of the FCC rules. 15 figs.

  5. Homodyne impulse radar hidden object locator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    An electromagnetic detector is designed to locate an object hidden behind a separator or a cavity within a solid object. The detector includes a PRF generator for generating 2 MHz pulses and a homodyne oscillator for generating a 2 kHz square wave and for modulating the pulses from the PRF generator. A transmit antenna transmits the modulated pulses through the separator, and a receive antenna receives the signals reflected off the object. The receiver path of the detector includes a sample and hold circuit, an AC coupled amplifier which filters out DC bias level shifts in the sample and hold circuit, and a rectifier circuit connected to the homodyne oscillator and to the AC coupled amplifier, for synchronously rectifying the modulated pulses transmitted over the transmit antenna. The homodyne oscillator modulates the signal from the PRF generator with a continuous wave (CW) signal, and the AC coupled amplifier operates with a passband centered on that CW signal. The present detector can be used in several applications, including the detection of metallic and non-metallic objects, such as pipes, studs, joists, nails, rebars, conduits and electrical wiring, behind wood walls, ceilings, plywood, particle board, dense hardwood, masonry, and cement structures. The detector is portable, lightweight, simple to use, and inexpensive, and its low power emission facilitates compliance with Part 15 of the FCC rules.

  6. New Hypervelocity Terminal Intercept Guidance Systems for Deflecting/Disrupting Hazardous Asteroids

    NASA Astrophysics Data System (ADS)

    Lyzhoft, Joshua Richard

    Computational modeling and simulations of visual and infrared (IR) sensors are investigated for a new hypervelocity terminal guidance system for intercepting small asteroids (50 to 150 meters in diameter). Computational software tools are developed for signal-to-noise ratio estimation of visual and IR sensors, estimation of minimum and maximum ranges of target detection, and GPU (Graphics Processing Unit)-accelerated simulations of the IR-based terminal intercept guidance systems. Scaled polyhedron models of known objects, such as the Rosetta mission's Comet 67P/C-G, NASA's OSIRIS-REx target Bennu, and asteroid 433 Eros, are utilized in developing a GPU-based simulation tool for the IR-based terminal intercept guidance systems. A parallelized ray-tracing algorithm for simulating realistic surface-to-surface shadowing of irregular-shaped asteroids or comets is developed. Polyhedron solid-angle approximation is also considered. Using these computational models, digital image processing is investigated to determine single or multiple impact locations to assess the technical feasibility of new planetary defense mission concepts utilizing a Hypervelocity Asteroid Intercept Vehicle (HAIV) or a Multiple Kinetic-energy Interceptor Vehicle (MKIV). Study results indicate that the IR-based guidance system outperforms the visual-based system in asteroid detection and tracking. When using an IR sensor, predicting impact locations from filtered images resulted in less jittery spacecraft control accelerations than conducting missions with a visual sensor. Infrared sensors also have the potential to detect asteroids at greater distances and, if properly used, can aid terminal-phase guidance in determining proper impact locations for the MKIV system. The emerging topics of Minimum Orbit Intersection Distance (MOID) estimation and the Full Two-Body Problem (F2BP) formulation are also investigated to assess potential near-Earth object collision risk and the proximity gravity effects of an irregular-shaped binary-asteroid target on a standoff nuclear explosion mission.

  7. Feasibility Study on the Use of On-line Multivariate Statistical Process Control for Safeguards Applications in Natural Uranium Conversion Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ladd-Lively, Jennifer L

    2014-01-01

    The objective of this work was to determine the feasibility of using on-line multivariate statistical process control (MSPC) for safeguards applications in natural uranium conversion plants. Multivariate statistical process control is commonly used throughout industry for the detection of faults. For safeguards applications in uranium conversion plants, faults could include the diversion of intermediate products such as uranium dioxide, uranium tetrafluoride, and uranium hexafluoride. This study was limited to a 100 metric ton of uranium (MTU) per year natural uranium conversion plant (NUCP) using the wet solvent extraction method for the purification of uranium ore concentrate. A key component in the multivariate statistical methodology is the Principal Component Analysis (PCA) approach for the analysis of data, development of the base case model, and evaluation of future operations. The PCA approach was implemented through the use of singular value decomposition of the data matrix where the data matrix represents normal operation of the plant. Component mole balances were used to model each of the process units in the NUCP. However, this approach could be applied to any data set. The monitoring framework developed in this research could be used to determine whether or not a diversion of material has occurred at an NUCP as part of an International Atomic Energy Agency (IAEA) safeguards system. This approach can be used to identify the key monitoring locations, as well as locations where monitoring is unimportant. Detection limits at the key monitoring locations can also be established using this technique. Several faulty scenarios were developed to test the monitoring framework after the base case or normal operating conditions of the PCA model were established. In all of the scenarios, the monitoring framework was able to detect the fault. Overall this study was successful at meeting the stated objective.
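
    The PCA monitoring approach described above is commonly implemented by building the model from normal-operation data via singular value decomposition and then scoring new samples with Hotelling's T² and the squared prediction error (Q). The sketch below shows that generic pattern; variable names and the number of retained components are illustrative, and in practice detection limits would be set from the normal-operation statistics.

```python
import numpy as np

def fit_pca_monitor(X_normal, n_components=3):
    """Build a PCA monitoring model from normal-operation data (rows = samples,
    columns = process variables) via singular value decomposition."""
    mu, sigma = X_normal.mean(axis=0), X_normal.std(axis=0)
    Z = (X_normal - mu) / sigma
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                     # loadings
    var = (S[:n_components] ** 2) / (len(Z) - 1)  # score variances
    return mu, sigma, P, var

def spe_and_t2(x, mu, sigma, P, var):
    """Squared prediction error (Q) and Hotelling T^2 for one new sample.
    A diversion of material should show up as an excursion in one or both."""
    z = (x - mu) / sigma
    t = P.T @ z                                 # scores
    z_hat = P @ t                               # reconstruction
    spe = float((z - z_hat) @ (z - z_hat))
    t2 = float(np.sum(t**2 / var))
    return spe, t2
```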

  8. Accuracy for detection of simulated lesions: comparison of fluid-attenuated inversion-recovery, proton density-weighted, and T2-weighted synthetic brain MR imaging

    NASA Technical Reports Server (NTRS)

    Herskovits, E. H.; Itoh, R.; Melhem, E. R.

    2001-01-01

    OBJECTIVE: The objective of our study was to determine the effects of MR sequence (fluid-attenuated inversion-recovery [FLAIR], proton density-weighted, and T2-weighted) and of lesion location on sensitivity and specificity of lesion detection. MATERIALS AND METHODS: We generated FLAIR, proton density-weighted, and T2-weighted brain images with 3-mm lesions using published parameters for acute multiple sclerosis plaques. Each image contained from zero to five lesions that were distributed among cortical-subcortical, periventricular, and deep white matter regions; on either side; and anterior or posterior in position. We presented images of 540 lesions, distributed among 2592 image regions, to six neuroradiologists. We constructed a contingency table for image regions with lesions and another for image regions without lesions (normal). Each table included the following: the reviewer's number (1-6); the MR sequence; the side, position, and region of the lesion; and the reviewer's response (lesion present or absent [normal]). We performed chi-square and log-linear analyses. RESULTS: The FLAIR sequence yielded the highest true-positive rates (p < 0.001) and the highest true-negative rates (p < 0.001). Regions also differed in reviewers' true-positive rates (p < 0.001) and true-negative rates (p = 0.002). The true-positive rate model generated by log-linear analysis contained an additional sequence-location interaction. The true-negative rate model generated by log-linear analysis confirmed these associations, but no higher order interactions were added. CONCLUSION: We developed software with which we can generate brain images of a wide range of pulse sequences and that allows us to specify the location, size, shape, and intrinsic characteristics of simulated lesions. We found that the use of FLAIR sequences increases detection accuracy for cortical-subcortical and periventricular lesions over that associated with proton density- and T2-weighted sequences.

  9. Assessing SaTScan ability to detect space-time clusters in wildfires

    NASA Astrophysics Data System (ADS)

    Costa, Ricardo; Pereira, Mário; Caramelo, Liliana; Vega Orozco, Carmen; Kanevski, Mikhail

    2013-04-01

    Besides classical cluster analysis techniques, which are able to analyse spatial and temporal data, the SaTScan software analyses space-time data using spatial, temporal, or space-time scan statistics. This software requires the spatial coordinates of each fire, but since in the Portuguese Rural Fire Database (PRFD) (Pereira et al., 2011) the location of each fire is the parish where the ignition occurred, the fire coordinates were taken as the coordinates of the parish centroid. Moreover, in general, the northern region is characterized by a large number of small parishes, while the southern region comprises much larger parishes. The objectives of this study are: (i) to test the ability of SaTScan to detect the correct space-time clusters, with respect to spatial and temporal location and size; and (ii) to evaluate the effect of the size of the parishes and of aggregating all fires that occurred in a parish into a single point. Results obtained with a synthetic database, in which clusters were artificially created with different densities, in different regions of the country, and with different sizes and durations, allow us to assess the ability of SaTScan to correctly identify the clusters (location, shape, and spatial and temporal dimension) and to objectively assess the influence of the size of the parishes and of the windows used in space-time detection. Pereira, M. G., Malamud, B. D., Trigo, R. M., and Alves, P. I.: The history and characteristics of the 1980-2005 Portuguese rural fire database, Nat. Hazards Earth Syst. Sci., 11, 3343-3358, doi:10.5194/nhess-11-3343-2011, 2011. This work is supported by European Union Funds (FEDER/COMPETE - Operational Competitiveness Programme) and by national funds (FCT - Portuguese Foundation for Science and Technology) under the project FCOMP-01-0124-FEDER-022692, the project FLAIR (PTDC/AAC-AMB/104702/2008) and the EU 7th Framework Program through FUME (contract number 243888).
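
    The space-time scan idea can be illustrated with a brute-force search over cylindrical windows scored by Kulldorff's Poisson log-likelihood ratio. The sketch below is a simplified stand-in for SaTScan (the null expectation is a crude rate-times-volume term and the candidate windows are supplied by hand), intended only to show conceptually what the software does.

```python
import numpy as np

def poisson_llr(c_in, e_in, c_total):
    """Kulldorff's Poisson log-likelihood ratio for one scanning window:
    c_in observed vs. e_in expected cases inside, out of c_total overall."""
    if c_in <= e_in:
        return 0.0
    c_out, e_out = c_total - c_in, c_total - e_in
    llr = c_in * np.log(c_in / e_in)
    if c_out > 0 and e_out > 0:
        llr += c_out * np.log(c_out / e_out)
    return llr

def scan(points, centers, radii, t_windows, rate):
    """Brute-force space-time scan over cylindrical windows (centre, radius,
    time interval).  `points` holds (x, y, t) ignitions; parish centroids
    would play the role of x, y in the study described above."""
    best_llr, best_win = 0.0, None
    total = len(points)
    for cx, cy in centers:
        d2 = (points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2
        for r in radii:
            for t0, t1 in t_windows:
                inside = (d2 <= r**2) & (points[:, 2] >= t0) & (points[:, 2] < t1)
                e_in = rate * np.pi * r**2 * (t1 - t0)   # crude null expectation
                llr = poisson_llr(int(inside.sum()), e_in, total)
                if llr > best_llr:
                    best_llr, best_win = llr, (cx, cy, r, t0, t1)
    return best_llr, best_win
```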

  10. Hypervelocity Impact (HVI). Volume 8; Tile Small Targets A-1, Ag-1, B-1, and Bg-1

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Targets A-1, Ag-1, B-1, and Bg-1 was to study hypervelocity impacts on the reinforced Shuttle Heat Shield Tiles of the Wing. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  11. Hypervelocity Impact (HVI). Volume 2; WLE Small-Scale Fiberglass Panel Flat Multi-Layer Targets A-1, A-2, and B-1

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Targets A-1, A-2, and B-2 was to study hypervelocity impacts through multi-layered panels simulating Whipple shields on spacecraft. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  12. Hypervelocity Impact (HVI). Volume 6; WLE High Fidelity Specimen Fg(RCC)-2

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Target Fg(RCC)-2 was to study hypervelocity impacts through the reinforced carbon-carbon (RCC) panels of the Wing Leading Edge. Fiberglass was used in place of RCC in the initial tests. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  13. Hypervelocity Impact (HVI). Volume 4; WLE Small-Scale Fiberglass Panel Flat Target C-2

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Target C-2 was to study impacts through the reinforced carbon-carbon (RCC) panels of the Wing Leading Edge. Fiberglass was used in place of RCC in the initial tests. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  14. Hypervelocity Impact (HVI). Volume 5; WLE High Fidelity Specimen Fg(RCC)-1

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Target Fg(RCC)-1 was to study hypervelocity impacts through the reinforced carbon-carbon (RCC) panels of the Wing Leading Edge. Fiberglass was used in place of RCC in the initial tests. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  15. Hypervelocity Impact (HVI). Volume 3; WLE Small-Scale Fiberglass Panel Flat Target C-1

    NASA Technical Reports Server (NTRS)

    Gorman, Michael R.; Ziola, Steven M.

    2007-01-01

    During 2003 and 2004, the Johnson Space Center's White Sands Testing Facility in Las Cruces, New Mexico conducted hypervelocity impact tests on the space shuttle wing leading edge. Hypervelocity impact tests were conducted to determine if Micro-Meteoroid/Orbital Debris impacts could be reliably detected and located using simple passive ultrasonic methods. The objective of Target C-1 was to study hypervelocity impacts on the reinforced carbon-carbon (RCC) panels of the Wing Leading Edge. Fiberglass was used in place of RCC in the initial tests. Impact damage was detected using lightweight, low power instrumentation capable of being used in flight.

  16. Automated acoustic analysis in detection of spontaneous swallows in Parkinson's disease.

    PubMed

    Golabbakhsh, Marzieh; Rajaei, Ali; Derakhshan, Mahmoud; Sadri, Saeed; Taheri, Masoud; Adibi, Peyman

    2014-10-01

    Acoustic monitoring of swallow frequency has become important as the frequency of spontaneous swallowing can be an index for dysphagia and related complications. In addition, it can be employed as an objective quantification of ingestive behavior. Commonly, swallowing complications are manually detected using videofluoroscopy recordings, which require expensive equipment and exposure to radiation. In this study, a noninvasive automated technique is proposed that uses breath and swallowing recordings obtained via a microphone located over the laryngopharynx. Nonlinear diffusion filters were used, in which a scale-space decomposition of the recorded sound at different levels extracts swallows from breath sounds and artifacts. This technique was compared to manual detection of swallows using acoustic signals on a sample of 34 subjects with Parkinson's disease. A speech language pathologist identified five subjects who showed aspiration during the videofluoroscopic swallowing study. The proposed automated method identified swallows with a sensitivity of 86.67 %, a specificity of 77.50 %, and an accuracy of 82.35 %. These results indicate the validity of automated acoustic recognition of swallowing as a fast and efficient approach to objectively estimate spontaneous swallow frequency.
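
    A generic one-dimensional nonlinear (Perona-Malik-style) diffusion filter, of the class named above, smooths low-amplitude breath noise while preserving sharper swallow-related transients. The sketch below is not the authors' scale-space decomposition; parameters and names are illustrative.

```python
import numpy as np

def perona_malik_1d(signal, n_iter=50, kappa=0.1, dt=0.2):
    """1-D nonlinear (Perona-Malik) diffusion: diffusion is suppressed where
    the local gradient is large, so sharp transients are preserved while
    low-amplitude noise is smoothed away."""
    u = signal.astype(float).copy()
    for _ in range(n_iter):
        grad = np.diff(u)                          # forward differences
        c = np.exp(-(grad / kappa) ** 2)           # edge-stopping function
        flux = c * grad
        u[1:-1] += dt * (flux[1:] - flux[:-1])     # divergence of the flux
    return u
```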

  17. The influence of object shape and center of mass on grasp and gaze

    PubMed Central

    Desanghere, Loni; Marotta, Jonathan J.

    2015-01-01

    Recent experiments examining where participants look when grasping an object found that fixations favor the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object’s function and center of mass (COM) location, these investigations have generally used simple symmetrical objects, where COM and horizontal midline overlap. Less research has examined how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the object’s horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the object’s midline by altering the object’s shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects compared with their influence on grasp locations, with fixation locations more sensitive to these manipulations. Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction. PMID:26528207

  18. How Game Location Affects Soccer Performance: T-Pattern Analysis of Attack Actions in Home and Away Matches.

    PubMed

    Diana, Barbara; Zurloni, Valentino; Elia, Massimiliano; Cavalera, Cesare M; Jonsson, Gudberg K; Anguera, M Teresa

    2017-01-01

    The influence of game location on performance has been widely examined in sport contexts. Concerning soccer, game location positively affects the secondary and tertiary levels of performance; however, there is less evidence about its effect on game structure (the primary level of performance). This study aimed to detect the effect of game location on the primary level of performance in soccer. In particular, the objective was to reveal the hidden structures underlying attack actions in both home and away matches played by a top club (Serie A 2012/2013, First Leg). The methodological approach was based on systematic observation, supported by digital recordings and T-pattern analysis. Data were analyzed with THEME 6.0 software. A quantitative analysis, with the nonparametric Mann-Whitney test and descriptive statistics, was carried out to test the hypotheses. A qualitative analysis of complex patterns was performed to obtain in-depth information on the game structure. This study showed that game tactics were significantly different, with home matches characterized by a more structured and varied game than away matches. In particular, a higher number of different patterns, with a higher level of complexity and including more unique behaviors, was detected in home matches than in away ones. No significant differences were found in the number of events coded per game between the two conditions. THEME software, and the corresponding T-pattern detection algorithm, enhance research opportunities by going further than frequency-based analyses, making this method an effective tool in supporting sport performance analysis and training.

  19. How Game Location Affects Soccer Performance: T-Pattern Analysis of Attack Actions in Home and Away Matches

    PubMed Central

    Diana, Barbara; Zurloni, Valentino; Elia, Massimiliano; Cavalera, Cesare M.; Jonsson, Gudberg K.; Anguera, M. Teresa

    2017-01-01

    The influence of game location on performance has been widely examined in sport contexts. Concerning soccer, game location positively affects the secondary and tertiary levels of performance; however, there is less evidence about its effect on game structure (the primary level of performance). This study aimed to detect the effect of game location on the primary level of performance in soccer. In particular, the objective was to reveal the hidden structures underlying the attack actions, in both home and away matches played by a top club (Serie A 2012/2013—First Leg). The methodological approach was based on systematic observation, supported by digital recordings and T-pattern analysis. Data were analyzed with THEME 6.0 software. A quantitative analysis, with the nonparametric Mann–Whitney test and descriptive statistics, was carried out to test the hypotheses. A qualitative analysis of complex patterns was performed to obtain in-depth information on the game structure. This study showed that game tactics were significantly different, with home matches characterized by a more structured and varied game than away matches. In particular, a higher number of different patterns, with a higher level of complexity and including more unique behaviors, was detected in home matches than in away matches. No significant differences were found in the number of events coded per game between the two conditions. THEME software, and the corresponding T-pattern detection algorithm, enhance research opportunities by going beyond frequency-based analyses, making this method an effective tool in supporting sport performance analysis and training. PMID:28878712

  20. Curvature methods of damage detection using digital image correlation

    NASA Astrophysics Data System (ADS)

    Helfrick, Mark N.; Niezrecki, Christopher; Avitabile, Peter

    2009-03-01

    Analytical models have shown that local damage in a structure can be detected by studying changes in the curvature of the structure's displaced shape while under an applied load. In order for damage to be detected, located, and quantified using curvature methods, a spatially dense set of measurement points is required on the structure of interest and the change in curvature must be measurable. Experimental testing done to validate the theory is often plagued by sparse data sets and experimental noise. Furthermore, the type of load, the location and severity of the damage, and the mechanical properties (material and geometry) of the structure have a significant effect on how much the curvature will change. Within this paper, three-dimensional (3D) Digital Image Correlation (DIC) is investigated as one possible method for detecting damage through curvature methods. 3D DIC is a non-contacting full-field measurement technique which uses a stereo pair of digital cameras to capture surface shape. This approach allows for an extremely dense data set across the entire visible surface of an object. A test is performed to validate the approach on an aluminum cantilever beam. A dynamic load is applied to the beam, which allows measurements to be made of the beam's response at each of its first three resonant frequencies, corresponding to the first three bending modes of the structure. DIC measurements are used with damage detection algorithms to predict damage location with varying levels of damage inflicted in the form of a crack with a prescribed depth. The testing demonstrated that this technique will likely only work with structures where a large displaced shape is easily achieved and in cases where the damage is relatively severe. Practical applications and limitations of the technique are discussed.
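
    As a minimal illustration of the curvature idea described above (not the authors' implementation; function and variable names are assumptions), a hedged Python sketch: the displaced or mode shape sampled along the beam is differentiated twice with a central difference, and the absolute change in curvature relative to an undamaged baseline serves as a simple damage indicator.

        import numpy as np

        def curvature(shape, dx):
            # Central-difference second derivative of a displaced/mode shape
            # sampled at uniform spacing dx; endpoints are left at zero.
            kappa = np.zeros_like(shape, dtype=float)
            kappa[1:-1] = (shape[2:] - 2.0 * shape[1:-1] + shape[:-2]) / dx**2
            return kappa

        def damage_index(baseline_shape, damaged_shape, dx):
            # Peaks in the absolute curvature change suggest candidate damage locations.
            return np.abs(curvature(damaged_shape, dx) - curvature(baseline_shape, dx))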

  1. Development of blood vessel searching system for HMS

    NASA Astrophysics Data System (ADS)

    Kandani, Hirofumi; Uenoya, Toshiyuki; Uetsuji, Yasutomo; Nakamachi, Eiji

    2008-08-01

    In this study, we develop a new 3D miniature blood vessel searching system that uses near-infrared LED light and a CMOS camera module with an image processing unit, intended for a health monitoring system (HMS) and a drug delivery system (DDS), applications that require very high performance for automatic micro blood volume extraction and automatic blood examination. Our objective is to fabricate a highly reliable micro detection system by utilizing image capturing, image processing, and micro blood extraction devices. For the searching system to determine 3D blood vessel location, we employ the stereo method, a common photogrammetric method. It uses the optical path principle to recover 3D location from the disparity between two cameras. The principle for blood vessel visualization is derived from the ratio of hemoglobin's absorption of the near-infrared LED light. To obtain a high quality blood vessel image, we adopted an LED with a peak wavelength of 940 nm. The LED is set on the dorsal side of the finger and irradiates the human finger. A blood vessel image is captured by a CMOS camera module, which is set below the palmar side of the finger. 2D blood vessel location can be detected from the luminance distribution along a one-pixel line. To examine the accuracy of our detecting system, we carried out experiments using finger phantoms with blood vessel diameters of 0.5, 0.75, and 1.0 mm, at depths of 0.5 ~ 2.0 mm from the phantom's surface. The estimated depths obtained by our detecting system show good agreement with the given depths, and the viability of this system is confirmed.
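
    A hedged sketch of the two geometric steps described above, assuming rectified cameras and an illustrative luminance-dip criterion (names and units are assumptions, not the paper's code): the vessel centre on a one-pixel scan line is taken as the darkest pixel (haemoglobin absorbs the 940 nm light), and depth follows the classic stereo relation Z = f·B/d.

        import numpy as np

        def vessel_column(line_luminance):
            # Darkest pixel along a one-pixel scan line, taken as the vessel centre.
            return int(np.argmin(line_luminance))

        def depth_from_disparity(focal_px, baseline_mm, disparity_px):
            # Rectified-stereo relation: depth [mm] = focal length [px] * baseline [mm] / disparity [px].
            return focal_px * baseline_mm / disparity_px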

  2. On marginally resolved objects in optical interferometry

    NASA Astrophysics Data System (ADS)

    Lachaume, R.

    2003-03-01

    With the present and soon-to-be breakthrough of optical interferometry, countless objects shall be within reach of interferometers; yet, most of them are expected to remain only marginally resolved with hectometric baselines. In this paper, we tackle the problem of deriving the properties of a marginally resolved object from its optical visibilities. We show that they depend on the moments of the flux distribution of the object: centre, mean angular size, asymmetry, and kurtosis. We also point out that the visibility amplitude is a second-order phenomenon, whereas the phase is a combination of a first-order term, giving the location of the photocentre, and a third-order term, more difficult to detect than the visibility amplitude, giving an asymmetry coefficient of the object. We then demonstrate that optical visibilities are not a good model constraint while the object stays marginally resolved, unless observations are carried out at different wavelengths. Finally, we show an application of this formalism to circumstellar discs.
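
    A schematic rendering, under the usual marginally-resolved (small-argument) expansion, of the orders the abstract refers to; the notation here is illustrative rather than the paper's own, with u = B/λ the spatial frequency and angle brackets denoting flux-weighted moments of the brightness distribution over angular position s:

        |V(u)| \approx 1 - 2\pi^{2}\,\mathrm{Var}(u \cdot s)            % second order: mean angular size
        \phi(u) \approx -2\pi\, u \cdot \langle s \rangle
                        + \tfrac{(2\pi)^{3}}{6}\,\mu_{3}(u \cdot s)     % first-order photocentre + third-order asymmetry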

  3. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery

    PubMed Central

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-01-01

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness. PMID:27023564

  4. Pedestrian Detection and Tracking from Low-Resolution Unmanned Aerial Vehicle Thermal Imagery.

    PubMed

    Ma, Yalong; Wu, Xinkai; Yu, Guizhen; Xu, Yongzheng; Wang, Yunpeng

    2016-03-26

    Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low resolution of the imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.
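
    A minimal sketch of the hybrid-descriptor idea described in the two records above, assuming fixed-size candidate blob patches; the HOG parameters, the low-frequency DCT block, and the function names are illustrative choices, not the authors' configuration.

        import numpy as np
        from scipy.fft import dctn
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        def blob_descriptor(patch):
            # Concatenate HOG features with low-frequency 2D DCT coefficients of the thermal patch.
            h = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
            d = dctn(patch, norm="ortho")[:8, :8].ravel()
            return np.concatenate([h, d])

        def train_blob_classifier(patches, labels):
            # patches: equally sized candidate blobs; labels: 1 = pedestrian, 0 = clutter.
            X = np.stack([blob_descriptor(p) for p in patches])
            return LinearSVC(C=1.0).fit(X, np.asarray(labels))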

  5. Determining the Locations of Brown Dwarfs in Young Star Clusters

    NASA Technical Reports Server (NTRS)

    Porter, Lauren A.

    2005-01-01

    Brown dwarfs are stellar objects with masses less than 0.08 times that of the Sun that are unable to sustain nuclear fusion. Because of the lack of fusion, they are relatively cold, allowing the formation of methane and water molecules in their atmospheres. Brown dwarfs can be detected by examining stars' absorption spectra in the near-infrared to see whether methane and water are present. The objective of this research is to determine the locations of brown dwarfs in Rho Ophiuchus, a star cluster that is only 1 million years old. The cluster was observed in four filters in the near-infrared range using the Wide-Field Infra-Red Camera (WIRC) on the 100" DuPont Telescope and Persson's Auxiliary Nasmyth Infrared Camera (PANIC) on the 6.5-m Magellan Telescope. By comparing the magnitude of a star in each of the four filters, an absorption spectrum can be formed. This project uses standard astronomical techniques to reduce raw frames into final images and perform photometry on them to obtain publishable data. Once this is done, it will be possible to determine the locations and magnitudes of brown dwarfs within the cluster.

  6. A Label Propagation Approach for Detecting Buried Objects in Handheld GPR Data

    DTIC Science & Technology

    2016-04-17

    regions of interest that correspond to locations with anomalous signatures. Second, a classifier (or an ensemble of classifiers) is used to assign a... investigated for almost two decades and several classifiers have been developed. Most of these methods are based on the supervised learning paradigm where... labeled target and clutter signatures are needed to train a classifier to discriminate between the two classes. Typically, large and diverse labeled

  7. A wireless object location detector enabling people with developmental disabilities to control environmental stimulation through simple occupational activities with Nintendo Wii Balance Boards.

    PubMed

    Shih, Ching-Hsiang; Chang, Man-Ling

    2012-01-01

    Recent research has adopted software technology to turn the Nintendo Wii Balance Board into a high-performance standing location detector with a newly developed standing location detection program (SLDP). This study extended SLDP functionality to assess whether two people with developmental disabilities would be able to actively perform simple occupational activities by controlling their favorite environmental stimulation using Nintendo Wii Balance Boards and SLDP software. An ABAB design was adopted in this study to perform the tests. The test results showed that, during the intervention phases, both participants significantly increased their target response (i.e. simple occupational activity) to activate the control system to produce environmental stimulation. The practical and developmental implications of the findings are discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Fusion of Building Information and Range Imaging for Autonomous Location Estimation in Indoor Environments

    PubMed Central

    Kohoutek, Tobias K.; Mautz, Rainer; Wegner, Jan D.

    2013-01-01

    We present a novel approach for autonomous location estimation and navigation in indoor environments using range images and prior scene knowledge from a GIS database (CityGML). What makes this task challenging is the arbitrary relative spatial relation between GIS and Time-of-Flight (ToF) range camera further complicated by a markerless configuration. We propose to estimate the camera's pose solely based on matching of GIS objects and their detected location in image sequences. We develop a coarse-to-fine matching strategy that is able to match point clouds without any initial parameters. Experiments with a state-of-the-art ToF point cloud show that our proposed method delivers an absolute camera position with decimeter accuracy, which is sufficient for many real-world applications (e.g., collision avoidance). PMID:23435055

  9. Spatial language and converseness.

    PubMed

    Burigo, Michele; Coventry, Kenny R; Cangelosi, Angelo; Lynott, Dermot

    2016-12-01

    Typical spatial language sentences describe the location of an object (the located object) in relation to another object (the reference object), as in "The book is above the vase". While it has been suggested that the properties of the located object (the book) are not translated into language because they are irrelevant when exchanging location information, it has been shown that the orientation of the located object affects the production and comprehension of spatial descriptions. In line with the claim that spatial language apprehension involves inferences about relations that hold between objects, it has been suggested that during spatial language apprehension people use the orientation of the located object to evaluate whether the logical property of converseness (e.g., if "the book is above the vase" is true, then "the vase is below the book" must also be true) holds across the objects' spatial relation. In three experiments using sentence acceptability rating tasks we tested this hypothesis and demonstrated that, when converseness is violated, people's acceptability ratings of a scene's description are reduced, indicating that people do take into account geometric properties of the located object and use them to infer logical spatial relations.

  10. DIORAMA Location Type User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Terry, James Russell

    2015-01-29

    The purpose of this report is to present the current design and implementation of the DIORAMA location type object (LocationType) and to provide examples and use cases. The LocationType object is included in the diorama-app package in the diorama::types namespace. Abstractly, the object is intended to capture the full time history of the location of an object or reference point. For example, a location may be specified as a near-Earth orbit in terms of a two-line element set, in which case the location type is capable of propagating the orbit both forward and backward in time to provide a location for any given time. Alternatively, the location may be specified as a fixed set of geodetic coordinates (latitude, longitude, and altitude), in which case the geodetic location of the object is expected to remain constant for all time. From an implementation perspective, the location type is defined as a union of multiple independent objects defined in the DIORAMA tle library. Types presently included in the union are listed and described in subsections below, and all conversions or transformations between these location types are handled by utilities provided by the tle library, with the exception of the "special-values" location type.
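
    The union-of-representations idea described above might be sketched as follows in Python (purely illustrative; these class and method names are not the DIORAMA or tle library API): each representation answers the same "where is the object at time t" question, one by propagating an orbit and one by returning fixed geodetic coordinates.

        from dataclasses import dataclass
        from typing import Union

        @dataclass
        class GeodeticLocation:
            # Fixed geodetic coordinates: the same position is returned for any time.
            latitude_deg: float
            longitude_deg: float
            altitude_m: float

            def position_at(self, t):
                return (self.latitude_deg, self.longitude_deg, self.altitude_m)

        @dataclass
        class TwoLineElementLocation:
            # Near-Earth orbit given as a two-line element set; the position must be
            # propagated (forward or backward) to the requested time by an orbit
            # propagator, which is omitted in this sketch.
            line1: str
            line2: str

            def position_at(self, t):
                raise NotImplementedError("delegate to an orbit propagator such as SGP4")

        LocationType = Union[GeodeticLocation, TwoLineElementLocation]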

  11. Confirming nasogastric tube placement: Is the colorimeter as sensitive and specific as X-ray? A diagnostic accuracy study.

    PubMed

    Mordiffi, Siti Zubaidah; Goh, Mien Li; Phua, Jason; Chan, Yiong-Huak

    2016-09-01

    Delivering enteral nutrition or medications via a nasogastric tube that is inadvertently located in the tracheobronchial tract can cause respiratory complications. Although radiographic examination is accepted as the gold standard for confirming the position of patients' enteral tubes, it is costly, involves risks of radiation, and is not failsafe. Studies using carbon dioxide sensors to detect inadvertent nasogastric tube placements have been conducted in intensive care settings. However, none involved patients in general wards. The objective of this study was to ascertain the diagnostic accuracy of the colorimeter, with radiographic examination as the reference standard, for confirming the location of nasogastric tubes in patients. A prospective observational study of a diagnostic test. This study was conducted in the general wards of an approximately 1100-bed acute care tertiary hospital of an Academic Medical Center in Singapore. Adult patients with nasogastric tubes admitted to the general wards were recruited into the study. The colorimeter was attached to the nasogastric tube to detect the presence of carbon dioxide, suggestive of a tracheobronchial placement. The exact location of the nasogastric tube was subsequently confirmed by a radiographic examination. A total of 192 tests were undertaken. The colorimeter detected carbon dioxide in 29 tested nasogastric tubes, of which radiographic examination confirmed that four tubes were located in the tracheobronchial tract. The colorimeter failed to detect carbon dioxide in one nasogastric tube that was located in the tracheobronchial tract, thus demonstrating a sensitivity of 0.80 [95% CI (0.376, 0.964)]. The colorimeter detected the absence of carbon dioxide in 163 tested nasogastric tubes, in which radiographic examination confirmed 160 gastrointestinal and one tracheobronchial placements, demonstrating a specificity of 0.865 [95% CI (0.808, 0.907)]. The colorimeter detected one tracheobronchial nasogastric tube placement that the radiographic examination had misinterpreted. The study found that the use of the colorimeter in the general ward setting was not 100% sensitive or specific in ascertaining the location of a nasogastric tube as previously reported by many studies undertaken in intensive care settings. This is the first study on the use of a colorimeter to confirm the placement of a nasogastric tube in adult patients in the general ward setting. More research on the use of a colorimeter in the general ward setting and its potential use in certain processes for confirming the placement of a nasogastric tube is warranted. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps.

    PubMed

    Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory.

  13. The roles of scene priming and location priming in object-scene consistency effects

    PubMed Central

    Heise, Nils; Ansorge, Ulrich

    2014-01-01

    Presenting consistent objects in scenes facilitates object recognition as compared to inconsistent objects. Yet the mechanisms by which scenes influence object recognition are still not understood. According to one theory, consistent scenes facilitate visual search for objects at expected places. Here, we investigated two predictions following from this theory: if visual search is responsible for consistency effects, consistency effects could be weaker (1) with better-primed than less-primed object locations, and (2) with less-primed than better-primed scenes. In Experiments 1 and 2, locations of objects were varied within a scene to a different degree (one, two, or four possible locations). In addition, object-scene consistency was studied as a function of progressive numbers of repetitions of the backgrounds. Because repeating locations and backgrounds could facilitate visual search for objects, these repetitions might alter the object-scene consistency effect by lowering location uncertainty. Although we find evidence for a significant consistency effect, we find no clear support for impacts of scene priming or location priming on the size of the consistency effect. Additionally, we find evidence that the consistency effect is dependent on the eccentricity of the target objects. These results point to only small influences of priming on object-scene consistency effects, but all in all the findings can be reconciled with a visual-search explanation of the consistency effect. PMID:24910628

  14. Audiovisual communication of object-names improves the spatial accuracy of recalled object-locations in topographic maps

    PubMed Central

    Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank

    2017-01-01

    Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237

  15. Influence of local objects on hippocampal representations: landmark vectors and memory

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2013-01-01

    The hippocampus is thought to represent nonspatial information in the context of spatial information. An animal can derive both spatial information as well as nonspatial information from the objects (landmarks) it encounters as it moves around in an environment. Here, we demonstrate correlates of both object-derived spatial as well as nonspatial information in the hippocampus of rats foraging in the presence of objects. We describe a new form of CA1 place cells, called landmark-vector cells, that encode spatial locations as a vector relationship to local landmarks. Such landmark vector relationships can be dynamically encoded. Of the 26 CA1 neurons that developed new fields in the course of a day's recording sessions, in 8 cases the new fields were located at a similar distance and direction from a landmark as the initial field was located relative to a different landmark. We also demonstrate object-location memory in the hippocampus. When objects were removed from an environment or moved to new locations, a small number of neurons in CA1 and CA3 increased firing at the locations where the objects used to be. In some neurons, this increase occurred only in one location, indicating object+place conjunctive memory; in other neurons the increase in firing was seen at multiple locations where an object used to be. Taken together, these results demonstrate that the spatially restricted firing of hippocampal neurons encodes multiple types of information regarding the relationship between an animal's location and the location of objects in its environment. PMID:23447419

  16. Glucose improves object-location binding in visual-spatial working memory.

    PubMed

    Stollery, Brian; Christian, Leonie

    2016-02-01

    There is evidence that glucose temporarily enhances cognition and that processes dependent on the hippocampus may be particularly sensitive. As the hippocampus plays a key role in binding processes, we examined the influence of glucose on memory for object-location bindings. This study examines how glucose modifies performance on an object-location memory task, a task that draws heavily on hippocampal function. Thirty-one participants received 30 g glucose or placebo in a single 1-h session. After seeing between 3 and 10 objects (words or shapes) at different locations in a 9 × 9 matrix, participants attempted to immediately reproduce the display on a blank 9 × 9 matrix. Blood glucose was measured before drink ingestion, mid-way through the session, and at the end of the session. Glucose significantly improves object-location binding (d = 1.08) and location memory (d = 0.83), but not object memory (d = 0.51). Increasing working memory load impairs object memory and object-location binding, and word-location binding is more successful than shape-location binding, but the glucose improvement is robust across all difficulty manipulations. Within the glucose group, higher levels of circulating glucose are correlated with better binding memory and with remembering the locations of successfully recalled objects. The glucose improvements identified are consistent with a facilitative impact on hippocampal function. The findings are discussed in the context of the relationship between cognitive processes, hippocampal function, and the implications for glucose's mode of action.

  17. CT-guided automated detection of lung tumors on PET images

    NASA Astrophysics Data System (ADS)

    Cui, Yunfeng; Zhao, Binsheng; Akhurst, Timothy J.; Yan, Jiayong; Schwartz, Lawrence H.

    2008-03-01

    The calculation of standardized uptake values (SUVs) in tumors on serial [18F]2-fluoro-2-deoxy-D-glucose (18F-FDG) positron emission tomography (PET) images is often used for the assessment of therapy response. We present a computerized method that automatically detects lung tumors on 18F-FDG PET/Computed Tomography (CT) images using both anatomic and metabolic information. First, on CT images, relevant organs, including lung, bone, liver and spleen, are automatically identified and segmented based on their locations and intensity distributions. Hot spots (SUV >= 1.5) on 18F-FDG PET images are then labeled using connected component analysis. The resultant "hot objects" (geometrically connected hot spots in three dimensions) that fall into, reside at the edges of, or are in the vicinity of the lungs are considered as tumor candidates. To determine true lesions, further analyses are conducted, including reduction of tumor candidates by masking out hot objects within CT-determined normal organs, and analysis of candidate tumors' locations, intensity distributions and shapes on both CT and PET. The method was applied to 18F-FDG PET/CT scans from 9 patients, on which 31 target lesions had been identified by a nuclear medicine radiologist during a Phase II lung cancer clinical trial. Out of 31 target lesions, 30 (97%) were detected by the computer method. However, sensitivity and specificity were not estimated because not all lesions had been marked up in the clinical trial. The method effectively excluded the hot spots caused by the mediastinum, liver, spleen, skeletal muscle and bone metastasis.
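
    A hedged sketch of the hot-spot labelling step described above (thresholding at SUV >= 1.5, 3D connected-component analysis, and masking against a CT-derived lung segmentation); the minimum-size filter and all names are illustrative additions, not the authors' exact pipeline.

        import numpy as np
        from scipy import ndimage

        def candidate_tumours(suv_volume, lung_mask, suv_threshold=1.5, min_voxels=5):
            # Label geometrically connected hot spots in 3D and keep those that fall
            # inside or touch the CT-determined lung mask.
            labels, n_components = ndimage.label(suv_volume >= suv_threshold)
            candidates = []
            for lab in range(1, n_components + 1):
                component = labels == lab
                if component.sum() < min_voxels:
                    continue
                if np.any(component & lung_mask):
                    candidates.append(component)
            return candidates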

  18. Tc1 mouse model of trisomy-21 dissociates properties of short- and long-term recognition memory.

    PubMed

    Hall, Jessica H; Wiseman, Frances K; Fisher, Elizabeth M C; Tybulewicz, Victor L J; Harwood, John L; Good, Mark A

    2016-04-01

    The present study examined memory function in Tc1 mice, a transchromosomic model of Down syndrome (DS). Tc1 mice demonstrated an unusual delay-dependent deficit in recognition memory. More specifically, Tc1 mice showed intact immediate (30-s), impaired short-term (10-min) and intact long-term (24-h) memory for objects. A similar pattern was observed for olfactory stimuli, confirming the generality of the pattern across sensory modalities. The specificity of the behavioural deficits in Tc1 mice was confirmed using APP-overexpressing mice that showed the opposite pattern of object memory deficits. In contrast to object memory, Tc1 mice showed no deficit in either immediate or long-term memory for object-in-place information. Similarly, Tc1 mice showed no deficit in short-term memory for object-location information. The latter result indicates that Tc1 mice were able to detect and react to spatial novelty at the same delay interval that was sensitive to an object novelty recognition impairment. These results demonstrate that (1) novelty detection per se and (2) the encoding of visuo-spatial information were not disrupted in adult Tc1 mice. The authors conclude that the task-specific nature of the short-term recognition memory deficit suggests that the trisomy of genes on human chromosome 21 in Tc1 mice impacts on (perirhinal) cortical systems supporting short-term object and olfactory recognition memory. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  19. Morphology and astrometry of Infrared-Faint Radio Sources

    NASA Astrophysics Data System (ADS)

    Middelberg, Enno; Norris, Ray; Randall, Kate; Mao, Minnie; Hales, Christopher

    2008-10-01

    Infrared-Faint Radio Sources, or IFRS, are an unexpected class of object discovered in the Australia Telescope Large Area Survey, ATLAS. They are compact 1.4GHz radio sources with no visible counterparts in co-located (relatively shallow) Spitzer infrared and optical images. We have detected two of these objects with VLBI, indicating the presence of an AGN. These observations and our ATLAS data indicate that IFRS are extended on scales of arcseconds, and we wish to image their morphologies to obtain clues about their nature. These observations will also help us to select optical counterparts from very deep, and hence crowded, optical images which we have proposed. With these data in hand, we will be able to compare IFRS to known object types and to apply for spectroscopy to obtain their redshifts.

  20. Beyond CMB cosmic variance limits on reionization with the polarized Sunyaev-Zel'dovich effect

    NASA Astrophysics Data System (ADS)

    Meyers, Joel; Meerburg, P. Daniel; van Engelen, Alexander; Battaglia, Nicholas

    2018-05-01

    Upcoming cosmic microwave background (CMB) surveys will soon make the first detection of the polarized Sunyaev-Zel'dovich effect, the linear polarization generated by the scattering of CMB photons on the free electrons present in collapsed objects. Measurement of this polarization along with knowledge of the electron density of the objects allows a determination of the quadrupolar temperature anisotropy of the CMB as viewed from the space-time location of the objects. Maps of these remote temperature quadrupoles have several cosmological applications. Here we propose a new application: the reconstruction of the cosmological reionization history. We show that with quadrupole measurements out to redshift 3, constraints on the mean optical depth can be improved by an order of magnitude beyond the CMB cosmic variance limit.

  1. Fluorescence tomography characterization for sub-surface imaging with protoporphyrin IX

    PubMed Central

    Kepshire, Dax; Davis, Scott C.; Dehghani, Hamid; Paulsen, Keith D.; Pogue, Brian W.

    2009-01-01

    Optical imaging of fluorescent objects embedded in a tissue simulating medium was characterized using non-contact based approaches to fluorescence remittance imaging (FRI) and sub-surface fluorescence diffuse optical tomography (FDOT). Using Protoporphyrin IX as a fluorescent agent, experiments were performed on tissue phantoms comprised of typical in-vivo tumor to normal tissue contrast ratios, ranging from 3.5:1 up to 10:1. It was found that tomographic imaging was able to recover interior inclusions with high contrast relative to the background; however, simple planar fluorescence imaging provided a superior contrast to noise ratio. Overall, FRI performed optimally when the object was located on or close to the surface and, perhaps most importantly, FDOT was able to recover specific depth information about the location of embedded regions. The results indicate that an optimal system for localizing embedded fluorescent regions should combine fluorescence reflectance imaging for high sensitivity and sub-surface tomography for depth detection, thereby allowing more accurate localization in all three directions within the tissue. PMID:18545571

  2. Crowding by a single bar: probing pattern recognition mechanisms in the visual periphery.

    PubMed

    Põder, Endel

    2014-11-06

    Whereas visual crowding does not greatly affect the detection of the presence of simple visual features, it heavily inhibits combining them into recognizable objects. Still, crowding effects have rarely been directly related to general pattern recognition mechanisms. In this study, pattern recognition mechanisms in visual periphery were probed using a single crowding feature. Observers had to identify the orientation of a rotated T presented briefly in a peripheral location. Adjacent to the target, a single bar was presented. The bar was either horizontal or vertical and located in a random direction from the target. It appears that such a crowding bar has very strong and regular effects on the identification of the target orientation. The observer's responses are determined by approximate relative positions of basic visual features; exact image-based similarity to the target is not important. A version of the "standard model" of object recognition with second-order features explains the main regularities of the data. © 2014 ARVO.

  3. Symmetric Objects Become Special in Perception Because of Generic Computations in Neurons

    PubMed Central

    Pramod, R. T.

    2017-01-01

    Symmetry is a salient visual property: It is easy to detect and influences perceptual phenomena from segmentation to recognition. Yet researchers know little about its neural basis. Using recordings from single neurons in monkey IT cortex, we asked whether symmetry—being an emergent property—induces nonlinear interactions between object parts. Remarkably, we found no such deviation: Whole-object responses were always the sum of responses to the object’s parts, regardless of symmetry. The only defining characteristic of symmetric objects was that they were more distinctive compared with asymmetric objects. This was a consequence of neurons preferring the same part across locations within an object. Just as mixing diverse paints produces a homogeneous overall color, adding heterogeneous parts within an asymmetric object renders it indistinct. In contrast, adding identical parts within a symmetric object renders it distinct. This distinctiveness systematically predicted human symmetry judgments, and it explains many previous observations about symmetry perception. Thus, symmetry becomes special in perception despite being driven by generic computations at the level of single neurons. PMID:29219748

  4. Contribution of flow-volume curves to the detection of central airway obstruction*

    PubMed Central

    Raposo, Liliana Bárbara Perestrelo de Andrade e; Bugalho, António; Gomes, Maria João Marques

    2013-01-01

    OBJECTIVE: To assess the sensitivity and specificity of flow-volume curves in detecting central airway obstruction (CAO), and to determine whether their quantitative and qualitative criteria are associated with the location, type and degree of obstruction. METHODS: Over a four-month period, we consecutively evaluated patients for whom bronchoscopy was indicated. Over a one-week period, all patients underwent clinical evaluation, flow-volume curve, and bronchoscopy, and completed a dyspnea scale. Four reviewers, blinded to quantitative and clinical data and to bronchoscopy results, classified the morphology of the curves. A fifth reviewer determined the morphological criteria, as well as the quantitative criteria. RESULTS: We studied 82 patients, 36 (44%) of whom had CAO. The sensitivity and specificity of the flow-volume curves in detecting CAO were, respectively, 88.9% and 91.3% (quantitative criteria) and 30.6% and 93.5% (qualitative criteria). The most prevalent quantitative criteria in our sample were FEF50%/FIF50% ≥ 1, in 83% of patients, and FEV1/PEF ≥ 8 mL·L–1·min–1, in 36%, both being associated with the type, location, and degree of obstruction (p < 0.05). There was concordance among the reviewers as to the presence of CAO. There was a relationship between the degree of obstruction and dyspnea. CONCLUSIONS: The quantitative criteria should always be calculated for flow-volume curves in order to detect CAO, because of the low sensitivity of the qualitative criteria. Both FEF50%/FIF50% ≥ 1 and FEV1/PEF ≥ 8 mL·L–1·min–1 were associated with the location, type and degree of obstruction. PMID:24068266
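
    The two quantitative criteria named in the abstract reduce to simple ratio checks; a hedged sketch, with units and variable names that are my assumptions (FEV1 in litres, PEF in litres per minute, and the two mid-vital-capacity flows in matching units):

        def cao_quantitative_flags(fef50, fif50, fev1_l, pef_l_min):
            # Criterion 1: expiratory over inspiratory flow at 50% of vital capacity >= 1.
            ratio_flag = (fef50 / fif50) >= 1.0
            # Criterion 2: FEV1 (converted to mL) divided by PEF (L/min) >= 8.
            empey_like_flag = (1000.0 * fev1_l / pef_l_min) >= 8.0
            return ratio_flag, empey_like_flag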

  5. Enhanced Detection of Sea-Disposed Man-Made Objects in Backscatter Data

    NASA Astrophysics Data System (ADS)

    Edwards, M.; Davis, R. B.

    2016-12-01

    The Hawai'i Undersea Military Munitions Assessment (HUMMA) project developed software to increase data visualization capabilities applicable to seafloor reflectivity datasets acquired by a variety of bottom-mapping sonar systems. The purpose of these improvements is to detect different intensity values within an arbitrary amplitude range that may be associated with relative target reflectivity, as well as to extend the overall amplitude range across which detailed dynamic contrast may be effectively displayed. The backscatter dataset used to develop this software imaged tens of thousands of reflective targets resting on the seabed that were systematically sea disposed south of Oahu, Hawaii, around the end of World War II in waters ranging from 300-600 meters depth. Human-occupied and remotely operated vehicles conducted ground-truth video and photographic reconnaissance of thousands of these reflective targets, documenting and geo-referencing long curvilinear trails of items including munitions, paint cans, airplane parts, scuttled ships, cars and bundled anti-submarine nets. Edwards et al. [2012] determined that most individual trails consist of objects of one particular type. The software described in this presentation, in combination with the ground-truth images, was developed to help recognize different types of objects based on reflectivity, size, and shape from altitudes of tens of meters above the seabed. The fundamental goal of the software is to facilitate rapid underway detection and geo-location of specific sea-disposed objects so their impact on the environment can be assessed.

  6. Optical polarization of high-energy BL Lacertae objects

    NASA Astrophysics Data System (ADS)

    Hovatta, T.; Lindfors, E.; Blinov, D.; Pavlidou, V.; Nilsson, K.; Kiehlmann, S.; Angelakis, E.; Fallah Ramazani, V.; Liodakis, I.; Myserlis, I.; Panopoulou, G. V.; Pursimo, T.

    2016-12-01

    Context. We investigate the optical polarization properties of high-energy BL Lac objects using data from the RoboPol blazar monitoring program and the Nordic Optical Telescope. Aims: We wish to understand if there are differences between the BL Lac objects that have been detected with the current-generation TeV instruments and those objects that have not yet been detected. Methods: We used a maximum-likelihood method to investigate the optical polarization fraction and its variability in these sources. In order to study the polarization position angle variability, we calculated the time derivative of the electric vector position angle (EVPA) change. We also studied the spread in the Stokes Q/I-U/I plane and rotations in the polarization plane. Results: The mean polarization fraction of the TeV-detected BL Lacs is 5%, while the non-TeV sources show a higher mean polarization fraction of 7%. This difference in polarization fraction disappears when the dilution by the unpolarized light of the host galaxy is accounted for. The TeV sources show somewhat lower fractional polarization variability amplitudes than the non-TeV sources. Also the fraction of sources with a smaller spread in the Q/I-U/I plane and a clumped distribution of points away from the origin, possibly indicating a preferred polarization angle, is larger in the TeV than in the non-TeV sources. These differences between TeV and non-TeV samples seem to arise from differences between intermediate and high spectral peaking sources instead of the TeV detection. When the EVPA variations are studied, the rate of EVPA change is similar in both samples. We detect significant EVPA rotations in both TeV and non-TeV sources, showing that rotations can occur in high spectral peaking BL Lac objects when the monitoring cadence is dense enough. Our simulations show that we cannot exclude a random walk origin for these rotations. Conclusions: These results indicate that there are no intrinsic differences in the polarization properties of the TeV-detected and non-TeV-detected high-energy BL Lac objects. This suggests that the polarization properties are not directly related to the TeV-detection, but instead the TeV loudness is connected to the general flaring activity, redshift, and the synchrotron peak location. The polarization curve data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/596/A78
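
    For reference, the fractional polarization and EVPA analysed above follow from the standard Stokes-parameter definitions; a minimal sketch (array names are illustrative, not the RoboPol pipeline):

        import numpy as np

        def polarization_fraction(I, Q, U):
            # Linear polarization fraction p = sqrt(Q^2 + U^2) / I.
            return np.hypot(Q, U) / I

        def evpa_deg(Q, U):
            # Electric vector position angle chi = 0.5 * arctan2(U, Q), in degrees.
            return 0.5 * np.degrees(np.arctan2(U, Q))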

  7. How high is visual short-term memory capacity for object layout?

    PubMed

    Sanocki, Thomas; Sellers, Eric; Mittelstadt, Jeff; Sulman, Noah

    2010-05-01

    Previous research measuring visual short-term memory (VSTM) suggests that the capacity for representing the layout of objects is fairly high. In four experiments, we further explored the capacity of VSTM for layout of objects, using the change detection method. In Experiment 1, participants retained most of the elements in displays of 4 to 8 elements. In Experiments 2 and 3, with up to 20 elements, participants retained many of them, reaching a capacity of 13.4 stimulus elements. In Experiment 4, participants retained much of a complex naturalistic scene. In most cases, increasing display size caused only modest reductions in performance, consistent with the idea of configural, variable-resolution grouping. The results indicate that participants can retain a substantial amount of scene layout information (objects and locations) in short-term memory. We propose that this is a case of remote visual understanding, where observers' ability to integrate information from a scene is paramount.

  8. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real-world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.
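
    As a generic illustration of recovering a static point's 3D position from two calibrated views (a standard linear triangulation, not the paper's helicopter-specific estimator; matrix and point names are assumptions):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            # P1, P2: 3x4 camera projection matrices for the two frames;
            # x1, x2: (u, v) pixel coordinates of the same static scene point.
            A = np.stack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]  # inhomogeneous 3D point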

  9. Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor

    NASA Astrophysics Data System (ADS)

    Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi

    2017-12-01

    The advantage of image classification is that it provides information about the earth's surface, such as land cover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms, such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, land cover classification can also be obtained with object-based image classification, in which image segmentation is driven by parameters such as scale, form, colour, smoothness and compactness. This research aims to compare land cover classification results, and the corresponding change detection, between the parallelepiped pixel-based method and an object-based classification method. The study area is Bogor, observed over a 20-year period from 1996 to 2016. This region is a rapidly developing urban area whose land cover changes continuously, which makes time-series land cover information of this region particularly valuable.
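
    The parallelepiped classifier mentioned above amounts to per-band box tests on each pixel's spectral values; a hedged sketch (class boxes and names are illustrative, not the study's training data):

        import numpy as np

        def parallelepiped_classify(pixels, class_boxes):
            # pixels: (N, bands) array of spectral values.
            # class_boxes: {class_name: (low, high)} with per-band lower/upper bounds.
            labels = np.full(len(pixels), "unclassified", dtype=object)
            for name, (low, high) in class_boxes.items():
                inside = np.all((pixels >= low) & (pixels <= high), axis=1)
                labels[inside & (labels == "unclassified")] = name  # first matching box wins
            return labels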

  10. Target tracking and surveillance by fusing stereo and RFID information

    NASA Astrophysics Data System (ADS)

    Raza, Rana H.; Stockman, George C.

    2012-06-01

    Ensuring security in high risk areas such as an airport is an important but complex problem. Effectively tracking personnel, containers, and machines is a crucial task. Moreover, security and safety require understanding the interaction of persons and objects. Computer vision (CV) has been a classic tool; however, variable lighting, imaging, and random occlusions present difficulties for real-time surveillance, resulting in erroneous object detection and trajectories. Determining object ID via CV at any instant of time in a crowded area is computationally prohibitive, yet the trajectories of personnel and objects should be known in real time. Radio Frequency Identification (RFID) can be used to reliably identify target objects and can even locate targets at coarse spatial resolution, while CV provides fuzzy features for target ID at finer resolution. Our research demonstrates benefits obtained when most objects are "cooperative" by being RFID tagged. Fusion provides a method to simplify the correspondence problem in 3D space. A surveillance system can query for unique object ID as well as tag ID information, such as target height, texture, shape and color, which can greatly enhance scene analysis. We extend geometry-based tracking so that intermittent information on ID and location can be used in determining a set of trajectories of N targets over T time steps. We show that partial-target information obtained through RFID can reduce computation time (by 99.9% in some cases) and also increase the likelihood of producing correct trajectories. We conclude that real-time decision-making should be possible if the surveillance system can integrate information effectively between the sensor level and activity understanding level.

  11. Short communication: Conservation of Streptococcus uberis adhesion molecule and the sua gene in strains of Streptococcus uberis isolated from geographically diverse areas.

    PubMed

    Yuan, Ying; Dego, Oudessa Kerro; Chen, Xueyan; Abadin, Eurife; Chan, Shangfeng; Jory, Lauren; Kovacevic, Steven; Almeida, Raul A; Oliver, Stephen P

    2014-12-01

    The objective was to identify and sequence the sua gene (GenBank no. DQ232760; http://www.ncbi.nlm.nih.gov/genbank/) and detect Streptococcus uberis adhesion molecule (SUAM) expression by Western blot using serum from naturally S. uberis-infected cows in strains of S. uberis isolated in milk from cows with mastitis from geographically diverse areas of the world. All strains evaluated yielded a 4.4-kb sua-containing PCR fragment that was subsequently sequenced. Deduced SUAM AA sequences from those S. uberis strains evaluated shared >97% identity. The pepSUAM sequence located at the N terminus of SUAM was >99% identical among strains of S. uberis. Streptococcus uberis adhesion molecule expression was detected in all strains of S. uberis tested. These results suggest that sua is ubiquitous among strains of S. uberis isolated from diverse geographic locations and that SUAM is immunogenic. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Next Generation RFID-Based Medical Service Management System Architecture in Wireless Sensor Network

    NASA Astrophysics Data System (ADS)

    Tolentino, Randy S.; Lee, Kijeong; Kim, Yong-Tae; Park, Gil-Cheol

    Radio Frequency Identification (RFID) and Wireless Sensor Network (WSN) are two important wireless technologies that have a wide variety of applications and provide unlimited future potential, most especially in healthcare systems. RFID is used to detect the presence and location of objects, while WSN is used to sense and monitor the environment. Integrating RFID with WSN not only provides the identity and location of an object but also provides information regarding the condition of the object carrying the sensor-enabled RFID tag. However, there is no flexible and robust communication infrastructure to integrate these devices into an emergency care setting. An efficient wireless communication substrate for medical devices that addresses ad hoc or fixed network formation, naming and discovery, transmission efficiency of data, data security and authentication, as well as filtration and aggregation of vital sign data, needs to be studied and analyzed. This paper proposes an efficient next-generation architecture for an RFID-based medical service management system in WSN that possesses the essential elements of future medical applications, integrated with existing medical practices and technologies: real-time remote monitoring, medication administration, and patient status tracking assisted by embedded wearable wireless sensors integrated in a wireless sensor network.

  13. ALMA Observations of the Archetypal “Hot Core” That Is Not: Orion-KL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orozco-Aguilera, M. T.; Zapata, Luis A.; Hirota, Tomoya

    We present sensitive high angular resolution (∼0.″1–0.″3) continuum Atacama Large Millimeter/Submillimeter Array (ALMA) observations of the archetypal hot core located in the Orion Kleinmann-Low (KL) region. The observations were made in five different spectral bands (bands 3, 6, 7, 8, and 9) covering a very broad range of frequencies (149–658 GHz). Apart from the well-known millimeter emitting objects located in this region (Orion Source I and BN), we report the first submillimeter detection of three compact continuum sources (ALMA1–3) in the vicinity of the Orion-KL hot molecular core. These three continuum objects have spectral indices between 1.47 and 1.56, and brightness temperatures between 100 and 200 K at 658 GHz, suggesting that we are seeing moderate, optically thick dust emission with possible grain growth. However, as these objects are not associated with warm molecular gas, and some of them are farther out from the molecular core, we thus conclude that they cannot heat the molecular core. This result favors the hypothesis that the hot molecular core in Orion-KL is heated externally.

  14. The Clusters AgeS Experiment (CASE). Variable Stars in the Field of the Globular Cluster NGC 6362

    NASA Astrophysics Data System (ADS)

    Kaluzny, J.; Thompson, I. B.; Rozyczka, M.; Pych, W.; Narloch, W.

    2014-12-01

    The field of the globular cluster NGC 6362 was monitored between 1995 and 2009 in a search for variable stars. BV light curves were obtained for 69 periodic variable stars, including 34 known RR Lyr stars, 10 known objects of other types and 25 newly detected variable stars. Among the latter we identified 18 proper-motion members of the cluster: seven detached eclipsing binaries (DEBs), six SX Phe stars, two W UMa binaries, two spotted red giants, and a very interesting eclipsing binary composed of two red giants - the first example of such a system found in a globular cluster. Five of the DEBs are located at the turnoff region, and the remaining two are redward of the lower main sequence. Eighty-four objects from the central 9×9 arcmin² of the cluster were found in the region of cluster blue stragglers. Of these, 70 are proper-motion (PM) members of NGC 6362 (including all SX Phe and two W UMa stars), and five are field stars. The remaining nine objects lacking PM information are located at the very core of the cluster, and as such they are likely genuine blue stragglers.

  15. Navy/Marine Corps innovative science and technology developments for future enhanced mine detection capabilities

    NASA Astrophysics Data System (ADS)

    Holloway, John H., Jr.; Witherspoon, Ned H.; Miller, Richard E.; Davis, Kenn S.; Suiter, Harold R.; Hilton, Russell J.

    2000-08-01

    JMDT is a Navy/Marine Corps 6.2 Exploratory Development program that is closely coordinated with the 6.4 COBRA acquisition program. The objective of the program is to develop innovative science and technology to enhance future mine detection capabilities. Prior to transition to acquisition, the COBRA ATD was extremely successful in demonstrating a passive airborne multispectral video sensor system operating in the tactical Pioneer unmanned aerial vehicle (UAV), combined with an integrated ground station subsystem to detect and locate minefields from the surf zone to inland areas. JMDT is investigating advanced technology solutions for future enhancements in minefield detection capability beyond the current COBRA ATD demonstrated capabilities. JMDT has recently been delivered next-generation, innovative hardware which was specified by the Coastal Systems Station and developed under contract. This hardware includes an agile-tuning multispectral, polarimetric, digital video camera and advanced multi-wavelength laser illumination technologies to extend the same sorts of multispectral detections from a UAV into the night and over shallow water and other difficult littoral regions. One of these illumination devices is an ultra-compact, highly efficient near-IR laser diode array. The other is a multi-wavelength range-gateable laser. Additionally, in conjunction with this new technology, algorithm enhancements are being developed in JMDT for future naval capabilities which will outperform the already impressive record of automatic detection of minefields demonstrated by the COBRA ATD.

  16. ARAM: an automated image analysis software to determine rosetting parameters and parasitaemia in Plasmodium samples.

    PubMed

    Kudella, Patrick Wolfgang; Moll, Kirsten; Wahlgren, Mats; Wixforth, Achim; Westerhausen, Christoph

    2016-04-18

    Rosetting is associated with severe malaria and a primary cause of death in Plasmodium falciparum infections. Detailed understanding of this adhesive phenomenon may enable the development of new therapies interfering with rosette formation. For this, it is crucial to determine parameters such as rosetting and parasitaemia of laboratory strains or patient isolates, a bottleneck in malaria research due to the time-consuming and error-prone manual analysis of specimens. Here, the automated, free, stand-alone analysis software automated rosetting analyzer for micrographs (ARAM) to determine rosetting rate, rosette size distribution as well as parasitaemia with a convenient graphical user interface is presented. Automated rosetting analyzer for micrographs is an executable with two operation modes for automated identification of objects on images. The default mode detects red blood cells and fluorescently labelled parasitized red blood cells by combining an intensity-gradient with a threshold filter. The second mode determines object location and size distribution from a single contrast method. The obtained results are compared with standardized manual analysis. Automated rosetting analyzer for micrographs calculates statistical confidence probabilities for rosetting rate and parasitaemia. Automated rosetting analyzer for micrographs analyses 25 cell objects per second, reliably delivering identical results compared to manual analysis. For the first time rosette size distribution is determined in a precise and quantitative manner employing ARAM in combination with established inhibition tests. Additionally, ARAM measures the essential observables parasitaemia, rosetting rate and size as well as location of all detected objects and provides confidence intervals for the determined observables. No other existing software solution offers this range of function. The second, non-malaria specific, analysis mode of ARAM offers the functionality to detect arbitrary objects. Automated rosetting analyzer for micrographs has the capability to push malaria research to a more quantitative and statistically significant level with increased reliability due to operator independence. As an installation file for Windows 7, 8.1 and 10 is available for free, ARAM offers a novel open and easy-to-use platform for the malaria community to elucidate rosetting.
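
    The default detection mode described above combines an intensity gradient with a threshold filter to find cell-like objects. The abstract gives no implementation details, so the following Python sketch is only an illustration of that general idea; the use of OpenCV, the function name, and all parameter values are assumptions, not part of ARAM:

        # Illustrative sketch (not ARAM itself): find round cell-like blobs by
        # combining an intensity-gradient map with a global threshold.
        import cv2
        import numpy as np

        def detect_cells(gray, grad_thresh=30, min_area=50):
            """Return centroids and areas of candidate cell objects in a grayscale image."""
            # Morphological gradient highlights cell boundaries.
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            grad = cv2.morphologyEx(gray, cv2.MORPH_GRADIENT, kernel)
            edges = (grad > grad_thresh).astype(np.uint8)
            # Close gaps so each cell becomes one filled blob, then label blobs.
            closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
            return [(tuple(centroids[i]), int(stats[i, cv2.CC_STAT_AREA]))
                    for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

    Parasitaemia and rosetting statistics would then be derived by intersecting such blobs with the fluorescence channel, which is outside the scope of this sketch.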

  17. Exploration of Objective Functions for Optimal Placement of Weather Stations

    NASA Astrophysics Data System (ADS)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2016-12-01

    Many regions of Earth lack ground-based sensing of weather variables. For example, most countries in Sub-Saharan Africa do not have reliable weather station networks. This absence of sensor data has many consequences ranging from public safety (poor prediction and detection of severe weather events), to agriculture (lack of crop insurance), to science (reduced quality of world-wide weather forecasts, climate change measurement, etc.). The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to locate each weather station. We can formulate this as the following optimization problem: Determine a set of N sites that jointly optimize the value of an objective function. The purpose of this poster is to propose and assess several objective functions. In addition to standard objectives (e.g., minimizing the summed squared error of interpolated values over the entire region), we consider objectives that minimize the maximum error over the region and objectives that optimize the detection of extreme events. An additional issue is that each station measures more than 10 variables—how should we balance the accuracy of our interpolated maps for each variable? Weather sensors inevitably drift out of calibration or fail altogether. How can we incorporate robustness to failed sensors into our network design? Another important requirement is that the network should make it possible to detect failed sensors by comparing their readings with those of other stations. How can this requirement be met? Finally, we provide an initial assessment of the computational cost of optimizing these various objective functions. We invite everyone to join the discussion at our poster by proposing additional objectives, identifying additional issues to consider, and expanding our bibliography of relevant papers. A prize (derived from grapes grown in Oregon) will be awarded for the most insightful contribution to the discussion!
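
    As a concrete illustration of one of the standard objectives mentioned above (minimizing the summed squared error of interpolated values), the following Python sketch greedily selects N sites from a candidate grid. The nearest-neighbour interpolator, the synthetic field, and the greedy strategy are assumptions for illustration only, not the TAHMO design procedure:

        # Greedy placement sketch: pick N station sites that minimize the summed
        # squared error of nearest-neighbour interpolation over a set of field samples.
        import numpy as np

        def interpolation_sse(station_idx, grid_xy, field):
            """SSE when every grid point takes the field value observed at its nearest chosen station."""
            stations = grid_xy[station_idx]                              # (k, 2) site coordinates
            d = np.linalg.norm(grid_xy[:, None, :] - stations[None, :, :], axis=2)
            nearest = np.argmin(d, axis=1)                               # nearest station per grid point
            return float(np.sum((field - field[np.asarray(station_idx)[nearest]]) ** 2))

        def greedy_placement(grid_xy, field, n_stations):
            """Greedily add the candidate site that most reduces the interpolation SSE."""
            chosen = []
            for _ in range(n_stations):
                best = min((i for i in range(len(grid_xy)) if i not in chosen),
                           key=lambda i: interpolation_sse(chosen + [i], grid_xy, field))
                chosen.append(best)
            return chosen

        # Toy usage: 400 candidate points with a smooth synthetic field, 5 stations.
        rng = np.random.default_rng(1)
        grid_xy = rng.uniform(0, 100, size=(400, 2))
        field = np.sin(grid_xy[:, 0] / 20.0) + 0.5 * np.cos(grid_xy[:, 1] / 15.0)
        print(greedy_placement(grid_xy, field, 5))

    The other objectives mentioned above (minimax error, extreme-event detection, robustness to failed sensors) would replace interpolation_sse with the corresponding score; the selection loop itself is unchanged.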

  18. Determination of Flaw Type and Location Using an Expert Module in Ultrasonic Nondestructive Testing for Weld Inspection

    NASA Astrophysics Data System (ADS)

    Shahriari, D.; Zolfaghari, A.; Masoumi, F.

    2011-01-01

    Nondestructive evaluation encompasses nondestructive testing, nondestructive inspection, and nondestructive examination. The aim is to determine some characteristic of the object or to determine whether the object contains irregularities, discontinuities, or flaws. Ultrasound-based inspection techniques are used extensively throughout industry for detection of flaws in engineering materials. The range and variety of imperfections encountered is large, and critical assessment of location, size, orientation and type is often difficult. In addition, increasing quality requirements of new standards and codes of practice relating to fitness for purpose are placing higher demands on operators. Applying expert knowledge-based analysis in ultrasonic examination is a powerful tool that can help assure safety, quality, and reliability; increase productivity; decrease liability; and save money. In this research, an expert module system is coupled with ultrasonic examination (A-scan procedure) to determine and evaluate the type and location of flaws embedded during welding of parts. The processing module of this expert system is implemented based on the EN standard to classify welding defects and acceptance conditions and to measure their location via the echo-static pattern and image processing. The designed module introduces a new system that can automate evaluation of the results of the A-scan method according to the EN standard. It can simultaneously recognize the number and type of defects, and determine flaw position during each scan.
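
    The abstract does not give the EN-standard decision rules, so the sketch below only illustrates the general shape of an expert module that maps A-scan echo features to a flaw class; the feature names, thresholds, and class labels are placeholders, not the authors' rules or the EN acceptance criteria:

        # Illustrative expert-module sketch: classify a weld indication from A-scan
        # echo features (placeholder rules, not the EN standard).
        from dataclasses import dataclass

        @dataclass
        class Echo:
            amplitude_db: float     # echo amplitude relative to a reference level (dB)
            depth_mm: float         # sound-path depth of the reflector
            length_mm: float        # indication length from probe movement
            static_pattern: str     # simplified echo-static pattern label, e.g. "sharp", "broad"

        def classify_flaw(echo: Echo) -> str:
            """Toy rule base mapping echo features to a flaw class."""
            if echo.static_pattern == "sharp" and echo.length_mm < 3:
                return "pore / isolated inclusion"
            if echo.static_pattern == "broad" and echo.length_mm >= 10:
                return "lack of fusion / elongated slag line"
            if echo.amplitude_db > 6:
                return "crack-like indication: reject, verify by other NDT"
            return "acceptable indication"

        print(classify_flaw(Echo(amplitude_db=8.0, depth_mm=12.5, length_mm=15.0,
                                 static_pattern="broad")))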

  19. Discovery of B ring propellers in Cassini UVIS and ISS

    NASA Astrophysics Data System (ADS)

    Sremcevic, M.; Stewart, G.; Albers, N.; Esposito, L. W.

    2011-12-01

    One of the successes of planetary ring theory has been the theoretical prediction of gravitational signatures of bodies embedded in the rings, and their subsequent detection in Cassini data. Bodies within the rings perturb the nearby ring material, and the orbital shear forms a two-armed structure, dubbed a "propeller", which is centered at the embedded body. Although direct evidence of the embedded body or moonlet is still lacking, the observation of their propeller signatures has proved to be an indispensable method for extending our knowledge of ring structure and dynamics. So far, propellers have been successfully detected within Saturn's A ring in two populations: a group of small and numerous propellers interior to the Encke gap forming belts, and far less numerous but larger propellers exterior to Pan's orbit. Although there have been hints of propellers present within the B ring, or even the C ring, their detection is less certain (e.g. no single propeller has been seen twice, nor has the ubiquitous two-armed structure been observed). In this paper we present evidence for the existence of propellers in Saturn's B ring by combining data from the Cassini Ultraviolet Imaging Spectrograph (UVIS) and Imaging Science Subsystem (ISS) experiments. A single object is observed over 5 years of Cassini data. The object is seen as a very elongated bright stripe (40 degrees wide) in unlit Cassini images, and as a dark stripe in lit geometries. In total we report observing the feature in images at 18 different epochs between 2005 and 2010. In UVIS occultations we observe the feature as an optical depth depletion in 14 out of 93 occultation cuts at corotating longitudes compatible with the imaging data. Combining the available Cassini data we infer that the object is a partial gap located at a = 112,921 km, embedded in the high optical depth region of the B ring. The gap moves at the Kepler speed appropriate for its radial location. Radial offsets of the gap locations in UVIS occultations are consistent with an asymmetric propeller shape. The asymmetry of the observed shape is most likely a consequence of the strong surface mass density gradient, as the feature is located at an edge between high and relatively low optical depth. From the radial separation of the propeller wings we estimate that the embedded body is about 1.5 km in size. We estimate that there are possibly a dozen to 100 other propeller objects in Saturn's B ring. Since the discovered body sits at an edge of a dense ringlet within the B ring, this suggests a novel mechanism for the hitherto unexplained irregular B ring structure of alternating high and low optical depth ringlets. We propose that the long-standing search for the mechanism that maintains the B ring's irregular structure may find its explanation in the presence of many embedded bodies that shepherd the individual B ring ringlets.

  20. Sexual Orientation and Spatial Position Effects on Selective Forms of Object Location Memory

    ERIC Educational Resources Information Center

    Rahman, Qazi; Newland, Cherie; Smyth, Beatrice Mary

    2011-01-01

    Prior research has demonstrated robust sex and sexual orientation-related differences in object location memory in humans. Here we show that this sexual variation may depend on the spatial position of target objects and the task-specific nature of the spatial array. We tested the recovery of object locations in three object arrays (object…

  1. VLSI processors for signal detection in SETI

    NASA Technical Reports Server (NTRS)

    Duluk, J. F.; Linscott, I. R.; Peterson, A. M.; Burr, J.; Ekroot, B.; Twicken, J.

    1989-01-01

    The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.

  2. VLSI processors for signal detection in SETI.

    PubMed

    Duluk, J F; Linscott, I R; Peterson, A M; Burr, J; Ekroot, B; Twicken, J

    1989-01-01

    The objective of the Search for Extraterrestrial Intelligence (SETI) is to locate an artificially created signal coming from a distant star. This is done in two steps: (1) spectral analysis of an incoming radio frequency band, and (2) pattern detection for narrow-band signals. Both steps are computationally expensive and require the development of specially designed computer architectures. To reduce the size and cost of the SETI signal detection machine, two custom VLSI chips are under development. The first chip, the SETI DSP Engine, is used in the spectrum analyzer and is specially designed to compute Discrete Fourier Transforms (DFTs). It is a high-speed arithmetic processor that has two adders, one multiplier-accumulator, and three four-port memories. The second chip is a new type of Content-Addressable Memory. It is the heart of an associative processor that is used for pattern detection. Both chips incorporate many innovative circuits and architectural features.
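
    The two processing steps described in the SETI entries above (spectral analysis followed by narrow-band pattern detection) can be illustrated in a few lines of NumPy. The real system performs the second step with an associative (content-addressable memory) pattern detector in fixed-point hardware, so this floating-point sketch with an assumed threshold is only a conceptual stand-in:

        # Conceptual two-step sketch: (1) DFT of the sampled band, (2) flag narrow-band
        # bins whose power stands far above a robust estimate of the noise level.
        import numpy as np

        def narrowband_candidates(samples, threshold_sigma=10.0):
            spectrum = np.fft.rfft(samples)               # step 1: spectral analysis
            power = np.abs(spectrum) ** 2
            noise = np.median(power)                      # robust noise estimate
            sigma = 1.4826 * np.median(np.abs(power - noise))
            return np.flatnonzero(power > noise + threshold_sigma * sigma)  # step 2

        rng = np.random.default_rng(0)
        x = rng.normal(size=4096) + 0.5 * np.sin(2 * np.pi * 0.123 * np.arange(4096))
        # The injected tone near bin 0.123 * 4096 ≈ 504 should be among the candidates.
        print(narrowband_candidates(x))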

  3. An e-mail survey identified unpublished studies for systematic reviews.

    PubMed

    Reveiz, Ludovic; Cardona, Andres Felipe; Ospina, Edgar Guillermo; de Agular, Sylvia

    2006-07-01

    A large number of trials remain unpublished or difficult to locate for systematic reviews. The objective of this article was to determine the usefulness of making e-mail contact with authors of clinical trials and literature reviews found in MEDLINE to identify unpublished or difficult-to-locate Randomized Controlled Trials (RCTs). A structured search for detecting RCTs in MEDLINE was made from January 1999 to June 2003; a questionnaire was sent to a random sample of 525 authors by e-mail. The RCTs obtained were sought in MEDLINE, EMBASE, the Cochrane Controlled Trials Register, LILACS, and ongoing trial registers. Forty (7.6%) replies were received; 10 previously undescribed and unpublished RCTs and 21 unregistered ongoing RCTs were found. The most frequently given reasons for not publishing were: lack of time for finalizing the statistical analysis and preparing the manuscript, contractual obligations with the pharmaceutical industry, methodological errors in design, and editorial rejection. Using the e-mail addresses of authors detected by the search in electronic databases could contribute toward detecting potentially relevant ongoing or unpublished RCTs, enabling rapid, straightforward, low-cost systematic reviews; in addition, the results of this study support the need for universal registration of all studies at their inception.

  4. Characterization of Acoustic Emission Parameters During Testing of Metal Liner Reinforced with Fully Resin Impregnated CNG Cylinder

    NASA Astrophysics Data System (ADS)

    Kenok, R.; Jomdecha, C.; Jirarungsatian, C.

    The aim of this paper is to study the acoustic emission (AE) parameters obtained from CNG cylinders during pressurization. AE from flaw propagation, material integrity, and pressurization of the cylinder was the main focus of the characterization. CNG cylinders conforming to ISO 11439, of the fully resin-wrapped, metal-liner type, were tested by hydrostatic pressurization. The pressure was increased in steps up to 1.1 times the operating pressure. Two AE sensors with a resonance frequency of 150 kHz were mounted on the cylinder wall to detect AE throughout the testing. From the experimental results, AE could be detected from the pressurization rate, material integrity, and flaw propagation in the cylinder wall. AE parameters including amplitude, count, energy (MARSE), duration and rise time were analyzed to distinguish the AE data. The results show that the AE of flaw propagation was different in character from that of pressurization. In particular, AE detected from flaws in the resin wrap and in the metal liner differed significantly. Using both AE sensors, the flaw position could be located accurately with a linear (one-dimensional) location pattern. The error was less than ±5 cm.
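
    For reference, linear (one-dimensional) AE source location between two sensors follows from the arrival-time difference; a generic LaTeX statement of the standard relation, with symbols that are assumptions rather than the authors' notation:

        x = \frac{D - v\,\Delta t}{2}, \qquad \Delta t = t_2 - t_1

    where x is the distance of the source from the sensor that detects the event first, D is the sensor spacing, and v is the wave propagation speed.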

  5. A novel real-time health monitoring system for unmanned vehicles

    NASA Astrophysics Data System (ADS)

    Zhang, David C.; Ouyang, Lien; Qing, Peter; Li, Irene

    2008-04-01

    Real-time monitoring of the status of in-service structures such as unmanned vehicles can provide invaluable information for detecting damage to the structures in time. The unmanned vehicles can then be maintained and repaired promptly if such damage is found. One typical cause of damage to unmanned vehicles is impact, from bumping into obstacles or being hit by objects such as hostile fire. This paper introduces a novel impact event sensing system that can detect the location of impact events and their force-time history. The system consists of a piezoelectric sensor network, the hardware platform and the analysis software. The new customized battery-powered impact event sensing system supports up to 64-channel parallel data acquisition. It features an innovative low-power hardware trigger circuit that monitors all 64 channels simultaneously. The system is in sleep mode most of the time. When an impact event happens, the system wakes up in microseconds and detects the impact location and corresponding force-time history. The system can be combined with the SMART sensing system to further evaluate the impact damage severity.

  6. AN OFF-CENTERED ACTIVE GALACTIC NUCLEUS IN NGC 3115

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menezes, R. B.; Steiner, J. E.; Ricci, T. V., E-mail: robertobm@astro.iag.usp.br

    2014-11-20

    NGC 3115 is an S0 galaxy that has always been considered to have a pure absorption-line spectrum. Some recent studies have detected a compact radio-emitting nucleus in this object, coinciding with the photometric center and with a candidate for the X-ray nucleus. This is evidence of the existence of a low-luminosity active galactic nucleus (AGN) in the galaxy, although no emission line has ever been observed. We report the detection of an emission-line spectrum of a type 1 AGN in NGC 3115, with an Hα luminosity of L_Hα = (4.2 ± 0.4) × 10^37 erg s^-1. Our analysis revealed that this AGN is located at a projected distance of ∼0.″29 ± 0.″05 (corresponding to ∼14.3 ± 2.5 pc) from the stellar bulge center, which is coincident with the kinematic center of this object's stellar velocity map. The black hole corresponding to the observed off-centered AGN may form a binary system with a black hole located at the stellar bulge center. However, it is also possible that the displaced black hole is the merged remnant of the binary system coalescence, after the "kick" caused by the asymmetric emission of gravitational waves. We propose that certain features in the stellar velocity dispersion map are the result of perturbations caused by the off-centered AGN.
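
    For reference, the projected separation quoted above follows from the small-angle relation between the measured angular offset and the distance to the galaxy implied by the numbers above (roughly 10 Mpc); generic symbols, not the authors' notation:

        d_{\mathrm{proj}} \simeq \theta\,D = \frac{\theta['']}{206{,}265}\,D

    With θ ≈ 0.29 arcsec and D ≈ 10 Mpc this gives d_proj ≈ 14 pc, consistent with the ∼14.3 pc quoted above.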

  7. Higher Level Visual Cortex Represents Retinotopic, Not Spatiotopic, Object Location

    PubMed Central

    Kanwisher, Nancy

    2012-01-01

    The crux of vision is to identify objects and determine their locations in the environment. Although initial visual representations are necessarily retinotopic (eye centered), interaction with the real world requires spatiotopic (absolute) location information. We asked whether higher level human visual cortex—important for stable object recognition and action—contains information about retinotopic and/or spatiotopic object position. Using functional magnetic resonance imaging multivariate pattern analysis techniques, we found information about both object category and object location in each of the ventral, dorsal, and early visual regions tested, replicating previous reports. By manipulating fixation position and stimulus position, we then tested whether these location representations were retinotopic or spatiotopic. Crucially, all location information was purely retinotopic. This pattern persisted when location information was irrelevant to the task, and even when spatiotopic (not retinotopic) stimulus position was explicitly emphasized. We also conducted a “searchlight” analysis across our entire scanned volume to explore additional cortex but again found predominantly retinotopic representations. The lack of explicit spatiotopic representations suggests that spatiotopic object position may instead be computed indirectly and continually reconstructed with each eye movement. Thus, despite our subjective impression that visual information is spatiotopic, even in higher level visual cortex, object location continues to be represented in retinotopic coordinates. PMID:22190434

  8. Enhancing source location protection in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Chen, Juan; Lin, Zhengkui; Wu, Di; Wang, Bailing

    2015-12-01

    Wireless sensor networks are widely deployed in the internet of things to monitor valuable objects. Once an object is monitored, the sensor nearest to the object, known as the source, periodically informs the base station about the object's information. Attackers can therefore capture the object simply by localizing the source. Thus, many protocols have been proposed to secure the source location. However, in this paper we show that typical source location protection protocols generate phantom locations that are not only near the source but also highly localized. As a result, attackers can trace the source easily from these phantom locations. To address these limitations, we propose a protocol to enhance source location protection (SLE). With phantom locations far away from the source and widely distributed, SLE improves source location anonymity significantly. Theoretical analysis and simulation results show that our SLE provides strong source location privacy preservation, and the average safety period increases by nearly one order of magnitude compared with existing work, at low communication cost.

  9. Spatiotemporal distribution of location and object effects in the electromyographic activity of upper extremity muscles during reach-to-grasp

    PubMed Central

    Rouse, Adam G.

    2016-01-01

    In reaching to grasp an object, proximal muscles that act on the shoulder and elbow classically have been viewed as transporting the hand to the intended location, while distal muscles that act on the fingers simultaneously shape the hand to grasp the object. Prior studies of electromyographic (EMG) activity in upper extremity muscles therefore have focused, by and large, either on proximal muscle activity during reaching to different locations or on distal muscle activity as the subject grasps various objects. Here, we examined the EMG activity of muscles from the shoulder to the hand, as monkeys reached and grasped in a task that dissociated location and object. We quantified the extent to which variation in the EMG activity of each muscle depended on location, on object, and on their interaction—all as a function of time. Although EMG variation depended on both location and object beginning early in the movement, an early phase of substantial location effects in muscles from proximal to distal was followed by a later phase in which object effects predominated throughout the extremity. Interaction effects remained relatively small. Our findings indicate that neural control of reach-to-grasp may occur largely in two sequential phases: the first, serving to project the entire upper extremity toward the intended location, and the second, acting predominantly to shape the entire extremity for grasping the object. PMID:27009156

  10. Object formation in visual working memory: Evidence from object-based attention.

    PubMed

    Zhou, Jifan; Zhang, Haihang; Ding, Xiaowei; Shui, Rende; Shen, Mowei

    2016-09-01

    We report on how visual working memory (VWM) forms intact perceptual representations of visual objects using sub-object elements. Specifically, when objects were divided into fragments and sequentially encoded into VWM, the fragments were involuntarily integrated into objects in VWM, as evidenced by the occurrence of both positive and negative object-based attention effects: In Experiment 1, when subjects' attention was cued to a location occupied by the VWM object, a target presented at the location of that object was perceived as occurring earlier than one presented at the location of a different object. In Experiment 2, responses to a target were significantly slower when a distractor was presented at the same location as the cued object. These results suggest that object fragments can be integrated into objects within VWM in a manner similar to that of visual perception. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Attribute conjunctions and the part configuration advantage in object category learning.

    PubMed

    Saiki, J; Hummel, J E

    1996-07-01

    Five experiments demonstrated that in object category learning people are particularly sensitive to conjunctions of part shapes and relative locations. Participants learned categories defined by a part's shape and color (part-color conjunctions) or by a part's shape and its location relative to another part (part-location conjunctions). The statistical properties of the categories were identical across these conditions, as were the salience of color and relative location. Participants were better at classifying objects defined by part-location conjunctions than objects defined by part-color conjunctions. Subsequent experiments revealed that this effect was not due to the specific color manipulation or the role of location per se. These results suggest that the shape bias in object categorization is at least partly due to sensitivity to part-location conjunctions and suggest a new processing constraint on category learning.

  12. OH masers towards IRAS 19092+0841

    NASA Astrophysics Data System (ADS)

    Edris, K. A.; Fuller, G. A.; Etoka, S.; Cohen, R. J.

    2017-12-01

    Context. Maser emission is a strong tool for studying high-mass star-forming regions and their evolutionary stages. OH masers in particular can trace the circumstellar material around protostars and determine their magnetic field strengths at milliarcsecond resolution. Aims: We seek to image OH maser emission towards high-mass protostellar objects to determine their evolutionary stages and to locate the detected maser emission within the process of high-mass star formation. Methods: In 2007, we surveyed OH maser emission towards 217 high-mass protostellar objects to study its presence. In this paper, we present follow-up MERLIN observations of ground-state OH maser emission towards one of these objects, IRAS 19092+0841. Results: We detect emission from the two main OH spectral lines, 1665 and 1667 MHz, close to the central object. We determine the positions and velocities of the OH maser features. The masers are distributed over a region of 5'' corresponding to 22 400 AU (or 0.1 pc) at a distance of 4.48 kpc. The polarization properties of the OH maser features are determined as well. We identify three Zeeman pairs from which we infer a magnetic field strength of 4.4 mG pointing towards the observer. Conclusions: The relatively small velocity spread and relatively wide spatial distribution of the OH maser features support the suggestion that this object could be in an early evolutionary state, before the appearance of disks, jets or outflows.

  13. Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth

    PubMed Central

    Finlayson, Nonie J.; Golomb, Julie D.

    2016-01-01

    A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features (Golomb, Kupitz, & Thiemann, 2014), such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information – not position-in-depth – seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. PMID:27468654

  14. Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth.

    PubMed

    Finlayson, Nonie J; Golomb, Julie D

    2016-10-01

    A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information - not position-in-depth - seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Very high energy gamma-ray emission detected from PKS 1440-389 with H.E.S.S.

    NASA Astrophysics Data System (ADS)

    Hofmann, W.

    2012-04-01

    The BL Lac object PKS 1440-389, located at a tentative redshift of z = 0.065 (6dF Galaxy Survey, Jones, D.H. et al. MNRAS 355, 747-763, 2004), has been reported as a hard (Γ = 1.75 ± 0.05), bright, and steady extragalactic source at GeV energies in the Fermi-LAT catalogue (2FGL J1443.9-3908, P.L. Nolan et al., 2012, ApJS, 199, 31). The extrapolation of the Fermi-LAT spectrum to very high energies (VHE; E > 100 GeV), together with its brightness in the radio and X-ray bands, makes this BL Lac object a good candidate for VHE emission.

  16. Detecting special nuclear material using muon-induced neutron emission

    NASA Astrophysics Data System (ADS)

    Guardincerri, Elena; Bacon, Jeffrey; Borozdin, Konstantin; Matthew Durham, J.; Fabritius, Joseph, II; Hecht, Adam; Milner, Edward C.; Miyadera, Haruo; Morris, Christopher L.; Perry, John; Poulson, Daniel

    2015-07-01

    The penetrating ability of cosmic ray muons makes them an attractive probe for imaging dense materials. Here, we describe experimental results from a new technique that uses neutrons generated by cosmic-ray muons to identify the presence of special nuclear material (SNM). Neutrons emitted from SNM are used to tag muon-induced fission events in actinides and laminography is used to form images of the stopping material. This technique allows the imaging of SNM-bearing objects tagged using muon tracking detectors located above or to the side of the objects, and may have potential applications in warhead verification scenarios. During the experiment described here we did not attempt to distinguish the type or grade of the SNM.

  17. Human face detection using motion and color information

    NASA Astrophysics Data System (ADS)

    Kim, Yang-Gyun; Bang, Man-Won; Park, Soon-Young; Choi, Kyoung-Ho; Hwang, Jeong-Hyun

    2008-02-01

    In this paper, we present a hardware implementation of a face detector for surveillance applications. To obtain a computationally cheap and fast algorithm with minimal memory requirements, motion and skin color information are fused. More specifically, a newly appeared object is first extracted by comparing the average Hue and Saturation values of the background image and the current image. Then, the result of skin color filtering of the current image is combined with the newly appeared object region. Finally, labeling is performed to locate the true face region. The proposed system is implemented on an Altera Cyclone2 using Quartus II 6.1 and ModelSim 6.1. For the hardware description language (HDL), Verilog-HDL is used.
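
    A software analogue of the pipeline described above (background Hue/Saturation comparison to find a newly appeared object, skin-colour filtering, then connected-component labelling) is sketched below in Python/OpenCV purely for illustration; the original is a Verilog-HDL hardware design, and all thresholds and colour ranges here are assumptions:

        # Illustrative software analogue of the described pipeline (the original is HDL).
        import cv2
        import numpy as np

        def detect_face_region(background_bgr, frame_bgr, hs_diff_thresh=20,
                               skin_lo=(0, 40, 60), skin_hi=(25, 180, 255)):
            bg_hsv = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2HSV)
            fr_hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            # 1. Newly appeared object: large Hue/Saturation change against the background.
            hs_diff = np.abs(bg_hsv[..., :2].astype(int) - fr_hsv[..., :2].astype(int)).sum(axis=2)
            motion_mask = (hs_diff > hs_diff_thresh).astype(np.uint8) * 255
            # 2. Skin-colour filter on the current frame.
            skin_mask = cv2.inRange(fr_hsv, np.array(skin_lo, np.uint8), np.array(skin_hi, np.uint8))
            # 3. Combine and label; keep the largest blob as the candidate face region.
            combined = cv2.bitwise_and(motion_mask, skin_mask)
            n, labels, stats, _ = cv2.connectedComponentsWithStats(combined)
            if n <= 1:
                return None
            largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
            x, y, w, h = (int(v) for v in stats[largest, :4])
            return (x, y, w, h)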

  18. Using an auditory sensory substitution device to augment vision: evidence from eye movements.

    PubMed

    Wright, Thomas D; Margolis, Aaron; Ward, Jamie

    2015-03-01

    Sensory substitution devices convert information normally associated with one sense into another sense (e.g. converting vision into sound). This is often done to compensate for an impaired sense. The present research uses a multimodal approach in which both natural vision and sound-from-vision ('soundscapes') are simultaneously presented. Although there is a systematic correspondence between what is seen and what is heard, we introduce a local discrepancy between the signals (the presence of a target object that is heard but not seen) that the participant is required to locate. In addition to behavioural responses, the participants' gaze is monitored with eye-tracking. Although the target object is only presented in the auditory channel, behavioural performance is enhanced when visual information relating to the non-target background is presented. In this instance, vision may be used to generate predictions about the soundscape that enhances the ability to detect the hidden auditory object. The eye-tracking data reveal that participants look for longer in the quadrant containing the auditory target even when they subsequently judge it to be located elsewhere. As such, eye movements generated by soundscapes reveal the knowledge of the target location that does not necessarily correspond to the actual judgment made. The results provide a proof of principle that multimodal sensory substitution may be of benefit to visually impaired people with some residual vision and, in normally sighted participants, for guiding search within complex scenes.

  19. Visual performance for trip hazard detection when using incandescent and LED miner cap lamps.

    PubMed

    Sammarco, John J; Gallagher, Sean; Reyes, Miguel

    2010-04-01

    Accident data for 2003-2007 indicate that slip, trip, and falls (STFs) are the second leading accident class (17.8%, n=2,441) of lost-time injuries in underground mining. Proper lighting plays a critical role in enabling miners to detect STF hazards in this environment. Often, the only lighting available to the miner is from a cap lamp worn on the miner's helmet. The focus of this research was to determine if the spectral content of light from light-emitting diode (LED) cap lamps enabled visual performance improvements for the detection of tripping hazards as compared to incandescent cap lamps that are traditionally used in underground mining. A secondary objective was to determine the effects of aging on visual performance. The visual performance of 30 subjects was quantified by measuring each subject's speed and accuracy in detecting objects positioned on the floor both in the near field, at 1.83 meters, and far field, at 3.66 meters. Near field objects were positioned at 0 degrees and +/-20 degrees off axis, while far field objects were positioned at 0 degrees and +/-10 degrees off axis. Three age groups were designated: group A consisted of subjects 18 to 25 years old, group B consisted of subjects 40 to 50 years old, and group C consisted of subjects 51 years and older. Results of the visual performance comparison for a commercially available LED, a prototype LED, and an incandescent cap lamp indicate that the location of objects on the floor, the type of cap lamp used, and subject age all had significant influences on the time required to identify potential trip hazards. The LED-based cap lamps enabled detection times that were an average of 0.96 seconds faster compared to the incandescent cap lamp. Use of the LED cap lamps resulted in average detection times that were about 13.6% faster than those recorded for the incandescent cap lamp. The visual performance differences between the commercially available LED and prototype LED cap lamp were not statistically significant. It can be inferred from this data that the spectral content from LED-based cap lamps could enable significant visual performance improvements for miners in the detection of trip hazards. Published by Elsevier Ltd.

  20. Self-initiated object-location memory in young and older adults.

    PubMed

    Berger-Mandelbaum, Anat; Magen, Hagit

    2017-11-20

    The present study explored self-initiated object-location memory in ecological contexts, an aspect of memory that is largely absent from the research literature. Young and older adults memorized object-location associations they selected themselves or object-location associations provided to them, and elaborated on the strategy they used when selecting the locations themselves. Retrieval took place 30 min and 1 month after encoding. The results showed an age-related decline in both self-initiated and provided object-location memory. Older adults benefited from self-initiation more than young adults when tested after 30 min, while the benefit was equal when tested after 1 month. Furthermore, elaboration enhanced memory only in older adults, and only after 30 min. Both age groups used deep encoding strategies on the majority of the trials, but the percentage was lower in older adults. Overall, the study demonstrates the processes involved in self-initiated object-location memory, which is an essential part of everyday functioning.

  1. Environmental asbestos exposure sources in Korea

    PubMed Central

    2016-01-01

    Background: Because of the long asbestos-related disease latencies (10–50 years), detection, diagnosis, and epidemiologic studies require asbestos exposure history. However, environmental asbestos exposure source (EAES) data are lacking. Objectives: To survey the available data for past EAES and supplement these data with interviews. Methods: We constructed an EAES database using a literature review and interviews of experts, former traders, and workers. Exposure sources by time period and type were visualized using a geographic information system (ArcGIS), web-based mapping (Google Maps), and OpenWeatherMap. The data were mounted in the GIS to show the exposure source location and trend. Results: The majority of asbestos mines, factories, and consumption was located in Chungnam; Gyeonggi, Busan, and Gyeongnam; and Gyeonggi, Daejeon, and Busan, respectively. Shipbuilding and repair companies were mostly located in Busan and Gyeongnam. Conclusions: These tools might help evaluate past exposure from EAES and estimate the future asbestos burden in Korea. PMID:27726756

  2. Planck 2015 results: XXVI. The Second Planck Catalogue of Compact Sources

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Argüeso, F.; ...

    2016-09-20

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission; it supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. Finally, the improved data processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  3. Planck 2015 results: XXVI. The Second Planck Catalogue of Compact Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, P. A. R.; Aghanim, N.; Argüeso, F.

    The Second Planck Catalogue of Compact Sources is a list of discrete objects detected in single-frequency maps from the full duration of the Planck mission; it supersedes previous versions. It consists of compact sources, both Galactic and extragalactic, detected over the entire sky. Compact sources detected in the lower frequency channels are assigned to the PCCS2, while at higher frequencies they are assigned to one of two subcatalogues, the PCCS2 or PCCS2E, depending on their location on the sky. The first of these (PCCS2) covers most of the sky and allows the user to produce subsamples at higher reliabilities than the target 80% integral reliability of the catalogue. The second (PCCS2E) contains sources detected in sky regions where the diffuse emission makes it difficult to quantify the reliability of the detections. Both the PCCS2 and PCCS2E include polarization measurements, in the form of polarized flux densities, or upper limits, and orientation angles for all seven polarization-sensitive Planck channels. Finally, the improved data processing of the full-mission maps and their reduced noise levels allow us to increase the number of objects in the catalogue, improving its completeness for the target 80% reliability as compared with the previous versions, the PCCS and the Early Release Compact Source Catalogue (ERCSC).

  4. A Passive Learning Sensor Architecture for Multimodal Image Labeling: An Application for Social Robots.

    PubMed

    Gutiérrez, Marco A; Manso, Luis J; Pandya, Harit; Núñez, Pedro

    2017-02-11

    Object detection and classification have countless applications in human-robot interaction systems. They are necessary skills for autonomous robots that perform tasks in household scenarios. Despite the great advances in deep learning and computer vision, social robots performing non-trivial tasks usually spend most of their time finding and modeling objects. Working in real scenarios means dealing with constant environment changes and relatively low-quality sensor data due to the distance at which objects are often found. Ambient intelligence systems equipped with different sensors can also benefit from the ability to find objects, enabling them to inform humans about their location. For these applications to succeed, systems need to detect the objects that may potentially contain other objects, working with relatively low-resolution sensor data. A passive learning architecture for sensors has been designed in order to take advantage of multimodal information, obtained using an RGB-D camera and trained semantic language models. The main contribution of the architecture lies in the improvement of the performance of the sensor under conditions of low resolution and high light variations using a combination of image labeling and word semantics. The tests performed on each of the stages of the architecture compare this solution with current research labeling techniques for the application of an autonomous social robot working in an apartment. The results obtained demonstrate that the proposed sensor architecture outperforms state-of-the-art approaches.

  5. 3D-Web-GIS RFID location sensing system for construction objects.

    PubMed

    Ko, Chien-Ho

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency.

  6. 3D-Web-GIS RFID Location Sensing System for Construction Objects

    PubMed Central

    2013-01-01

    Construction site managers could benefit from being able to visualize on-site construction objects. Radio frequency identification (RFID) technology has been shown to improve the efficiency of construction object management. The objective of this study is to develop a 3D-Web-GIS RFID location sensing system for construction objects. An RFID 3D location sensing algorithm combining Simulated Annealing (SA) and a gradient descent method is proposed to determine target object location. In the algorithm, SA is used to stabilize the search process and the gradient descent method is used to reduce errors. The locations of the analyzed objects are visualized using the 3D-Web-GIS system. A real construction site is used to validate the applicability of the proposed method, with results indicating that the proposed approach can provide faster, more accurate, and more stable 3D positioning results than other location sensing algorithms. The proposed system allows construction managers to better understand worksite status, thus enhancing managerial efficiency. PMID:23864821
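
    The two 3D-Web-GIS entries above describe combining Simulated Annealing (to stabilize the global search) with gradient descent (to reduce the remaining error) for 3D positioning. The Python sketch below only illustrates that general combination for range-based positioning; the residual model, cooling schedule, step sizes, and reader-range inputs are assumptions for illustration and are not taken from the paper:

        # Illustrative SA + gradient-descent sketch for 3D positioning from reader-tag ranges.
        import numpy as np

        def residual(p, readers, ranges):
            """Sum of squared differences between modelled and measured reader-tag distances."""
            return np.sum((np.linalg.norm(readers - p, axis=1) - ranges) ** 2)

        def locate(readers, ranges, iters_sa=2000, iters_gd=300, seed=0):
            rng = np.random.default_rng(seed)
            p = readers.mean(axis=0)                     # start at the reader centroid
            best, best_cost, T = p.copy(), residual(p, readers, ranges), 1.0
            for _ in range(iters_sa):                    # SA: stabilize the global search
                cand = p + rng.normal(scale=T, size=3)
                dc = residual(cand, readers, ranges) - residual(p, readers, ranges)
                if dc < 0 or rng.random() < np.exp(-dc / max(T, 1e-9)):
                    p = cand
                    if residual(p, readers, ranges) < best_cost:
                        best, best_cost = p.copy(), residual(p, readers, ranges)
                T *= 0.998                               # geometric cooling
            p = best
            for _ in range(iters_gd):                    # gradient descent: refine the estimate
                d = np.linalg.norm(readers - p, axis=1)
                grad = 2 * np.sum(((d - ranges) / np.maximum(d, 1e-9))[:, None] * (p - readers), axis=0)
                p -= 0.02 * grad
            return p

        # Toy usage: four readers at known positions, exact ranges to a tag at (2, 3, 1).
        readers = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [0., 0., 10.]])
        true_p = np.array([2., 3., 1.])
        print(locate(readers, np.linalg.norm(readers - true_p, axis=1)))   # close to [2, 3, 1]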

  7. Detection and Purging of Specular Reflective and Transparent Object Influences in 3d Range Measurements

    NASA Astrophysics Data System (ADS)

    Koch, R.; May, S.; Nüchter, A.

    2017-02-01

    3D laser scanners are favoured sensors for mapping in mobile service robotics in indoor and outdoor applications, since they deliver precise measurements over a wide scanning range. The resulting maps are detailed, since they have a high resolution. Based on these maps, robots navigate through rough terrain and fulfil advanced manipulation and inspection tasks. In the case of specular reflective and transparent objects, e.g., mirrors, windows, and shiny metals, the laser measurements get corrupted. Based on the type of object and the incident angle of the incoming laser beam, three results are possible: a measurement point on the object plane, a measurement behind the object plane, or a measurement of a reflected object. It is important to detect such situations to be able to handle these corrupted points. This paper describes why it is difficult to distinguish between specular reflective and transparent surfaces. It presents a 3D-Reflection-Pre-Filter approach to identify specular reflective and transparent objects in point clouds of a multi-echo laser scanner. Furthermore, it filters point clouds from the influences of such objects and extracts the object properties for further investigation. Reflective objects are identified based on an Iterative-Closest-Point algorithm. Object surfaces and points behind surfaces are masked according to their location. Finally, the processed point cloud is forwarded to a mapping module, and the object surface corners and the type of the surface are broadcast. Four experiments demonstrate the usability of the 3D-Reflection-Pre-Filter. The first experiment was made in an empty room containing a mirror, the second in a stairway containing a glass door, the third in an empty room containing two mirrors, and the fourth in an office room containing a mirror. This paper demonstrates that, for single scans, the detection of specular reflective and transparent objects in 3D is possible, and that it is more reliable in 3D than in 2D. Nevertheless, collecting the data of multiple scans and post-filtering them once the object has been bypassed should be pursued. This is why future work concentrates on implementing a post-filter module. In addition, the aim is to improve the discrimination between specular reflective and transparent objects.

  8. Intrinsic and contextual features in object recognition.

    PubMed

    Schlangen, Derrick; Barenholtz, Elan

    2015-01-28

    The context in which an object is found can facilitate its recognition. Yet, it is not known how effective this contextual information is relative to the object's intrinsic visual features, such as color and shape. To address this, we performed four experiments using rendered scenes with novel objects. In each experiment, participants first performed a visual search task, searching for a uniquely shaped target object whose color and location within the scene was experimentally manipulated. We then tested participants' tendency to use their knowledge of the location and color information in an identification task when the objects' images were degraded due to blurring, thus eliminating the shape information. In Experiment 1, we found that, in the absence of any diagnostic intrinsic features, participants identified objects based purely on their locations within the scene. In Experiment 2, we found that participants combined an intrinsic feature, color, with contextual location in order to uniquely specify an object. In Experiment 3, we found that when an object's color and location information were in conflict, participants identified the object using both sources of information equally. Finally, in Experiment 4, we found that participants used whichever source of information-either color or location-was more statistically reliable in order to identify the target object. Overall, these experiments show that the context in which objects are found can play as important a role as intrinsic features in identifying the objects. © 2015 ARVO.

  9. On the possibility of ground-based direct imaging detection of extra-solar planets: the case of TWA-7

    NASA Astrophysics Data System (ADS)

    Neuhäuser, R.; Brandner, W.; Eckart, A.; Guenther, E.; Alves, J.; Ott, T.; Huélamo, N.; Fernández, M.

    2000-02-01

    We show that ground-based direct imaging detection of extra-solar planets is possible with current technology. As an example, we present evidence for a possible planetary companion to the young T Tauri star 1RXSJ104230.3-334014 (=TWA-7), discovered by ROSAT as a member of the nearby TW Hya association. In an HST NICMOS F160W image, an object is detected that is more than 9 mag fainter than TWA-7, located 2.445 ± 0.035'' south-east at a position angle of 142.24 ± 1.34 deg. One year later, using the ESO-NTT with the SHARP speckle camera, we obtained H- and K-band detections of this faint object at a separation of 2.536 ± 0.077'' and a position angle of 139.3 ± 2.1 deg. Given the known proper motion of TWA-7, the two objects may form a common proper motion pair. If the faint object orbits TWA-7, then its apparent magnitudes of H = 16.42 ± 0.11 and K = 16.34 ± 0.15 mag yield absolute magnitudes consistent with a ~10^6.5 yr old, ~3 M_jup mass object according to the non-gray theory by Burrows et al. (1997). At ~55 pc, the angular separation of ~2.5'' corresponds to ~138 AU, clearly within typical disk sizes. However, the position angles and separations are slightly more consistent with a background object than with a companion. Based on observations obtained at the European Southern Observatory, La Silla (ESO Proposals 62.I-0418 and 63.N-0178), and on observations made with the NASA/ESA Hubble Space Telescope, obtained from the data archive at the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under the NASA contract NAS 5-26555.
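
    For reference, converting the quoted apparent magnitudes to absolute magnitudes uses the standard distance modulus at the ~55 pc distance mentioned above; a generic LaTeX statement (notation assumed, not the authors'):

        M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
        \;\approx\; m - 3.7\ \mathrm{mag} \quad (d \approx 55\,\mathrm{pc})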

  10. The HST/ACS Coma Cluster Survey. II. Data Description and Source Catalogs

    NASA Technical Reports Server (NTRS)

    Hammer, Derek; Kleijn, Gijs Verdoes; Hoyos, Carlos; Den Brok, Mark; Balcells, Marc; Ferguson, Henry C.; Goudfrooij, Paul; Carter, David; Guzman, Rafael; Peletier, Reynier F.

    2010-01-01

    The Coma cluster, Abell 1656, was the target of an HST-ACS Treasury program designed for deep imaging in the F475W and F814W passbands. Although our survey was interrupted by the ACS instrument failure in early 2007, the partially completed survey still covers approximately 50% of the core high-density region in Coma. Observations were performed for twenty-five fields with a total coverage area of 274 arcmin², and extend over a wide range of cluster-centric radii (approximately 1.75 Mpc or 1 deg). The majority of the fields are located near the core region of Coma (19/25 pointings), with six additional fields in the south-west region of the cluster. In this paper we present SExtractor source catalogs generated from the processed images, including a detailed description of the methodology used for object detection and photometry, the subtraction of bright galaxies to measure faint underlying objects, and the use of simulations to assess the photometric accuracy and completeness of our catalogs. We also use simulations to perform aperture corrections for the SExtractor Kron magnitudes based only on the measured source flux and its half-light radius. We have performed photometry for 76,000 objects that consist of roughly equal numbers of extended galaxies and unresolved objects. Approximately two-thirds of all detections are brighter than F814W = 26.5 mag (AB), which corresponds to the 10σ point-source detection limit. We estimate that Coma members make up 5-10% of the source detections, including a large population of compact objects (primarily GCs, but also cEs and UCDs), and a wide variety of extended galaxies from cD galaxies to dwarf low surface brightness galaxies. The initial data release for the HST-ACS Coma Treasury program was made available to the public in August 2008. The images and catalogs described in this study relate to our second data release.

  11. Automated Terrestrial EMI Emitter Detection, Classification, and Localization

    NASA Astrophysics Data System (ADS)

    Stottler, R.; Ong, J.; Gioia, C.; Bowman, C.; Bhopale, A.

    Clear operating spectrum at ground station antenna locations is critically important for communicating with, commanding, controlling, and maintaining the health of satellites. Electromagnetic Interference (EMI) can interfere with these communications, so it is extremely important to track down and eliminate sources of EMI. The Terrestrial RFI-locating Automation with CasE based Reasoning (TRACER) system is being implemented to automate terrestrial EMI emitter localization and identification to improve space situational awareness, reduce manpower requirements, dramatically shorten EMI response time, enable the system to evolve without programmer involvement, and support adversarial scenarios such as jamming. The operational version of TRACER is being implemented and applied with real data (power versus frequency over time) for both satellite communication antennas and sweeping Direction Finding (DF) antennas located near them. This paper presents the design and initial implementation of TRACER's investigation data management, automation, and data visualization capabilities. TRACER monitors DF antenna signals and detects and classifies EMI using neural network technology, trained on past cases of both normal communications and EMI events. When EMI events are detected, an Investigation Object is created automatically. The user interface facilitates the management of multiple investigations simultaneously. Using a variant of the Friis transmission equation, emission data are used to estimate and plot the emitter's locations over time for comparison with current flights. The data are also displayed on a set of five linked graphs to aid in the perception of patterns spanning power, time, frequency, and bearing. Based on details of the signal (its classification, direction, strength, etc.), TRACER retrieves one or more cases of EMI investigation methodologies, which are represented as graphical behavior transition networks (BTNs). These BTNs can be edited easily, and they naturally represent the flow-chart-like process often followed by experts in time-pressured situations.
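
    The Friis-based range estimate mentioned above can be illustrated with a generic free-space sketch; the transmit power, antenna gains, and frequency below are hypothetical placeholders, not TRACER parameters.

```python
# Hedged sketch: invert the free-space Friis equation,
#   P_rx = P_tx * G_tx * G_rx * (lambda / (4*pi*d))**2,
# for the range d. All emitter parameters are assumed values.
import math

def friis_range_m(p_rx_w, p_tx_w, g_tx=1.0, g_rx=1.0, freq_hz=2.2e9):
    lam = 3.0e8 / freq_hz                      # wavelength in metres
    return (lam / (4.0 * math.pi)) * math.sqrt(p_tx_w * g_tx * g_rx / p_rx_w)

p_rx = 10 ** ((-90 - 30) / 10.0)               # -90 dBm converted to watts
print(f"estimated range ~ {friis_range_m(p_rx, p_tx_w=1.0):.0f} m")
```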

  12. Geophysical Technologies to Image Old Mine Works

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanaan Hanna; Jim Pfeiffer

    2007-01-15

    Zapata Engineering, Blackhawk Division performed geophysical void detection demonstrations for the US Department of Labor Mine Safety and Health Administration (MSHA). The objective was to advance the current state of practice of geophysical technologies for detecting underground mine voids. The presence of old mine works above, adjacent to, or below an active mine presents major health and safety hazards to miners who inadvertently cut into locations with such features. In addition, the presence of abandoned mines or voids beneath roadways and highway structures may greatly impact the performance of the transportation infrastructure in terms of cost and public safety. Roads constructed over abandoned mines are subject to potential differential settlement, subsidence, sinkholes, and/or catastrophic collapse. Thus, there is a need to utilize geophysical imaging technologies to accurately locate old mine works. Several surface and borehole geophysical imaging methods and mapping techniques were employed at a known abandoned coal mine in eastern Illinois to investigate which method best maps the location and extent of old works. These methods included: 1) high-resolution seismic (HRS) using compressional P-wave (HRPW) and S-wave (HRSW) reflection collected with 3-D techniques; 2) crosshole seismic tomography (XHT); 3) guided waves; 4) reverse vertical seismic profiling (RVSP); and 5) borehole sonar mapping. In addition, several exploration borings were drilled to confirm the presence of the imaged mine voids. The results indicated that RVSP is the most viable method to accurately detect the subsurface voids, with a horizontal accuracy of two to five feet. This method was then applied at several other locations in Colorado with various topographic, geologic, and cultural settings for the same purpose. This paper presents the significant results obtained from the geophysical investigations in Illinois.

  13. Landslide susceptibility mapping using decision-tree based CHi-squared automatic interaction detection (CHAID) and Logistic regression (LR) integration

    NASA Astrophysics Data System (ADS)

    Althuwaynee, Omar F.; Pradhan, Biswajeet; Ahmad, Noordin

    2014-06-01

    This article uses methodology based on chi-squared automatic interaction detection (CHAID), a multivariate method with an automatic classification capacity for analysing large numbers of landslide conditioning factors. This algorithm was developed to overcome the subjectivity of the manual categorization of scale data of landslide conditioning factors, and to produce a rainfall-induced landslide susceptibility map for Kuala Lumpur city and surrounding areas using a geographic information system (GIS). The main objective of this article is to use the CHAID method to perform the best classification fit for each conditioning factor and then combine it with logistic regression (LR). The LR model was used to find the corresponding coefficients of the best-fitting function that assess the optimal terminal nodes. A cluster pattern of landslide locations was extracted in a previous study using the nearest neighbour index (NNI), which was then used to identify the clustered landslide location range. Clustered locations were used as model training data with 14 landslide conditioning factors such as topographically derived parameters, lithology, NDVI, and land use and land cover maps. The Pearson chi-squared value was used to find the best classification fit between the dependent variable and the conditioning factors. Finally, the relationships between conditioning factors were assessed and the landslide susceptibility map (LSM) was produced. The area under the curve (AUC) was used to test the model reliability and prediction capability with the training and validation landslide locations, respectively. This study demonstrates the efficiency and reliability of the decision tree (DT) model in landslide susceptibility mapping and provides a valuable scientific basis for spatial decision making in planning and urban management studies.
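
    The decision-tree-plus-logistic-regression coupling can be sketched roughly as follows; note that scikit-learn has no CHAID implementation, so an ordinary CART tree stands in for it here, and the conditioning factors are synthetic.

```python
# Hedged sketch of the general DT + LR idea (not the authors' pipeline):
# the tree bins the conditioning factors into terminal nodes, and logistic
# regression fits coefficients on those bins; AUC assesses prediction skill.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 14))                       # 14 synthetic conditioning factors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_leaf_nodes=16, random_state=0).fit(X_tr, y_tr)
enc = OneHotEncoder(handle_unknown="ignore")
Z_tr = enc.fit_transform(tree.apply(X_tr).reshape(-1, 1))   # leaf id -> one-hot bins
Z_te = enc.transform(tree.apply(X_te).reshape(-1, 1))

lr = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("AUC:", roc_auc_score(y_te, lr.predict_proba(Z_te)[:, 1]))
```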

  14. Directed area search using socio-biological vision algorithms and cognitive Bayesian reasoning

    NASA Astrophysics Data System (ADS)

    Medasani, S.; Owechko, Y.; Allen, D.; Lu, T. C.; Khosla, D.

    2010-04-01

    Volitional search systems that assist the analyst by searching for specific targets or objects such as vehicles, factories, and airports in wide-area overhead imagery need to overcome multiple problems present in current manual and automatic approaches. These problems include finding targets hidden in terabytes of information, relatively few pixels on targets, long intervals between interesting regions, time-consuming analysis requiring many analysts, no a priori representative examples or templates of interest, detecting multiple classes of objects, and the need for very high detection rates and very low false alarm rates. This paper describes a conceptual analyst-centric framework that utilizes existing technology modules to search for and locate occurrences of targets of interest (e.g., buildings, mobile targets of military significance, factories, nuclear plants, etc.) in video imagery of large areas. Our framework takes simple queries from the analyst and finds the queried targets with minimal interaction from the analyst. It uses a hybrid approach that combines biologically inspired bottom-up attention, socio-biologically inspired object recognition for volitionally recognizing targets, and hierarchical Bayesian networks for modeling and representing the domain knowledge. This approach has the benefits of high accuracy and a low false alarm rate, and it can handle both low-level visual information and high-level domain knowledge in a single framework. Such a system would be of immense help for search and rescue efforts, intelligence gathering, change detection systems, and other surveillance systems.

  15. A companion candidate in the gap of the T Chamaeleontis transitional disk

    NASA Astrophysics Data System (ADS)

    Huélamo, N.; Lacour, S.; Tuthill, P.; Ireland, M.; Kraus, A.; Chauvin, G.

    2011-04-01

    Context. T Cha is a young star surrounded by a cold disk. The presence of a gap within its disk, inferred from fitting to the spectral energy distribution, has suggested on-going planetary formation. Aims: The aim of this work is to look for very low-mass companions within the disk gap of T Cha. Methods: We observed T Cha in L' and Ks with NAOS-CONICA, the adaptive optics system at the VLT, using sparse aperture masking. Results: We detected a source in the L' data at a separation of 62 ± 7 mas, position angle of ~78 ± 1 degrees, and a contrast of ΔL' = 5.1 ± 0.2 mag. The object is not detected in the Ks band data, which show a 3-σ contrast limit of 5.2 mag at the position of the detected L' source. For a distance of 108 pc, the detected companion candidate is located at 6.7 AU from the primary, well within the disk gap. If T Cha and the companion candidate are bound, the comparison of the L' and Ks photometry with evolutionary tracks shows that the photometry is inconsistent with any unextincted photosphere at the age and distance of T Cha. The detected object shows a very red Ks - L' color, for which a possible explanation would be a significant amount of dust around it. This would imply that the companion candidate is young, which would strengthen the case for a physical companion, and moreover that the object would be in the substellar regime, according to the Ks upper limit. Another exciting possibility would be that this companion is a recently formed planet within the disk. Additional observations are mandatory to confirm that the object is bound and to properly characterize it. Based on observations obtained at the European Southern Observatory using the Very Large Telescope in Cerro Paranal, Chile, under program 84.C-0755(A).

  16. Muzzle flash localization for the dismounted soldier

    NASA Astrophysics Data System (ADS)

    Kennedy Scott, Will

    2015-05-01

    The ability to accurately and rapidly know the precise location of enemy fire would be a substantial capability enhancement for the dismounted soldier. Acoustic gunshot detection systems can provide an approximate bearing, but it is desired to precisely know the location (direction and range) of enemy fire; for example, to know from 'which window' the fire is coming. Funded by the UK MOD (via Roke Manor Research), QinetiQ is developing an imaging solution built around an InGaAs camera. This paper presents work that QinetiQ has undertaken on the Muzzle Flash Locator system. Key technical challenges that have been overcome are explained and discussed in this paper. They include: the design of the optical sensor and processing hardware to meet low size, weight, and power requirements; the algorithmic approach required to maintain sensitivity whilst rejecting false alarms from sources such as close-passing insects and sun glint from scene objects; and operation on the move. This work shows that such a sensor can provide sufficient sensitivity to detect muzzle flash events at militarily significant ranges and that such a system can be combined with an acoustic gunshot detection system to minimize the false alarm rate. The muzzle flash sensor developed in this work operates in real time and has a field of view of approximately 29° (horizontal) by 12° (vertical) with a pixel resolution of 0.13°. The work has also demonstrated that extension to a sensor with a realistic angular rotation rate is feasible.

  17. Inversion Method for Early Detection of ARES-1 Case Breach Failure

    NASA Technical Reports Server (NTRS)

    Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim

    2010-01-01

    A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach localization is proposed and analyzed. It is shown how the case breach can be located at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a (side) thrust directed perpendicular to the rocket axis. The side thrust creates a torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.

  18. Implicit Shape Models for Object Detection in 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Velizhev, A.; Shapovalov, R.; Schindler, K.

    2012-07-01

    We present a method for automatic object localization and recognition in 3D point clouds representing outdoor urban scenes. The method is based on the implicit shape model (ISM) framework, which recognizes objects by voting for their center locations. It requires only a few training examples per class, which is an important property for practical use. We also introduce and evaluate an improved version of the spin image descriptor that is more robust to point density variation and to uncertainty in normal direction estimation. Our experiments reveal a significant impact of these modifications on the recognition performance. We compare our results against a state-of-the-art method and obtain significant improvements in both precision and recall on the Ohio dataset, consisting of combined aerial and terrestrial LiDAR scans covering 150,000 m² of urban area in total.
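
    A toy sketch of the centre-voting idea follows (synthetic feature matches and an assumed 0.1 m voting grid; not the authors' implementation): each matched feature casts a vote for the object centre using the offset it learned in training, and the densest cell is taken as a detection.

```python
# Simplified ISM-style centre voting on made-up 2-D data.
import numpy as np

# (feature position, learned offset to the object centre) pairs, in metres
matches = [((2.0, 1.0), (0.5, 0.4)), ((2.6, 1.2), (-0.1, 0.2)),
           ((2.3, 1.8), (0.2, -0.4)), ((7.0, 3.0), (0.1, 0.1))]  # last match = clutter

cell = 0.1                                      # 0.1 m voting cells
grid = np.zeros((100, 100))                     # 10 m x 10 m scene
for (px, py), (dx, dy) in matches:
    cx, cy = px + dx, py + dy                   # centre hypothesis cast by this feature
    grid[round(cy / cell), round(cx / cell)] += 1.0

iy, ix = np.unravel_index(np.argmax(grid), grid.shape)
print(f"strongest centre hypothesis near ({ix * cell:.1f} m, {iy * cell:.1f} m)")
```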

  19. Towards photometry pipeline of the Indonesian space surveillance system

    NASA Astrophysics Data System (ADS)

    Priyatikanto, Rhorom; Religia, Bahar; Rachman, Abdul; Dani, Tiar

    2015-09-01

    Optical observation through a sub-meter telescope equipped with a CCD camera has become an alternative method for increasing orbital debris detection and surveillance. This observational mode is expected to cover medium-sized objects in higher orbits (e.g., MEO, GTO, GSO and GEO), beyond the reach of the usual radar systems. However, such observation of fast-moving objects demands special treatment and analysis techniques. In this study, we performed photometric analysis of satellite track images photographed using the rehabilitated Schmidt Bima Sakti telescope at Bosscha Observatory. The Hough transform was implemented to automatically detect linear streaks in the images. From this analysis and comparison to the USSPACECOM catalog, two satellites were identified and associated with the inactive Thuraya-3 satellite and Satcom-3 debris, which are located in geostationary orbit. Further aperture photometry analysis revealed the periodicity of the tumbling Satcom-3 debris. In the near future, a similar scheme could be applied to establish an analysis pipeline for an optical space surveillance system hosted in Indonesia.
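
    The streak-detection step can be illustrated with scikit-image's straight-line Hough transform on a synthetic frame; the image and threshold below are placeholders, not the Bosscha pipeline.

```python
# Illustrative sketch: detect a linear satellite streak with the Hough transform.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks
from skimage.draw import line

img = np.zeros((256, 256), dtype=float)
rr, cc = line(20, 30, 230, 200)            # synthetic streak standing in for a trail
img[rr, cc] = 1.0

tested_angles = np.linspace(-np.pi / 2, np.pi / 2, 360, endpoint=False)
h, theta, d = hough_line(img > 0.5, theta=tested_angles)
_, angles, dists = hough_line_peaks(h, theta, d, num_peaks=1)
print(f"streak angle ~ {np.degrees(angles[0]):.1f} deg, distance {dists[0]:.1f} px")
```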

  20. Quantitative Assessment of Detection Frequency for the INL Ambient Air Monitoring Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sondrup, A. Jeffrey; Rood, Arthur S.

    A quantitative assessment of the Idaho National Laboratory (INL) air monitoring network was performed using frequency of detection as the performance metric. The INL air monitoring network consists of 37 low-volume air samplers in 31 different locations. Twenty of the samplers are located on INL (onsite) and 17 are located off INL (offsite). Detection frequencies were calculated using both BEA and ESER laboratory minimum detectable activity (MDA) levels. The CALPUFF Lagrangian puff dispersion model, coupled with 1 year of meteorological data, was used to calculate time-integrated concentrations at sampler locations for a 1-hour release of unit activity (1 Ci) for every hour of the year. The unit-activity time-integrated concentration (TICu) values were calculated at all samplers for releases from eight INL facilities. The TICu values were then scaled and integrated for a given release quantity and release duration. All facilities modeled a ground-level release emanating either from the center of the facility or at a point where significant emissions are possible. In addition to ground-level releases, three existing stacks at the Advanced Test Reactor Complex, Idaho Nuclear Technology and Engineering Center, and Material and Fuels Complex were also modeled. Meteorological data from the 35 stations comprising the INL Mesonet network, data from the Idaho Falls Regional airport, upper air data from the Boise airport, and three-dimensional gridded data from the weather research forecasting model were used for modeling. Three representative radionuclides identified as key radionuclides in INL's annual National Emission Standards for Hazardous Air Pollutants evaluations were considered for the frequency of detection analysis: Cs-137 (beta-gamma emitter), Pu-239 (alpha emitter), and Sr-90 (beta emitter). Source-specific release quantities were calculated for each radionuclide, such that the maximum inhalation dose at any publicly accessible sampler or the National Emission Standards for Hazardous Air Pollutants maximum exposed individual location (i.e., Frenchman's Cabin) was no more than 0.1 mrem/yr (i.e., 1% of the 10 mrem/yr standard). Detection frequencies were calculated separately for the onsite and offsite monitoring networks. As expected, detection frequencies were generally lower for the offsite sampling network compared to the onsite network. Overall, the monitoring network is very effective at detecting potential releases of Cs-137 or Sr-90 from all sources/facilities using either the ESER or BEA MDAs. The network was less effective at detecting releases of Pu-239. Maximum detection frequencies for Pu-239 using ESER MDAs ranged from 27.4 to 100% for onsite samplers and 3 to 80% for offsite samplers. Using BEA MDAs, the maximum detection frequencies for Pu-239 ranged from 2.1 to 100% for onsite samplers and 0 to 5.9% for offsite samplers. The only release that was not detected by any of the samplers under any conditions was a release of Pu-239 from the Idaho Nuclear Technology and Engineering Center main stack (CPP-708). The methodology described in this report could be used to improve sampler placement and detection frequency, provided clear performance objectives are defined.

  1. Pedestrian detection from thermal images: A sparse representation based approach

    NASA Astrophysics Data System (ADS)

    Qi, Bin; John, Vijay; Liu, Zheng; Mita, Seiichi

    2016-05-01

    Pedestrian detection, a key technology in computer vision, plays a paramount role in the applications of advanced driver assistance systems (ADASs) and autonomous vehicles. The objective of pedestrian detection is to identify and locate people in a dynamic environment so that accidents can be avoided. With significant variations introduced by illumination, occlusion, articulated pose, and complex backgrounds, pedestrian detection is a challenging task for visual perception. Unlike visible images, thermal images are captured and presented as intensity maps based on objects' emissivity, and thus have an enhanced spectral range that makes human beings perceptible against the cool background. In this study, a sparse representation based approach is proposed for pedestrian detection from thermal images. We first adopt the histogram of sparse codes to represent image features and then detect pedestrians with the extracted features in an unimodal and a multimodal framework, respectively. In the unimodal framework, two types of dictionaries, i.e. a joint dictionary and an individual dictionary, are built by learning from prepared training samples. In the multimodal framework, a weighted fusion scheme is proposed to further highlight the contributions from features with higher separability. To validate the proposed approach, experiments were conducted to compare against three widely used features: Haar wavelets (HWs), histogram of oriented gradients (HOG), and histogram of phase congruency (HPC), as well as two classification methods, i.e. AdaBoost and support vector machine (SVM). Experimental results on a publicly available data set demonstrate the superiority of the proposed approach.
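
    A minimal sketch of the histogram-of-sparse-codes idea follows (not the authors' implementation; the dictionary size, patch size, and sparsity level are assumed), using scikit-learn's dictionary learning.

```python
# Hedged sketch: learn a patch dictionary, sparse-code the patches of a
# detection window, and pool the absolute coefficients into a feature histogram.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
train_img = rng.random((64, 32))                       # stand-in for a thermal image
patches = extract_patches_2d(train_img, (8, 8), max_patches=200, random_state=0)
P = patches.reshape(len(patches), -1)
P = P - P.mean(axis=1, keepdims=True)

dico = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                          transform_algorithm="omp",
                          transform_n_nonzero_coefs=5,
                          random_state=0).fit(P)

def hsc_feature(window):
    """Histogram-of-sparse-codes descriptor for one detection window."""
    p = extract_patches_2d(window, (8, 8)).reshape(-1, 64)
    p = p - p.mean(axis=1, keepdims=True)
    codes = dico.transform(p)                          # sparse coefficients per patch
    return np.abs(codes).sum(axis=0) / (np.abs(codes).sum() + 1e-12)

print(hsc_feature(rng.random((32, 16))).shape)         # (32,) feature vector
```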

  2. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  3. Mapping gray-scale image to 3D surface scanning data by ray tracing

    NASA Astrophysics Data System (ADS)

    Li, Peng; Jones, Peter R. M.

    1997-03-01

    The extraction and location of feature points from range imaging is an important but difficult task in machine-vision-based measurement systems. Some feature points cannot be detected from purely geometric characteristics, particularly in measurement tasks related to the human body. The Loughborough Anthropometric Shadow Scanner (LASS) is a whole-body surface scanner based on the structured light technique. Certain applications of LASS require accurate location of anthropometric landmarks from the scanned data. This is sometimes impossible from the existing raw data because some landmarks do not appear in the scanned data; identifying them requires the surface texture of the scanned object. Modifications to LASS were made to allow gray-scale images to be captured before or after the object was scanned. The two-dimensional gray-scale image must be mapped to the scanned data to acquire the 3D coordinates of a landmark. The method for mapping 2D images to the scanned data is based on the collinearity conditions and a ray-tracing method. If the camera center and image coordinates are known, the corresponding object point must lie on a ray starting from the camera center and passing through the image coordinate. By intersecting the ray with the scanned surface of the object, the 3D coordinates of a point can be solved. Experimentation has demonstrated the feasibility of the method.
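
    The ray-tracing step described above reduces to intersecting a camera ray with the scanned surface; a minimal sketch using the Moeller-Trumbore ray-triangle test is given below, where a single triangle stands in for one facet of a surface mesh.

```python
# Hedged sketch: intersect a ray from the camera centre with a surface triangle.
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Return the 3-D intersection point, or None if the ray misses the triangle."""
    origin, direction = np.asarray(origin, float), np.asarray(direction, float)
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                       # ray parallel to the triangle
    t_vec = origin - v0
    u = (t_vec @ p) / det
    q = np.cross(t_vec, e1)
    v = (direction @ q) / det
    t = (e2 @ q) / det
    if u < 0 or v < 0 or u + v > 1 or t < 0:
        return None
    return origin + t * direction

# Ray from a camera at the origin through the viewing direction (0, 0, 1),
# hitting a triangle lying in the z = 2 plane.
print(ray_triangle_intersect([0, 0, 0], [0, 0, 1],
                             [-1, -1, 2], [2, -1, 2], [-1, 2, 2]))
```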

  4. Seeing with sound? exploring different characteristics of a visual-to-auditory sensory substitution device.

    PubMed

    Brown, David; Macpherson, Tom; Ward, Jamie

    2011-01-01

    Sensory substitution devices convert live visual images into auditory signals, for example with a web camera (to record the images), a computer (to perform the conversion) and headphones (to listen to the sounds). In a series of three experiments, the performance of one such device ('The vOICe') was assessed under various conditions on blindfolded sighted participants. The main task that we used involved identifying and locating objects placed on a table by holding a webcam (like a flashlight) or wearing it on the head (like a miner's light). Identifying objects on a table was easier with a hand-held device, but locating the objects was easier with a head-mounted device. Brightness converted into loudness was less effective than the reverse contrast (dark being loud), suggesting that performance under these conditions (natural indoor lighting, novice users) is related more to the properties of the auditory signal (i.e., the amount of noise in it) than to the cross-modal association between loudness and brightness. Individual differences in musical memory (detecting pitch changes in two sequences of notes) were related to the time taken to identify or recognise objects, but individual differences in self-reported vividness of visual imagery did not reliably predict performance across the experiments. In general, the results suggest that the auditory characteristics of the device may be more important for initial learning than visual associations.

  5. Spatiotemporal distribution of location and object effects in reach-to-grasp kinematics

    PubMed Central

    Rouse, Adam G.

    2015-01-01

    In reaching to grasp an object, the arm transports the hand to the intended location as the hand shapes to grasp the object. Prior studies that tracked arm endpoint and grip aperture have shown that reaching and grasping, while proceeding in parallel, are interdependent to some degree. Other studies of reaching and grasping that have examined the joint angles of all five digits as the hand shapes to grasp various objects have not tracked the joint angles of the arm as well. We, therefore, examined 22 joint angles from the shoulder to the five digits as monkeys reached, grasped, and manipulated in a task that dissociated location and object. We quantified the extent to which each angle varied depending on location, on object, and on their interaction, all as a function of time. Although joint angles varied depending on both location and object beginning early in the movement, an early phase of location effects in joint angles from the shoulder to the digits was followed by a later phase in which object effects predominated at all joint angles distal to the shoulder. Interaction effects were relatively small throughout the reach-to-grasp. Whereas reach trajectory was influenced substantially by the object, grasp shape was comparatively invariant to location. Our observations suggest that neural control of reach-to-grasp may occur largely in two sequential phases: the first determining the location to which the arm transports the hand, and the second shaping the entire upper extremity to grasp and manipulate the object. PMID:26445870

  6. Imaging, object detection, and change detection with a polarized multistatic GPR array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beer, N. Reginald; Paglieroni, David W.

    A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate on one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.

  7. Probabilistic multi-person localisation and tracking in image sequences

    NASA Astrophysics Data System (ADS)

    Klinger, T.; Rottensteiner, F.; Heipke, C.

    2017-05-01

    The localisation and tracking of persons in image sequences is commonly guided by recursive filters. Especially in a multi-object tracking environment, where mutual occlusions are inherent, the predictive model is prone to drift away from the actual target position when context is not taken into account. Further, if the image-based observations are imprecise, the trajectory is prone to be updated towards a wrong position. In this work we address both of these problems by using a new predictive model on the basis of Gaussian Process Regression, and by using generic object detection, as well as instance-specific classification, for refined localisation. The predictive model takes into account the motion of every tracked pedestrian in the scene and the prediction is executed with respect to the velocities of neighbouring persons. In contrast to existing methods, our approach uses a Dynamic Bayesian Network in which the state vector of a recursive Bayes filter, as well as the location of the tracked object in the image, are modelled as unknowns. This allows the detection to be corrected before it is incorporated into the recursive filter. Our method is evaluated on a publicly available benchmark dataset and outperforms related methods in terms of geometric precision and tracking accuracy.
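
    As a hedged illustration of using Gaussian Process Regression as a motion model, the sketch below fits a GP to a 1-D toy trajectory and predicts the next position with an uncertainty estimate; the kernel choice and data are assumptions, not the paper's setup.

```python
# Sketch only: GP regression as a predictive model with uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(10, dtype=float).reshape(-1, 1)                         # past time stamps
x = 0.8 * t.ravel() + np.random.default_rng(0).normal(0, 0.05, 10)    # observed positions

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(0.01),
                               normalize_y=True).fit(t, x)
mean, std = gpr.predict([[10.0]], return_std=True)                    # one step ahead
print(f"predicted x(t=10) = {mean[0]:.2f} +/- {std[0]:.2f}")
```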

  8. Detection Limit of Smectite by Chemin IV Laboratory Instrument: Preliminary Implications for Chemin on the Mars Science Laboratory Mission

    NASA Technical Reports Server (NTRS)

    Archilles, Cherie; Ming, D. W.; Morris, R. V.; Blake, D. F.

    2011-01-01

    The CheMin instrument on the Mars Science Laboratory (MSL) is a miniature X-ray diffraction (XRD) and X-ray fluorescence (XRF) instrument capable of determining the mineralogical and elemental compositions of rocks, outcrops and soils on the surface of Mars. CheMin uses a microfocus-source Co X-ray tube, a transmission sample cell, and an energy-discriminating X-ray sensitive CCD to produce simultaneous 2-D XRD patterns and energy-dispersive X-ray histograms from powdered samples. CRISM and OMEGA have identified the presence of phyllosilicates at several locations on Mars, including the four candidate MSL landing sites. The objective of this study was to conduct preliminary work to determine the CheMin detection limit of smectite in a smectite/olivine mixed mineral system.

  9. Detection of molecular gas in the quasar BR1202 - 0725 at redshift z = 4.69.

    PubMed

    Ohta, K; Yamada, T; Nakanishi, K; Kohno, K; Akiyama, M; Kawabe, R

    1996-08-01

    Although great efforts have been made to locate molecular gas--the material out of which stars form--in the early Universe, there have been only two firm detections at high redshift. Both are gravitationally lensed objects at redshift z ≈ 2.5 (refs 9-14). Here we report the detection of CO emission from the radio-quiet quasar BR1202 - 0725, which is at redshift z = 4.69. From the observed CO luminosity, we estimate that almost 10^11 solar masses of molecular hydrogen are associated with the quasar; this is comparable to the stellar mass of a present-day luminous galaxy. Our results suggest that BR1202 - 0725 is a massive galaxy, in which the gas is largely concentrated in the central region, and that it is currently undergoing a large burst of star formation.

  10. Changes of EEG Spectra and Functional Connectivity during an Object-Location Memory Task in Alzheimer's Disease.

    PubMed

    Han, Yuliang; Wang, Kai; Jia, Jianjun; Wu, Weiping

    2017-01-01

    Object-location memory is particularly fragile and specifically impaired in Alzheimer's disease (AD) patients. Electroencephalography (EEG) was utilized to objectively measure memory impairment through the oscillatory correlates of memory formation. We aimed to construct an object-location memory paradigm and explore EEG signatures of it. Two groups, of 20 probable mild AD patients and 19 healthy older adults, were included in a cross-sectional analysis. All subjects performed an object-location memory task. EEG recordings made during the object-location memory task were compared between the two groups on two EEG parameters (spectral power and phase synchronization). The memory performance of AD patients was worse than that of the healthy elderly adults. The spectral power during object-location memory in the AD group was significantly higher than in the NC group (healthy elderly adults) in the alpha band in the encoding session, and in the alpha and theta bands in the retrieval session. The channel-pair phase lag index values during object-location memory in the AD group were clearly higher than in the NC group in the delta, theta, and alpha bands in encoding sessions and in the delta and theta bands in retrieval sessions. The results provide support for the hypothesis that AD patients may use compensation mechanisms to remember the items and the episode.
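
    The phase lag index used as the connectivity measure above can be computed from the instantaneous phases of two channels; a minimal sketch on synthetic signals (assuming SciPy's Hilbert transform is available) follows.

```python
# Illustrative sketch: phase lag index (PLI) between two signals.
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(sig_a, sig_b):
    """PLI = | mean( sign( sin(phase_a - phase_b) ) ) |, bounded in [0, 1]."""
    dphi = np.angle(hilbert(sig_a)) - np.angle(hilbert(sig_b))
    return float(np.abs(np.mean(np.sign(np.sin(dphi)))))

fs = 250
t = np.arange(0, 4, 1 / fs)                    # 4 s of data at 250 Hz
a = np.sin(2 * np.pi * 10 * t)                 # 10 Hz (alpha band) signal
b = np.sin(2 * np.pi * 10 * t - 0.4)           # consistently lagging copy
print(f"PLI = {phase_lag_index(a, b):.2f}")    # close to 1.0 for a consistent lag
```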

  11. Differential Group-Velocity Detection of Fluid Paths Leland Timothy Long

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Leland Timothy

    2003-06-01

    The objective of differential surface-wave interpretation is to identify and locate temporal perturbations in the shear-wave velocity. Perturbations in phase velocity are created when the stress and/or fluid content of soils changes, such as in pumping to remove or flush out contaminants. Differential surface wave analysis is a potential method to track the movement of fluids during remediation programs. This proposal is to develop and test this new technology to aid in the selection and design of remediation options in shallow aquifers.

  12. Acoustic Inverse Scattering for Breast Cancer Microcalcification Detection. Addendum

    DTIC Science & Technology

    2011-12-01

    (Only fragmentary abstract text is available for this addendum.) Reconstructions are shown for objects placed away from the center; to conserve space, few are shown. Graphs compare the reconstruction error as a function of the object's position along the x-axis, the y-axis, and the diagonal in the fourth quadrant. The well-known Kirchhoff-Poisson formulas (see, e.g., Refs. [33,34]) allow the solution p(x,t) to be represented in terms of the spherical means.

  13. Ammonia Leak Locator Study

    NASA Technical Reports Server (NTRS)

    Dodge, Franklin T.; Wuest, Martin P.; Deffenbaugh, Danny M.

    1995-01-01

    The thermal control system of International Space Station Alpha will use liquid ammonia as the heat exchange fluid. It is expected that small leaks (of the order perhaps of one pound of ammonia per day) may develop in the lines transporting the ammonia to the various facilities as well as in the heat exchange equipment. Such leaks must be detected and located before the supply of ammonia becomes critically low. For that reason, NASA-JSC has a program underway to evaluate instruments that can detect and locate ultra-small concentrations of ammonia in a high vacuum environment. To be useful, the instrument must be portable and small enough that an astronaut can easily handle it during extravehicular activity. An additional complication in the design of the instrument is that the environment immediately surrounding ISSA will contain small concentrations of many other gases from venting of onboard experiments as well as from other kinds of leaks. These other vapors include water, cabin air, CO2, CO, argon, N2, and ethylene glycol. Altogether, this local environment might have a pressure of the order of 10^-7 to 10^-6 torr. Southwest Research Institute (SwRI) was contracted by NASA-JSC to provide support to NASA-JSC and its prime contractors in evaluating ammonia-location instruments and to make a preliminary trade study of the advantages and limitations of potential instruments. The present effort builds upon an earlier SwRI study to evaluate ammonia leak detection instruments [Jolly and Deffenbaugh]. The objectives of the present effort include: (1) Estimate the characteristics of representative ammonia leaks; (2) Evaluate the baseline instrument in the light of the estimated ammonia leak characteristics; (3) Propose alternative instrument concepts; and (4) Conduct a trade study of the proposed alternative concepts and recommend promising instruments. The baseline leak-location instrument selected by NASA-JSC was an ion gauge.

  14. Use of Spatial Epidemiology and Hot Spot Analysis to Target Women Eligible for Prenatal Women, Infants, and Children Services

    PubMed Central

    Krawczyk, Christopher; Gradziel, Pat; Geraghty, Estella M.

    2014-01-01

    Objectives. We used a geographic information system and cluster analyses to determine locations in need of enhanced Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) Program services. Methods. We linked documented births in the 2010 California Birth Statistical Master File with the 2010 data from the WIC Integrated Statewide Information System. Analyses focused on the density of pregnant women who were eligible for but not receiving WIC services in California’s 7049 census tracts. We used incremental spatial autocorrelation and hot spot analyses to identify clusters of WIC-eligible nonparticipants. Results. We detected clusters of census tracts with higher-than-expected densities, compared with the state mean density of WIC-eligible nonparticipants, in 21 of 58 (36.2%) California counties (P < .05). In subsequent county-level analyses, we located neighborhood-level clusters of higher-than-expected densities of eligible nonparticipants in Sacramento, San Francisco, Fresno, and Los Angeles Counties (P < .05). Conclusions. Hot spot analyses provided a rigorous and objective approach to determine the locations of statistically significant clusters of WIC-eligible nonparticipants. Results helped inform WIC program and funding decisions, including the opening of new WIC centers, and offered a novel approach for targeting public health services. PMID:24354821

  15. A combined joint diagonalization-MUSIC algorithm for subsurface targets localization

    NASA Astrophysics Data System (ADS)

    Wang, Yinlin; Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2014-06-01

    This paper presents a combined joint diagonalization (JD) and multiple signal classification (MUSIC) algorithm for estimating subsurface objects' locations from electromagnetic induction (EMI) sensor data, without solving ill-posed inverse-scattering problems. JD is a numerical technique that finds the common eigenvectors that diagonalize a set of multistatic response (MSR) matrices measured by a time-domain EMI sensor. Eigenvalues from targets of interest (TOI) can then be distinguished automatically from noise-related eigenvalues. Filtering is also carried out in JD to improve the signal-to-noise ratio (SNR) of the data. The MUSIC algorithm utilizes the orthogonality between the signal and noise subspaces of the MSR matrix, which can be separated with information provided by JD. An array of theoretically calculated Green's functions is then projected onto the noise subspace, and the location of the target is estimated from the minimum of the projection, owing to the orthogonality. This combined method is applied to data from the Time-Domain Electromagnetic Multisensor Towed Array Detection System (TEMTADS). Examples of TEMTADS test stand data and field data collected at Spencer Range, Tennessee are analyzed and presented. Results indicate that, due to its non-iterative mechanism, the method can be executed fast enough to provide real-time estimation of objects' locations in the field.
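
    The MUSIC step can be sketched generically: after an eigendecomposition separates signal and noise subspaces, candidate steering vectors (standing in for the Green's functions) are projected onto the noise subspace and the position minimizing the projection is taken as the estimate. The 1-D array response below is a made-up stand-in, not the TEMTADS model.

```python
# Hedged MUSIC sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n_rx = 8                                         # number of receivers
grid = np.linspace(0.0, 5.0, 501)                # candidate target positions

def steering(pos):                               # stand-in for the Green's function
    return np.exp(-1j * np.arange(n_rx) * pos) / np.sqrt(n_rx)

# Hermitian multistatic response matrix: one target plus small symmetric noise
true_pos = 2.3
noise = rng.normal(scale=0.01, size=(n_rx, n_rx))
R = np.outer(steering(true_pos), steering(true_pos).conj()) + (noise + noise.T) / 2

eigvals, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
noise_subspace = eigvecs[:, :-1]                 # drop the dominant (signal) eigenvector
spectrum = [1.0 / np.linalg.norm(noise_subspace.conj().T @ steering(p)) for p in grid]
print(f"estimated position ~ {grid[int(np.argmax(spectrum))]:.2f}")
```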

  16. Geophysical investigation, Salmon Site, Lamar County, Mississippi

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    Geophysical surveys were conducted in 1992 and 1993 on 21 sites at the Salmon Site (SS) located in Lamar County, Mississippi. The studies are part of the Remedial Investigation/Feasibility Study (RI/FS) being conducted by IT Corporation for the U.S. Department of Energy (DOE). During the 1960s, two nuclear devices and two chemical tests were detonated 826 meters (m) (2,710 feet [ft]) below the ground surface in the salt dome underlying the SS. These tests were part of the Vela Uniform Program conducted to improve the United States' capability to detect, identify, and locate underground nuclear detonations. The RI/FS is being conducted to determine if any contamination is migrating from the underground shot cavity in the salt dome and if there is any residual contamination in the near-surface mud and debris disposal pits used during the testing activities. The objective of the surface geophysical surveys was to locate buried debris, disposal pits, and abandoned mud pits that may be present at the site. This information will then be used to identify the locations for test pits, cone penetrometer tests, and drill hole/monitor well installation. The disposal pits were used during the operation of the test site in the 1960s. Vertical magnetic gradient (magnetic gradient), electromagnetic (EM) conductivity, and ground-penetrating radar (GPR) surveys were used to accomplish these objectives. A description of the equipment used and a theoretical discussion of the geophysical methods are presented in Appendix A. Because of the large number of figures relative to the number of pages of text, the geophysical grid-location maps, the contour maps of the magnetic-gradient data, the contour maps of the EM conductivity data, and the GPR traverse location maps are located in Appendix B, Tabs 1 through 22. In addition, selected GPR records are located in Appendix C.

  17. Multirobot autonomous landmine detection using distributed multisensor information aggregation

    NASA Astrophysics Data System (ADS)

    Jumadinova, Janyl; Dasgupta, Prithviraj

    2012-06-01

    We consider the problem of distributed sensor information fusion by multiple autonomous robots within the context of landmine detection. We assume that different landmines can be composed of different types of material and robots are equipped with different types of sensors, while each robot has only one type of landmine detection sensor on it. We introduce a novel technique that uses a market-based information aggregation mechanism called a prediction market. Each robot is provided with a software agent that uses sensory input of the robot and performs calculations of the prediction market technique. The result of the agent's calculations is a 'belief' representing the confidence of the agent in identifying the object as a landmine. The beliefs from different robots are aggregated by the market mechanism and passed on to a decision maker agent. The decision maker agent uses this aggregate belief information about a potential landmine and makes decisions about which other robots should be deployed to its location, so that the landmine can be confirmed rapidly and accurately. Our experimental results show that, for identical data distributions and settings, using our prediction market-based information aggregation technique increases the accuracy of object classification favorably as compared to two other commonly used techniques.
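
    As a simplified stand-in for the market-based aggregation (not the authors' mechanism), the sketch below pools the robots' beliefs in log-odds space, weighted by assumed sensor reliabilities, and thresholds the aggregate to decide whether to deploy further robots.

```python
# Hedged sketch: weighted logarithmic opinion pool of per-robot beliefs.
import math

def pool_beliefs(beliefs, weights):
    """Pool probabilities in (0, 1) via a weighted average in log-odds space."""
    log_odds = sum(w * math.log(b / (1.0 - b)) for b, w in zip(beliefs, weights))
    log_odds /= sum(weights)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three robots with different sensor types and assumed (hypothetical) reliabilities
beliefs = [0.7, 0.85, 0.4]
weights = [1.0, 1.5, 0.5]
aggregate = pool_beliefs(beliefs, weights)
print(f"aggregate belief = {aggregate:.2f}; deploy more robots: {aggregate > 0.6}")
```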

  18. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
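
    The max/min/mean combination of classifier posteriors can be sketched with scikit-learn stand-ins for the paper's MLP and SVC on synthetic data; the attribute extraction and RDA ranking steps are omitted here.

```python
# Hedged sketch: combine two classifiers' posterior probabilities with logical rules.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
svc = SVC(probability=True, random_state=0).fit(X_tr, y_tr)

p_mlp = mlp.predict_proba(X_te)[:, 1]
p_svc = svc.predict_proba(X_te)[:, 1]
for name, p in [("min", np.minimum(p_mlp, p_svc)),
                ("max", np.maximum(p_mlp, p_svc)),
                ("mean", 0.5 * (p_mlp + p_svc))]:
    print(name, accuracy_score(y_te, p > 0.5))
```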

  19. Smart security system for Indian rail wagons using IOT

    NASA Astrophysics Data System (ADS)

    Bhanuteja, S.; Shilpi, S.; Pragna, K.; Arun, M.

    2017-11-01

    The objective of this project is to create a security system for the goods that are carried in open-top freight trains. The most efficient way to secure anything from thieves is to have continuous observation, so a camera module is used for continuous observation of the open-top freight train. A passive infrared (PIR) sensor is used to detect motion, i.e., to sense movement of people, animals, or any object. Whenever motion is detected by the PIR sensor, the camera takes a picture of that particular instant. That picture is sent to the Raspberry Pi, which runs a skin-detection algorithm and determines whether the motion was created by a human. If it was, the picture is sent to the drop box, where any official can review it. The existing system has CCTV installed at various critical locations such as bridges and railway stations, but these do not provide continuous observation. This paper describes a security system that provides continuous observation for open-top freight trains so that goods can be carried safely to their destination.

  20. Joint Optimization of Fluence Field Modulation and Regularization in Task-Driven Computed Tomography.

    PubMed

    Gang, G J; Siewerdsen, J H; Stayman, J W

    2017-02-11

    This work presents a task-driven joint optimization of fluence field modulation (FFM) and regularization in quadratic penalized-likelihood (PL) reconstruction. Conventional FFM strategies proposed for filtered-backprojection (FBP) are evaluated in the context of PL reconstruction for comparison. We present a task-driven framework that leverages prior knowledge of the patient anatomy and imaging task to identify FFM and regularization. We adopted a maxi-min objective that ensures a minimum level of detectability index (d') across sample locations in the image volume. The FFM designs were parameterized by 2D Gaussian basis functions to reduce the dimensionality of the optimization, and basis function coefficients were estimated using the covariance matrix adaptation evolutionary strategy (CMA-ES) algorithm. The FFM was jointly optimized with both space-invariant and spatially-varying regularization strength (β): the former via an exhaustive search through discrete values and the latter using an alternating optimization where β was exhaustively optimized locally and interpolated to form a spatially-varying map. The optimal FFM inverts as β increases, demonstrating the importance of a joint optimization. For the task and object investigated, the optimal FFM assigns more fluence through less attenuating views, counter to conventional FFM schemes proposed for FBP. The maxi-min objective homogenizes detectability throughout the image and achieves a higher minimum detectability than conventional FFM strategies. The task-driven FFM designs found in this work are counter to conventional patterns for FBP and yield better performance in terms of the maxi-min objective, suggesting opportunities for improved image quality and/or dose reduction when model-based reconstructions are applied in conjunction with FFM.
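
    A toy version of the maxi-min objective is sketched below; the detectability model is purely hypothetical and SciPy's differential evolution stands in for CMA-ES, but the structure (Gaussian-basis fluence coefficients chosen to maximize the minimum d' over sample locations) mirrors the description above.

```python
# Hedged sketch: maximize the worst-case detectability under a fixed "dose" budget.
import numpy as np
from scipy.optimize import differential_evolution

locations = np.linspace(0.0, 1.0, 9)        # sample locations in the image volume
centers = np.linspace(0.0, 1.0, 4)          # centres of the Gaussian FFM basis functions

def dprime(coeffs):
    """Hypothetical detectability per location under a fixed total fluence budget."""
    c = 4.0 * np.asarray(coeffs) / (np.sum(coeffs) + 1e-12)
    fluence = sum(ci * np.exp(-((locations - m) ** 2) / 0.02)
                  for ci, m in zip(c, centers))
    return np.sqrt(fluence / (1.0 + fluence))

def neg_min_dprime(coeffs):
    return -dprime(coeffs).min()             # maxi-min: maximize the minimum d'

res = differential_evolution(neg_min_dprime, bounds=[(0.01, 5.0)] * len(centers),
                             seed=0, maxiter=60)
print("basis coefficients:", np.round(res.x, 2), " min d' =", round(-res.fun, 3))
```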

  1. [Diagnostic management in paediatric blunt abdominal trauma - a systematic review with metaanalysis].

    PubMed

    Schöneberg, C; Tampier, S; Hussmann, B; Lendemans, S; Waydhas, C

    2014-12-01

    The objective of this systematic review was to investigate the diagnostic management of paediatric blunt abdominal injuries. A literature search was performed using the following sources: MEDLINE, Embase and Cochrane. Where possible, a meta-analysis was performed. Furthermore, the level of evidence was assigned for all publications. Indicators for intraabdominal injury (IAI) were elevated liver transaminases, abnormal abdominal examinations, low systolic blood pressure, reduced haematocrit and microhematuria. Detecting IAI with focused assessment with sonography for trauma (FAST) had an overall sensitivity of 56.5%, a specificity of 94.68%, a positive likelihood ratio of 10.63 and a negative likelihood ratio of 0.46. The accuracy was 84.02%. Among haemodynamically unstable children, the sensitivity and specificity were 100%. The overall prevalence of IAI with negative CT was 0.19%. The NPV of abdominal CT for diagnosing IAI was 99.8%. The laparotomy rate in patients with isolated intraperitoneal fluid (IIF) in one location was 3.48% and 56.52% in patients with IIF in more than one location. FAST as an isolated tool in the diagnostics after blunt abdominal injury is very uncertain because of its modest sensitivity. Discharging children after blunt abdominal trauma with a negative abdominal CT scan appears to be safe. When IIF is detected on a CT scan, the implication depends on the number of locations involved: if IIF is found in only one location, IAI is uncommon, while IIF in two or more locations results in a high laparotomy rate.
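
    For reference, the quoted likelihood ratios follow directly from the pooled sensitivity and specificity of FAST:

```python
# Illustrative check only, using the pooled values quoted above.
sens, spec = 0.565, 0.9468
lr_pos = sens / (1.0 - spec)           # positive likelihood ratio
lr_neg = (1.0 - sens) / spec           # negative likelihood ratio
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # ~10.6 and ~0.46
```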

  2. Object recognition contributions to figure-ground organization: operations on outlines and subjective contours.

    PubMed

    Peterson, M A; Gibson, B S

    1994-11-01

    In previous research, replicated here, we found that some object recognition processes influence figure-ground organization. We have proposed that these object recognition processes operate on edges (or contours) detected early in visual processing, rather than on regions. Consistent with this proposal, influences from object recognition on figure-ground organization were previously observed in both pictures and stereograms depicting regions of different luminance, but not in random-dot stereograms, where edges arise late in processing (Peterson & Gibson, 1993). In the present experiments, we examined whether or not two other types of contours--outlines and subjective contours--enable object recognition influences on figure-ground organization. For both types of contours we observed a pattern of effects similar to that originally obtained with luminance edges. The results of these experiments are valuable for distinguishing between alternative views of the mechanisms mediating object recognition influences on figure-ground organization. In addition, in both Experiments 1 and 2, fixated regions were seen as figure longer than nonfixated regions, suggesting that fixation location must be included among the variables relevant to figure-ground organization.

  3. The timecourse of space- and object-based attentional prioritization with varying degrees of certainty

    PubMed Central

    Drummond, Leslie; Shomstein, Sarah

    2013-01-01

    The relative contributions of objects (i.e., object-based representations) and underlying space (i.e., space-based representations) to attentional prioritization and selection remain unclear. In most experimental circumstances the two representations overlap, and thus their respective contributions cannot be evaluated. Here, a dynamic version of the two-rectangle paradigm allowed for a successful de-coupling of spatial and object representations. Space-based (cued spatial location), cued end of the object, and object-based (locations within the cued object) effects were sampled at several timepoints following the cue, with high or low certainty as to target location. In the high-uncertainty condition, spatial benefits prevailed throughout most of the timecourse, as evidenced by facilitatory and inhibitory effects. Additionally, the cued end of the object, rather than the whole object, received the attentional benefit. When target location was predictable (low-uncertainty manipulation), only probabilities guided selection (i.e., as evidenced by a benefit for the statistically biased location). These results suggest that with high spatial uncertainty, all available information present within the stimulus display is used for the purposes of attentional selection (e.g., spatial locations, cued end of the object), albeit to varying degrees and at different time points. However, as certainty increases, only spatial certainty guides selection (i.e., object ends and whole objects are filtered out). Taken together, these results further elucidate the contributing roles of space- and object-based representations in attentional guidance. PMID:24367302

  4. Detection of small human cerebral cortical lesions with MRI under different levels of Gaussian smoothing: applications in epilepsy

    NASA Astrophysics Data System (ADS)

    Cantor-Rivera, Diego; Goubran, Maged; Kraguljac, Alan; Bartha, Robert; Peters, Terry

    2010-03-01

    The main objective of this study was to assess the effect of smoothing filter selection in Voxel-Based Morphometry studies on structural T1-weighted magnetic resonance images. Gaussian filters of 4 mm, 8 mm or 10 mm Full Width at Half Maximum are commonly used, based on the assumption that the filter size should be at least twice the voxel size to obtain robust statistical results. The hypothesis of the presented work was that the selection of the smoothing filter influences the detectability of small lesions in the brain. Mesial temporal sclerosis associated with epilepsy was used as the case to demonstrate this effect. Twenty T1-weighted MRIs from the BrainWeb database were selected. A small phantom lesion was placed in the amygdala, hippocampus, or parahippocampal gyrus of ten of the images. Subsequently the images were registered to the ICBM/MNI space. After grey matter segmentation, a T-test was carried out to compare each image containing a phantom lesion with the rest of the images in the set. For each lesion the T-test was repeated with different Gaussian filter sizes. Voxel-Based Morphometry detected some of the phantom lesions. Of the three parameters considered (location, size, and intensity), location was shown to be the dominant factor for the detection of the lesions.
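
    The FWHM values discussed above relate to the sigma of a Gaussian filter by FWHM = 2*sqrt(2*ln 2)*sigma; a short sketch (assuming 1 mm isotropic voxels and SciPy) follows.

```python
# Illustrative sketch: FWHM-to-sigma conversion and Gaussian smoothing of a volume.
import numpy as np
from scipy.ndimage import gaussian_filter

def fwhm_to_sigma(fwhm_mm, voxel_mm=1.0):
    """sigma = FWHM / (2 * sqrt(2 * ln 2)), expressed in voxels."""
    return fwhm_mm / (voxel_mm * 2.0 * np.sqrt(2.0 * np.log(2.0)))

volume = np.zeros((64, 64, 64)); volume[32, 32, 32] = 1.0   # point "lesion"
for fwhm in (4.0, 8.0, 10.0):
    smoothed = gaussian_filter(volume, sigma=fwhm_to_sigma(fwhm))
    print(f"FWHM {fwhm:>4} mm -> sigma {fwhm_to_sigma(fwhm):.2f} vox, "
          f"peak {smoothed.max():.4f}")
```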

  5. Advances in the use of odour as forensic evidence through optimizing and standardizing instruments and canines

    PubMed Central

    Furton, Kenneth G.; Caraballo, Norma Iris; Cerreta, Michelle M.; Holness, Howard K.

    2015-01-01

    This paper explores the advances made in identifying trace amounts of volatile organic compounds (VOCs) that originate from forensic specimens, such as drugs, explosives, live human scent and the scent of death, as well as the probative value for detecting such odours. The ability to locate and identify the VOCs liberated from or left by forensic substances is of increasing importance to criminal investigations as it can indicate the presence of contraband and/or associate an individual to a particular location or object. Although instruments have improved significantly in recent decades—with sensitivities now rivalling that of biological detectors—it is widely recognized that canines are generally still more superior for the detection of odourants due to their speed, versatility, ruggedness and discriminating power. Through advancements in the detection of VOCs, as well as increased standardization efforts for instruments and canines, the reliability of odour as evidence has continuously improved and is likely to continue to do so. Moreover, several legal cases in which this novel form of evidence has been accepted into US courts of law are discussed. As the development and implementation of best practice guidelines for canines and instruments increase, their reliability in detecting VOCs of interest should continue to improve, expanding the use of odour as an acceptable form of forensic evidence. PMID:26101287

  6. Lightning Imaging Sensor (LIS) for the Earth Observing System

    NASA Technical Reports Server (NTRS)

    Christian, Hugh J.; Blakeslee, Richard J.; Goodman, Steven J.

    1992-01-01

    Scientific objectives and instrument characteristics are given for a calibrated optical LIS for the EOS and for the Tropical Rainfall Measuring Mission (TRMM), which was designed to acquire and study the distribution and variability of total lightning on a global basis. The LIS can be traced to a lightning mapper sensor planned for flight on the GOES meteorological satellites. The LIS consists of a staring imager optimized to detect and locate lightning. It will detect and locate lightning with storm-scale resolution (i.e., 5 to 10 km) over a large region of the Earth's surface along the orbital track of the satellite, mark the time of occurrence of the lightning, and measure the radiant energy. The LIS will have a nearly uniform 90% detection efficiency within the area viewed by the sensor and will detect intracloud and cloud-to-ground discharges during day and night conditions. It will also monitor individual storms and storm systems long enough to obtain a measure of the lightning flash rate while they are within its field of view. The LIS attributes include low cost, low weight and power, a low data rate, and important science. The LIS will support studies of the hydrological cycle, the general circulation, and sea surface temperature variations, along with examinations of the electrical coupling of thunderstorms with the ionosphere and magnetosphere, and observations and modeling of the global electric circuit.

  7. Brain regions involved in subprocesses of small-space episodic object-location memory: a systematic review of lesion and functional neuroimaging studies.

    PubMed

    Zimmermann, Kathrin; Eschen, Anne

    2017-04-01

    Object-location memory (OLM) enables us to keep track of the locations of objects in our environment. The neurocognitive model of OLM (Postma, A., Kessels, R. P. C., & Van Asselen, M. (2004). The neuropsychology of object-location memory. In G. L. Allen (Ed.), Human spatial memory: Remembering where (pp. 143-160). Mahwah, NJ: Lawrence Erlbaum; Postma, A., Kessels, R. P. C., & Van Asselen, M. (2008). How the brain remembers and forgets where things are: The neurocognition of object-location memory. Neuroscience & Biobehavioral Reviews, 32, 1339-1345. doi:10.1016/j.neubiorev.2008.05.001) proposes that distinct brain regions are specialised for different subprocesses of OLM (object processing, location processing, and object-location binding; categorical and coordinate OLM; egocentric and allocentric OLM). It was based mainly on findings from lesion studies. However, recent episodic memory studies point to a contribution of additional or different brain regions to object and location processing within episodic OLM. To evaluate and update the neurocognitive model of OLM, we therefore conducted a systematic literature search for lesion as well as functional neuroimaging studies contrasting small-space episodic OLM with object memory or location memory. We identified 10 relevant lesion studies and 8 relevant functional neuroimaging studies. We could confirm some of the proposals of the neurocognitive model of OLM, but also differing hypotheses from episodic memory research, about which brain regions are involved in the different subprocesses of small-space episodic OLM. In addition, we were able to identify new brain regions as well as important research gaps.

  8. The use of forward looking infrared to locate bird carcasses in agricultural areas

    USGS Publications Warehouse

    Healy, J.M.

    2001-01-01

    Helicopter-mounted Forward Looking Infrared has mainly been used for large animal censuses. I examined the use of this instrument in locating bird carcasses in agricultural fields to improve current carcass searching techniques. Mallard (Anas platyrhynchos) and northern bobwhite quail (Colinus virginianus) carcasses were measured with an infrared thermometer immediately following death and for 5 consecutive nights to determine the optimal time for detection. Preliminary flights were conducted to design a protocol that was used in test flights. Bird species (mallard versus quail) and cover type (bare ground versus short grass) were compared in the flights. Carcasses were recovered with the aid of Global Positioning Systems. Carcasses remained above ambient ground temperatures for all or part of night 1. Quail carcass temperatures decreased faster than mallard carcasses. In warmer weather, carcass temperatures increased 3-5 nights following death. In colder weather, carcasses were 1-2 °C cooler than the ground after the first night. Mallard and quail carcasses were both detected on bare ground and short grass cover types with Forward Looking Infrared. The carcass recovery rates were 40% and 30% on bare ground and short grass, respectively. There were no significant differences in detection for species or cover type. In warmer weather, carcasses could be detected for several hours following death and again 3-5 nights after death. Carcasses may be detected as objects cooler than the ground in colder weather. Forward Looking Infrared was successful in detecting mallard and quail carcasses. Further research should evaluate improved mapping techniques to enhance carcass recovery.

  9. High resolution mapping of development in the wildland-urban interface using object based image extraction.

    PubMed

    Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J

    2016-10-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
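
    A minimal sketch of the buffer-based accuracy assessment described above, under the simplifying assumption that extracted buildings and control footprints are both reduced to centroid coordinates; real assessments would use polygon overlays, so the function and names below are only an illustrative stand-in.

    import numpy as np
    from scipy.spatial import cKDTree

    def detection_rate(extracted_xy, control_xy, buffer_m):
        # Fraction of control footprints (represented here by centroids) that
        # have at least one extracted building within buffer_m metres.
        tree = cKDTree(np.asarray(extracted_xy, dtype=float))
        dists, _ = tree.query(np.asarray(control_xy, dtype=float), k=1)
        return float(np.mean(dists <= buffer_m))

    # Synthetic coordinates only, to show the sensitivity to buffer distance.
    rng = np.random.default_rng(0)
    control = rng.uniform(0, 1000, size=(200, 2))                 # control centroids (m)
    extracted = control + rng.normal(0, 15, size=control.shape)   # offset extractions
    for buf in (0, 10, 25, 50, 100):
        print(buf, "m buffer ->", round(detection_rate(extracted, control, buf), 2))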

  10. High resolution mapping of development in the wildland-urban interface using object based image extraction

    USGS Publications Warehouse

    Caggiano, Michael D.; Tinkham, Wade T.; Hoffman, Chad; Cheng, Antony S.; Hawbaker, Todd J.

    2016-01-01

    The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI in diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches. This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.

  11. Preliminary study of near surface detections at geothermal field using optic and SAR imageries

    NASA Astrophysics Data System (ADS)

    Kurniawahidayati, Beta; Agoes Nugroho, Indra; Syahputra Mulyana, Reza; Saepuloh, Asep

    2017-12-01

    Current remote sensing technologies show that surface manifestations of a geothermal system can be detected with optical and SAR remote sensing, but assessing targets beneath the near-surface layer with such surficial methods needs further study. This study presents preliminary results of using optical and SAR remote sensing imagery to detect near-surface geothermal manifestations at and around Mt. Papandayan, West Java, Indonesia. The data used in this study were Landsat-8 OLI/TIRS images for delineating the geothermal manifestation prospect area and Advanced Land Observing Satellite (ALOS) Phased Array type L-band Synthetic Aperture Radar (PALSAR) level 1.1 data for extracting lineaments and their density. We assumed that the lineaments correlate with near-surface structures because of the long L-band wavelength of about 23.6 cm. Near-surface manifestation prospect areas were delineated by visual comparison of the Landsat-8 RGB True Colour Composite of bands 4, 3, 2 (TCC), the False Colour Composite of bands 5, 6, 7 (FCC), and the ALOS PALSAR lineament density map. Visual properties of ground objects were distinguished from the interaction of electromagnetic radiation with each object, i.e., whether it reflects, scatters, absorbs, or emits radiation, depending on its molecular composition, macroscopic scale, and geometry. The TCC and FCC composites produced 6 and 7 surface manifestation zones, respectively, according to visual classification. The classified images were then compared with a Normalized Difference Vegetation Index (NDVI) map to assess the influence of surface vegetation on the imagery, and geothermal areas were classified based on the NDVI. The TCC image is more sensitive to vegetation than the FCC image; the latter composite produced a better result for visually identifying geothermal manifestations, as shown by its more detailed detected zones. According to the lineament density analysis, the high-density area on the peak of Papandayan overlaps with zones 1 and 2 of the FCC. Comparing with the extracted lineament density, we interpret that the near-surface manifestation is located in zones 1 and 2 of the FCC image.
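
    A minimal sketch of how an NDVI layer of the kind used above might be computed from Landsat-8 OLI reflectance, assuming band 5 (near-infrared) and band 4 (red) rasters are already loaded as arrays; the arrays, threshold, and names are hypothetical and not the authors' processing chain.

    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
        # For Landsat-8 OLI reflectance, NIR is band 5 and red is band 4.
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    # Stand-in arrays; in practice these would be the band 5 and band 4 rasters.
    b5 = np.random.rand(100, 100)
    b4 = np.random.rand(100, 100)
    v = ndvi(b5, b4)
    low_vegetation = v < 0.2   # low NDVI often marks bare or thermally altered ground
    print("fraction of low-NDVI pixels:", float(low_vegetation.mean()))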

  12. Processing the presence, placement, and properties of a distractor in spatial language tasks.

    PubMed

    Carlson, Laura A; Hill, Patrick L

    2008-03-01

    A common way to describe the location of an object is to spatially relate it to a nearby object. For such descriptions, the object being described is referred to as the located object; the object to which it is spatially related is referred to as the reference object. Typically, however, there are many nearby objects (distractors), resulting in the need for selection. We report three experiments that examine the extent to which a distractor in the display is processed during the selection of a reference object. Using acceptability ratings and production measures, we show that the presence and the placement of a distractor have a significant impact on the assessment of the spatial relation between the located and reference objects; there is also evidence that the properties of the distractor are processed, but only under limited conditions. One implication is that the dimension that is most relevant to reference object selection is its spatial relation to the located object, rather than its salience with respect to other objects in the display.

  13. HERBIG-HARO OBJECTS IN THE LUPUS I AND III MOLECULAR CLOUDS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Hongchi; Henning, Thomas

    2009-10-15

    We performed a deep search for Herbig-Haro (HH) objects toward the Lupus I and III clouds, covering a sky area of ~1 and ~0.5 deg², respectively. In total, 11 new HH objects, HH 981-991, were discovered. The HH objects both in Lupus I and in Lupus III tend to be concentrated in small areas. The HH objects detected in Lupus I are located in a region of radius 0.26 pc near the young star Sz 68. The abundance of HH objects shows that this region of the cloud is active in on-going star formation. HH objects in the Lup III cloud are concentrated in the central part of the cloud around the Herbig Ae/Be stars HR 5999 and 6000. HH 981 and 982 in Lupus I are probably driven by the young brown dwarf SSTc2d J154457.9-342340, which has a mass of 50 M_J. HH 990 and 991 in Lup III align well with the HH 600 jet emanating from the low-mass star Par-Lup3-4, and are probably excited by this low-mass star of spectral type M5. High proper motions for HH 228 W, E, and E2 are measured, which confirms that they are excited by the young star Th 28. In contrast, HH 78 exhibits no measurable proper motion in the time span of 18 years, indicating that HH 78 is unlikely to be part of the HH 228 flow. The HH objects in Lup I and III are generally weak in terms of brightness and dimension in comparison to HH objects we detected with the same technique in the R CrA and Cha I clouds. Through a comparison with the survey results from the Spitzer c2d program, we find that our optical survey is more sensitive, in terms of detection rate, than the Spitzer IRAC survey to high-velocity outflows in the Lup I and III clouds.

  14. Generalized local emission tomography

    DOEpatents

    Katsevich, Alexander J.

    1998-01-01

    Emission tomography enables locations and values of internal isotope density distributions to be determined from radiation emitted from the whole object. In the method for locating the values of discontinuities, the intensities of radiation emitted from either the whole object or a region of the object containing the discontinuities are inputted to a local tomography function f_Λ^(φ) to define the location S of the isotope density discontinuity. The asymptotic behavior of f_Λ^(φ) is determined in a neighborhood of S, and the value for the discontinuity is estimated from the asymptotic behavior of f_Λ^(φ), knowing pointwise values of the attenuation coefficient within the object. In the method for determining the location of the discontinuity, the intensities of radiation emitted from an object are inputted to a local tomography function f_Λ^(φ) to define the location S of the density discontinuity and the location Γ of the attenuation coefficient discontinuity. Pointwise values of the attenuation coefficient within the object need not be known in this case.
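
    For background, the classical (non-attenuated) two-dimensional local tomography operator is written out below in LaTeX. This is offered only as context, on the assumption that the patent's generalized function behaves analogously; the generalized f_Λ^(φ) for attenuated emission data itself is not reproduced here.

    % Background only: classical local tomography for the unattenuated Radon transform.
    \[
      f_\Lambda(x) \;=\; (-\Delta)^{1/2} f(x)
      \;=\; -\frac{1}{4\pi}\int_{0}^{2\pi}
            \left.\frac{\partial^{2}}{\partial p^{2}}\,(Rf)(\theta,p)\right|_{p \,=\, x\cdot\theta}\, d\theta ,
    \]
    where $(Rf)(\theta,p)$ is the Radon transform of the density $f$. Because
    $f_\Lambda$ has the same singular support as $f$, its jumps occur at the same
    locations $S$ as those of $f$, so purely local data near $S$ suffice to locate
    the discontinuities, and the jump sizes can be estimated from the behavior of
    $f_\Lambda$ near $S$.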

  15. The Role of Local and Distal Landmarks in the Development of Object Location Memory

    ERIC Educational Resources Information Center

    Bullens, Jessie; Klugkist, Irene; Postma, Albert

    2011-01-01

    To locate objects in the environment, animals and humans use visual and nonvisual information. We were interested in children's ability to relocate an object on the basis of self-motion and local and distal color cues for orientation. Five- to 9-year-old children were tested on an object location memory task in which, between presentation and…

  16. Object recognition and pose estimation of planar objects from range data

    NASA Technical Reports Server (NTRS)

    Pendleton, Thomas W.; Chien, Chiun Hong; Littlefield, Mark L.; Magee, Michael

    1994-01-01

    The Extravehicular Activity Helper/Retriever (EVAHR) is a robotic device currently under development at the NASA Johnson Space Center that is designed to fetch objects or to assist in retrieving an astronaut who may have become inadvertently de-tethered. The EVAHR will be required to exhibit a high degree of intelligent autonomous operation and will base much of its reasoning upon information obtained from one or more three-dimensional sensors that it will carry and control. At the highest level of visual cognition and reasoning, the EVAHR will be required to detect objects, recognize them, and estimate their spatial orientation and location. The recognition phase and estimation of spatial pose will depend on the ability of the vision system to reliably extract geometric features of the objects such as whether the surface topologies observed are planar or curved and the spatial relationships between the component surfaces. In order to achieve these tasks, three-dimensional sensing of the operational environment and objects in the environment will therefore be essential. One of the sensors being considered to provide image data for object recognition and pose estimation is a phase-shift laser scanner. The characteristics of the data provided by this scanner have been studied and algorithms have been developed for segmenting range images into planar surfaces, extracting basic features such as surface area, and recognizing the object based on the characteristics of extracted features. Also, an approach has been developed for estimating the spatial orientation and location of the recognized object based on orientations of extracted planes and their intersection points. This paper presents some of the algorithms that have been developed for the purpose of recognizing and estimating the pose of objects as viewed by the laser scanner, and characterizes the desirability and utility of these algorithms within the context of the scanner itself, considering data quality and noise.
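
    The segmentation step described above fits planes to patches of range data; a minimal sketch of such a fit (least-squares plane normal via SVD, plus a planarity check) is shown below, assuming a patch of 3-D points is available. The names and thresholds are hypothetical; this is not the EVAHR code itself.

    import numpy as np

    def fit_plane(points):
        # Least-squares plane fit to an (N, 3) array of range points.
        # Returns (unit normal n, point c on the plane); the plane is n.(x - c) = 0.
        pts = np.asarray(points, dtype=float)
        c = pts.mean(axis=0)
        # The right singular vector with the smallest singular value is the normal.
        _, _, vt = np.linalg.svd(pts - c)
        n = vt[-1]
        return n / np.linalg.norm(n), c

    def planarity_rms(points, n, c):
        # RMS point-to-plane distance; small values indicate a planar patch.
        d = (np.asarray(points, dtype=float) - c) @ n
        return float(np.sqrt(np.mean(d ** 2)))

    # Hypothetical usage: decide whether a range-image patch is planar or curved.
    patch = np.random.rand(200, 3)
    patch[:, 2] = 0.05 * patch[:, 0]          # synthetic, nearly planar surface
    normal, centroid = fit_plane(patch)
    print("planar" if planarity_rms(patch, normal, centroid) < 1e-3 else "curved")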

  17. Multi-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events in Utah

    DTIC Science & Technology

    2008-09-30

    techniques for detecting, associating, and locating infrasound signals at single and multiple arrays and then combining the processed results with... was detected and located by both infrasound and seismic instruments (Figure 3). Infrasound signals at all three arrays, from one of the explosions, are...
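
    The report combines detections across arrays; one common way such multi-array results are combined is to intersect the back-azimuths estimated at two arrays. The sketch below illustrates that cross-bearing idea on a flat local grid; it is an illustrative assumption with hypothetical coordinates, not a method taken from the report.

    import numpy as np

    def cross_bearing_location(p1, az1_deg, p2, az2_deg):
        # Locate a source from two arrays' back-azimuths (degrees clockwise from
        # north) on a local flat-earth grid (x east, y north, in km) by solving
        # p1 + t1*d1 = p2 + t2*d2 for the intersection of the two bearing lines.
        d1 = np.array([np.sin(np.radians(az1_deg)), np.cos(np.radians(az1_deg))])
        d2 = np.array([np.sin(np.radians(az2_deg)), np.cos(np.radians(az2_deg))])
        A = np.column_stack([d1, -d2])
        t = np.linalg.solve(A, np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float))
        return np.asarray(p1, dtype=float) + t[0] * d1

    # Hypothetical example: two arrays 60 km apart observing the same explosion.
    print(cross_bearing_location(p1=(0.0, 0.0), az1_deg=45.0,
                                 p2=(60.0, 0.0), az2_deg=315.0))   # ~ [30. 30.]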

  18. Visual awareness of objects and their colour.

    PubMed

    Pilling, Michael; Gellatly, Angus

    2011-10-01

    At any given moment, our awareness of what we 'see' before us seems to be rather limited. If, for instance, a display containing multiple objects (red or green disks) is shown and one object is suddenly covered at random, observers are often little better than chance at reporting its colour (Wolfe, Reinecke, & Brawn, Visual Cognition, 14, 749-780, 2006). We tested whether, when object attributes (such as colour) are unknown, observers still retain any knowledge of the presence of that object at a display location. Experiments 1-3 involved a task requiring two-alternative (yes/no) responses about the presence or absence of a colour-defined object at a probed location. On this task, if participants knew about the presence of an object at a location, responses indicated that they also knew about its colour. A fourth experiment presented the same displays but required a three-alternative response. This task did result in a data pattern consistent with participants' knowing more about the locations of objects within a display than about their individual colours. However, this location advantage, while highly significant, was rather small in magnitude. Results are compared with those of Huang (Journal of Vision, 10(10, Art. 24), 1-17, 2010), who also reported an advantage for object locations, but under quite different task conditions.

  19. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension

    PubMed Central

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in “The writer picked up the pen from the floor and moved it to the desk,” the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a “look-and-listen” task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension. PMID:29520249

  20. Integration of World Knowledge and Temporary Information about Changes in an Object's Environmental Location during Different Stages of Sentence Comprehension.

    PubMed

    Chen, Xuqian; Yang, Wei; Ma, Lijun; Li, Jiaxin

    2018-01-01

    Recent findings have shown that information about changes in an object's environmental location in the context of discourse is stored in working memory during sentence comprehension. However, in these studies, changes in the object's location were always consistent with world knowledge (e.g., in "The writer picked up the pen from the floor and moved it to the desk," the floor and the desk are both common locations for a pen). How do people accomplish comprehension when the object-location information in working memory is inconsistent with world knowledge (e.g., a pen being moved from the floor to the bathtub)? In two visual world experiments, with a "look-and-listen" task, we used eye-tracking data to investigate comprehension of sentences that described location changes under different conditions of appropriateness (i.e., the object and its location were typically vs. unusually coexistent, based on world knowledge) and antecedent context (i.e., contextual information that did vs. did not temporarily normalize unusual coexistence between object and location). Results showed that listeners' retrieval of the critical location was affected by both world knowledge and working memory, and the effect of world knowledge was reduced when the antecedent context normalized unusual coexistence of object and location. More importantly, activation of world knowledge and working memory seemed to change during the comprehension process. These results are important because they demonstrate that interference between world knowledge and information in working memory appears to be activated dynamically during sentence comprehension.
