Point pattern match-based change detection in a constellation of previously detected objects
Paglieroni, David W.
2016-06-07
A method and system is provided that applies attribute- and topology-based change detection to objects that were detected on previous scans of a medium. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, detection strength, size, elongation, orientation, etc. The locations define a three-dimensional network topology forming a constellation of previously detected objects. The change detection system stores attributes of the previously detected objects in a constellation database. The change detection system detects changes by comparing the attributes and topological consistency of newly detected objects encountered during a new scan of the medium to previously detected objects in the constellation database. The change detection system may receive the attributes of the newly detected objects as the objects are detected by an object detection system in real time.
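The attribute comparison at the heart of such a constellation-based system can be sketched in a few lines. This is a minimal illustration, assuming hypothetical field names, weights, and a threshold; the patent does not specify this schema.

```python
import math

# Hypothetical attribute records for detected objects; the field names
# ("location", "size", "orientation") are assumptions, not the patent's schema.
def attribute_distance(prev, new, weights):
    """Weighted distance between two objects' attribute vectors."""
    d_loc = math.dist(prev["location"], new["location"])
    d_size = abs(prev["size"] - new["size"])
    # Orientation is a modular quantity: 170 deg and 10 deg differ by 20 deg.
    d_orient = abs(prev["orientation"] - new["orientation"]) % 180
    return (weights["loc"] * d_loc
            + weights["size"] * d_size
            + weights["orient"] * min(d_orient, 180 - d_orient))

def is_change(prev, new, weights, threshold):
    """Flag a change when the attribute distance exceeds a threshold."""
    return attribute_distance(prev, new, weights) > threshold
```

A topological check would additionally compare each object's distances to its constellation neighbours, which this sketch omits.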
Attribute and topology based change detection in a constellation of previously detected objects
Paglieroni, David W.; Beer, Reginald N.
2016-01-19
A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
Imaging, object detection, and change detection with a polarized multistatic GPR array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beer, N. Reginald; Paglieroni, David W.
A polarized detection system performs imaging, object detection, and change detection factoring in the orientation of an object relative to the orientation of transceivers. The polarized detection system may operate in one of several modes of operation based on whether the imaging, object detection, or change detection is performed separately for each transceiver orientation. In combined change mode, the polarized detection system performs imaging, object detection, and change detection separately for each transceiver orientation, and then combines changes across polarizations. In combined object mode, the polarized detection system performs imaging and object detection separately for each transceiver orientation, and then combines objects across polarizations and performs change detection on the result. In combined image mode, the polarized detection system performs imaging separately for each transceiver orientation, and then combines images across polarizations and performs object detection followed by change detection on the result.
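One plausible reading of "combines changes across polarizations" in combined change mode is a union of the per-polarization change sets; the dictionary-of-lists input format below is an assumption for illustration, not the patent's interface.

```python
def combined_change_mode(changes_by_pol):
    """Union the change sets detected separately for each transceiver
    orientation (e.g. "HH", "VV"). One plausible combination rule;
    the patent does not pin the operator down here."""
    combined = set()
    for changes in changes_by_pol.values():
        combined |= set(changes)
    return combined
```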
Determining root correspondence between previously and newly detected objects
Paglieroni, David W.; Beer, N Reginald
2014-06-17
A system that applies attribute and topology based change detection to networks of objects that were detected on previous scans of a structure, roadway, or area of interest. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, size, elongation, orientation, etc. The topology of the network of previously detected objects is maintained in a constellation database that stores attributes of previously detected objects and implicitly captures the geometrical structure of the network. A change detection system detects change by comparing the attributes and topology of new objects detected on the latest scan to the constellation database of previously detected objects.
A dual-process account of auditory change detection.
McAnally, Ken I; Martin, Russell L; Eramudugolla, Ranmalee; Stuart, Geoffrey W; Irvine, Dexter R F; Mattingley, Jason B
2010-08-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed objects with those predicted by change-detection models based on signal detection theory (SDT) and high-threshold theory (HTT). Detected changes were not identified as accurately as predicted by models based on either theory, suggesting that some changes are detected by a process that does not support change identification. Undetected changes were identified as accurately as predicted by the HTT model but much less accurately than predicted by the SDT models. The process underlying change detection was investigated further by determining receiver-operating characteristics (ROCs). The ROCs did not conform to those predicted by either an SDT or an HTT model but were well modeled by a dual-process model incorporating HTT and SDT components. The dual-process model also accurately predicted the rates at which detected and undetected changes were correctly identified.
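The two model families compared here have standard closed forms. A minimal sketch of the textbook quantities, equal-variance SDT sensitivity (d') and the high-threshold correction for guessing, rather than the paper's full dual-process fit:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance SDT sensitivity index: d' = z(H) - z(FA)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def htt_detect_prob(hit_rate, fa_rate):
    """High-threshold detection probability, correcting the hit rate
    for guessing: p = (H - FA) / (1 - FA)."""
    return (hit_rate - fa_rate) / (1.0 - fa_rate)
```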
NASA Astrophysics Data System (ADS)
Wang, G. H.; Wang, H. B.; Fan, W. F.; Liu, Y.; Chen, C.
2018-04-01
Traditional change detection algorithms rely mainly on the spectral information of image patches and fail to effectively mine and fuse the complementary strengths of multiple image features. Borrowing ideas from object-oriented analysis, this article proposes a multi-feature-fusion change detection algorithm for remote sensing images. First, image objects are produced by multi-scale segmentation; then a color histogram and a linear-gradient histogram are calculated for each object. The Earth Mover's Distance (EMD) operator is used to compute the color distance and the edge-line feature distance between corresponding objects from different periods, and an adaptively weighted combination of the color feature distance and the edge-line distance is constructed as the object heterogeneity. Finally, the image-patch change detection results are obtained by curvature histogram analysis. The experimental results show that the method can fully fuse color and edge-line features, thus improving the accuracy of change detection.
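For 1-D histograms on identical, unit-spaced bins, the EMD reduces to a cumulative-difference sum. A sketch of that special case plus the weighted heterogeneity combination; the paper's adaptive weighting rule is not specified here, so `w_color` is left as a free parameter:

```python
def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized 1-D histograms
    with identical, unit-spaced bins (cumulative-difference form)."""
    total, carry = 0.0, 0.0
    for a, b in zip(h1, h2):
        carry += a - b          # mass still to be moved past this bin
        total += abs(carry)
    return total

def heterogeneity(color_dist, edge_dist, w_color):
    """Weighted combination of color and edge-line distances; the
    adaptive choice of w_color is left to the caller."""
    return w_color * color_dist + (1.0 - w_color) * edge_dist
```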
Change detection from remotely sensed images: From pixel-based to object-based approaches
NASA Astrophysics Data System (ADS)
Hussain, Masroor; Chen, Dongmei; Cheng, Angela; Wei, Hui; Stanley, David
2013-06-01
The appetite for up-to-date information about earth's surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and has been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques which focus mainly on the spectral values and mostly ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
NASA Astrophysics Data System (ADS)
Zhao, Z.
2011-12-01
Changes in the Antarctic ice sheet and the floating ice around it are of great significance for global change research. In the context of global warming, the rapid change of the Antarctic continental margin, the calving of ice shelves, and the movement of icebergs are all closely related to climate change and ocean circulation. Using automatic change detection technology to rapidly locate melting regions of the polar ice sheet and the positions of drifting ice would not only strongly support global change research but also lay the foundation for an early warning mechanism for polar ice melt and ice displacement. This paper proposes an automatic change detection method using object-based segmentation technology. The process includes three parts: ice extraction using image segmentation, object-based ice tracking, and change detection based on similarity matching. An approach based on similarity matching of feature vectors is proposed, which uses the area, perimeter, Hausdorff distance, contour, shape, and other information of each ice object. Multi-temporal LANDSAT ETM+ data, Chinese environment and disaster satellite HJ-1B data, and MODIS 1B data are used to detect changes in floating ice at the Antarctic continental margin. As a sample, we selected two dates of ETM+ data (January 7, 2003 and January 16, 2003) covering the Antarctic continental margin near Lazarev Bay, from 70.27454853 degrees south latitude, 12.38573410 degrees longitude to 71.44474167 degrees south latitude, 10.39252222 degrees longitude, comprising 11628 sq km of the Antarctic continental margin. Over this period the area of floating ice decreased by 371 km2 and the number of floating ice objects decreased by 402. In addition, changes in all the floating ice within 1200 km of the Antarctic margin were detected using MODIS 1B data: from January 1, 2008 to January 7, 2008, the floating ice area decreased by 21644732 km2 and the number of objects decreased by 83080.
The results show that the object-based information extraction algorithm can obtain more precise details of a single object, while the change detection method based on similarity matching can effectively track the changes of floating ice.
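The Hausdorff-distance component of this similarity matching can be sketched directly. The feature names and thresholds below are illustrative assumptions, not the paper's exact feature vector:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

def match_ice(obj1, obj2, max_area_ratio=0.2, max_hausdorff=5.0):
    """Declare two ice objects the same if their areas and contours
    agree; field names and thresholds are illustrative."""
    area_ok = abs(obj1["area"] - obj2["area"]) <= max_area_ratio * obj1["area"]
    shape_ok = hausdorff(obj1["contour"], obj2["contour"]) <= max_hausdorff
    return area_ok and shape_ok
```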
A habituation based approach for detection of visual changes in surveillance camera
NASA Astrophysics Data System (ADS)
Sha'abani, M. N. A. H.; Adan, N. F.; Sabani, M. S. M.; Abdullah, F.; Nadira, J. H. S.; Yasin, M. S. M.
2017-09-01
This paper investigates a habituation-based approach to detecting visual changes using video surveillance systems in a passive environment. Various techniques have been introduced for dynamic environments, such as motion detection, object classification and behaviour analysis. In a passive environment, however, most of the scenes recorded by the surveillance system are normal, so running a complex analysis at all times is computationally expensive, especially at high video resolutions. A mechanism of attention is therefore required, in which the system responds only to abnormal events. This paper proposes a novelty detection mechanism for detecting visual changes and a habituation-based approach for measuring the level of novelty. The objective of the paper is to investigate the feasibility of the habituation-based approach in detecting visual changes. Experimental results show that the approach is able to accurately detect the presence of novelty as deviations from the learned knowledge.
Illumination Invariant Change Detection (iicd): from Earth to Mars
NASA Astrophysics Data System (ADS)
Wan, X.; Liu, J.; Qin, M.; Li, S. Y.
2018-04-01
Multi-temporal Earth observation and Mars orbital imagery with frequent repeat coverage provide great capability for planetary surface change detection. When comparing two images taken at different times of day or in different seasons, the variation of topographic shades and shadows caused by the change of sunlight angle can be so significant that it overwhelms the real object and environmental changes, making automatic detection unreliable. An effective change detection algorithm therefore has to be robust to illumination variation. This paper presents our research on developing and testing an Illumination Invariant Change Detection (IICD) method based on the robustness of phase correlation (PC) to variation in solar illumination for image matching. The IICD is based on two key functions: i) initial change detection based on a saliency map derived from pixel-wise dense PC matching and ii) change quantization, which combines change type identification, motion estimation and precise appearance change identification. Experiments using multi-temporal Landsat 7 ETM+ satellite images, RapidEye satellite images and Mars HiRISE images demonstrate that our frequency-based image matching method can reach sub-pixel accuracy, and thus that the proposed IICD method can effectively detect and precisely segment large-scale changes such as landslides as well as small object changes such as a Mars rover, under daily and seasonal sunlight changes.
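Phase correlation itself is standard. A minimal integer-shift version using the normalized cross-power spectrum; the paper's dense, sub-pixel variant is more involved:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) shift of image b relative to a
    via the normalized cross-power spectrum."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    F /= np.abs(F) + 1e-12              # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peaks to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Because the peak location depends only on the phase difference, the estimate is largely insensitive to global brightness changes, which is the property the IICD method exploits.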
Object-based change detection method using refined Markov random field
NASA Astrophysics Data System (ADS)
Peng, Daifeng; Zhang, Yongjun
2017-01-01
In order to fully consider the local spatial constraints between neighboring objects in object-based change detection (OBCD), an OBCD approach is presented by introducing a refined Markov random field (MRF). First, two periods of images are stacked and segmented to produce image objects. Second, object spectral and textural histogram features are extracted, and the G-statistic is used to measure the distance between different histogram distributions; object heterogeneity is calculated by combining the spectral and textural histogram distances with adaptive weights. Third, an expectation-maximization algorithm is applied to determine the change category of each object, generating the initial change map. Finally, a refined change map is produced by the proposed refined object-based MRF method. Three experiments were conducted and compared with state-of-the-art unsupervised OBCD methods to evaluate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method obtains the highest accuracy among the methods compared, confirming its validity and effectiveness in OBCD.
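The G-statistic for comparing two histograms is the likelihood-ratio counterpart of chi-squared. A sketch that pools the two samples to form expected counts, which is one common formulation; the paper may normalize differently:

```python
import math

def g_statistic(h1, h2):
    """G-test statistic comparing two count histograms:
    G = 2 * sum O * ln(O / E), with E from the pooled samples."""
    n1, n2 = sum(h1), sum(h2)
    g = 0.0
    for o1, o2 in zip(h1, h2):
        pooled = (o1 + o2) / (n1 + n2)   # pooled bin proportion
        for o, n in ((o1, n1), (o2, n2)):
            e = pooled * n               # expected count under no difference
            if o > 0:
                g += 2.0 * o * math.log(o / e)
    return g
```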
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify the temporal effect on an urban area, for urban evolution studies or for damage assessment in disaster cases. In this context, change analysis may involve the available satellite images at different resolutions for quick response. In this paper, to avoid the resampling artifacts and salt-and-pepper effects of traditional methods, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, shape analysis is practical for detecting building changes in multi-scale imagery. The proposed methodology can therefore handle different pixel sizes when identifying new and demolished buildings in an urban area using the geometric properties of the objects of interest. After rectifying the multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, centroid-coincident matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified as those whose centroid distances exceed the RMS value (no match in the same location).
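Centroid-coincident matching by two-way nearest-neighbour search can be sketched as follows; `rms` plays the role of the registration RMS threshold described above, and the list-of-centroids interface is an assumption for illustration:

```python
import math

def centroid_matches(centroids_t0, centroids_t1, rms):
    """Two-way nearest-centroid matching: a building whose nearest
    counterpart in the other date lies farther than `rms` has no match,
    so it is flagged demolished (T0 side) or new (T1 side)."""
    def nearest(c, others):
        return min(others, key=lambda o: math.dist(c, o)) if others else None
    demolished, new = [], []
    for c in centroids_t0:
        n = nearest(c, centroids_t1)
        if n is None or math.dist(c, n) > rms:
            demolished.append(c)
    for c in centroids_t1:
        n = nearest(c, centroids_t0)
        if n is None or math.dist(c, n) > rms:
            new.append(c)
    return new, demolished
```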
A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few are suitable for real-time use, and most of those that are remain limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: background subtraction, Gaussian mixture models, wavelet-based methods and optical-flow-based methods. The four methods were evaluated using two different sets of cameras and two different scenes. They were implemented in MATLAB and their results compared on completeness of detected objects, noise, sensitivity to light change, processing time, etc. The comparison shows that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
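Of the four families compared, plain background subtraction is the simplest. A running-average variant in NumPy; the parameters `alpha` and `thresh` are illustrative defaults, not the paper's settings:

```python
import numpy as np

def background_subtract(frames, alpha=0.05, thresh=25.0):
    """Running-average background model: flag pixels that deviate
    from the background by more than `thresh` gray levels."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)
        bg = (1.0 - alpha) * bg + alpha * f   # slow background update
    return masks
```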
Hou, Bin; Wang, Yunhong; Liu, Qingjie
2016-01-01
Characterizations of up to date information of the Earth’s surface are an important application providing insights to urban planning, resources monitoring and environmental studies. A large number of change detection (CD) methods have been developed to solve them by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further provides challenges to traditional CD methods and opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, the hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and morphological building index (MBI) extracted on difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. This study was checked for effectiveness using visual evaluation and numerical evaluation. PMID:27618903
Mixing geometric and radiometric features for change classification
NASA Astrophysics Data System (ADS)
Fournier, Alexandre; Descombes, Xavier; Zerubia, Josiane
2008-02-01
Most basic change detection algorithms use a pixel-based approach. Whereas such an approach is quite well suited to monitoring large-area changes (such as urban growth) in low-resolution images, an object-based approach seems more relevant when change detection is specifically aimed at targets such as small buildings and vehicles. In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution, ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing the bitemporal radiometry of each pixel) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the resulting zone shapes, allowing us to refine the initial rough classification by integrating the polygon orientations into the state space. Tests are currently conducted on Quickbird data.
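A standard choice for the polygonal approximation step is the Ramer-Douglas-Peucker algorithm; the paper does not name its algorithm, so this is an assumed stand-in:

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification: recursively keep
    the point farthest from the end-to-end chord if it exceeds eps."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1) or 1e-12
    dmax, idx = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        # perpendicular distance from (x, y) to the chord
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```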
Object-based change detection: dimension of damage in residential areas of Abu Suruj, Sudan
NASA Astrophysics Data System (ADS)
Demharter, Timo; Michel, Ulrich; Ehlers, Manfred; Reinartz, Peter
2011-11-01
Given the importance of change detection, especially in the field of crisis management, this paper discusses the advantages of object-based change detection. This project and the methods used provide an opportunity to coordinate relief actions strategically. The principal objective of this project was to develop an algorithm for rapidly detecting damaged and destroyed buildings in the area of Abu Suruj. This Sudanese village is located in West Darfur and has fallen victim to civil war. The software eCognition Developer was used to perform an object-based change detection on two panchromatic Quickbird 2 images from two different dates: the first shows the area before, and the second after, the massacres in this region. The huts of Abu Suruj were classified by first segmenting them and then classifying them on the basis of geometrical and brightness-related values. The huts were classified as "new", "destroyed" and "preserved" with the help of an automated algorithm. Finally, the results were presented as a map displaying the different conditions of the huts. The project is validated by an accuracy assessment yielding an overall classification accuracy of 90.50 percent. These change detection results allow aid organizations to provide quick and efficient help where it is needed most.
NASA Astrophysics Data System (ADS)
de Alwis Pitts, Dilkushi A.; So, Emily
2017-12-01
The availability of Very High Resolution (VHR) optical sensors and a growing, frequently updated image archive allow change detection to be used in post-disaster recovery and monitoring for robust and rapid results. The proposed semi-automated GIS object-based method uses readily available pre-disaster GIS data and incorporates existing knowledge into the processing to enhance change detection. It also allows targeting specific types of changes pertaining to similar man-made objects such as buildings and critical facilities. The change detection method is based on pre/post normalized index, gradient of intensity, texture and edge similarity filters within the object, together with a set of training data. More emphasis is put on building edges to capture structural damage when quantifying change after a disaster. Once change is quantified based on the training data, the method can be applied automatically to detect change and thus observe recovery over time across potentially large areas. Analysis over time can also contribute to a full picture of recovery and development after a disaster, giving managers a better understanding of productive management and recovery practices. Recovery and monitoring can be analyzed using the index in zones extending over time from the epicentre of the disaster or from administrative boundaries.
NASA Astrophysics Data System (ADS)
Feizizadeh, Bakhtiar; Blaschke, Thomas; Tiede, Dirk; Moghaddam, Mohammad Hossein Rezaei
2017-09-01
This article presents a method of object-based image analysis (OBIA) for landslide delineation and landslide-related change detection from multi-temporal satellite images. It uses both spatial and spectral information on landslides, through spectral analysis, shape analysis, textural measurements using a gray-level co-occurrence matrix (GLCM), and fuzzy logic membership functionality. Following an initial segmentation step, particular combinations of various information layers were investigated to generate objects. This was achieved by applying multi-resolution segmentation to IRS-1D, SPOT-5, and ALOS satellite imagery in sequential steps of feature selection and object classification, and using slope and flow direction derivatives from a digital elevation model together with topographically-oriented gray-level co-occurrence matrices. Fuzzy membership values were calculated for 11 different membership functions using 20 landslide objects from a landslide training dataset. Six fuzzy operators were used for the final classification and the accuracies of the resulting landslide maps were compared. A Fuzzy Synthetic Evaluation (FSE) approach was adapted for validation of the results and for an accuracy assessment using the landslide inventory database. The FSE approach revealed that the AND operator performed best, with an accuracy of 93.87% for 2005 and 94.74% for 2011, closely followed by the MEAN Arithmetic operator, while the OR and AND (*) operators yielded relatively low accuracies. An object-based change detection was then applied to monitor landslide-related changes that occurred in northern Iran between 2005 and 2011. Knowledge rules to detect possible landslide-related changes were developed by evaluating all possible landslide-related objects for both time steps.
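The fuzzy operators compared above have simple standard definitions. A sketch, where the product form is one common reading of the "AND (*)" operator:

```python
def fuzzy_and(memberships):
    """Minimum operator: the classic fuzzy AND."""
    return min(memberships)

def fuzzy_or(memberships):
    """Maximum operator: the classic fuzzy OR."""
    return max(memberships)

def fuzzy_mean(memberships):
    """Arithmetic-mean operator."""
    return sum(memberships) / len(memberships)

def fuzzy_and_star(memberships):
    """Product ("AND (*)") operator; drops quickly with many inputs,
    consistent with its lower accuracy reported above."""
    p = 1.0
    for m in memberships:
        p *= m
    return p
```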
NASA Astrophysics Data System (ADS)
Işık, Şahin; Özkan, Kemal; Günal, Serkan; Gerek, Ömer Nezih
2018-03-01
Change detection with a background subtraction process remains an unresolved issue and attracts research interest due to challenges encountered in static and dynamic scenes. The key challenge is how to update dynamically changing backgrounds from frames with an adaptive and self-regulated feedback mechanism. To achieve this, we present an effective change detection algorithm for pixelwise changes. A sliding window approach combined with dynamic control of update parameters is introduced for updating background frames, which we call sliding-window-based change detection. Comprehensive experiments on related test videos show that the integrated algorithm yields good objective and subjective performance by overcoming illumination variations, camera jitter, and intermittent object motion. It is argued that the obtained method is a fair alternative in most types of foreground extraction scenarios, unlike case-specific methods, which normally fail on scenarios they were not designed for.
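One way to realize a sliding-window background model is a per-pixel median over the last few frames. This is a simplified stand-in for the paper's adaptive update rules, with illustrative parameters:

```python
import numpy as np
from collections import deque

def sliding_window_change(frames, window=5, thresh=30.0):
    """Use the per-pixel median of the last `window` frames as the
    background and flag large deviations; adaptive in spirit to the
    paper's scheme, though its exact update rules differ."""
    buf = deque(maxlen=window)
    masks = []
    for f in frames:
        f = f.astype(float)
        if len(buf) == window:
            bg = np.median(np.stack(buf), axis=0)
            masks.append(np.abs(f - bg) > thresh)
        buf.append(f)
    return masks
```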
NASA Technical Reports Server (NTRS)
Oommen, Thomas; Rebbapragada, Umaa; Cerminaro, Daniel
2012-01-01
In this study, we perform a case study on imagery from the Haiti earthquake that evaluates a novel object-based approach for characterizing earthquake-induced surface effects of liquefaction against a traditional pixel-based change detection technique. Our technique, which combines object-oriented change detection with discriminant/categorical functions, shows the power of distinguishing earthquake-induced surface effects from changes in buildings using the object properties concavity, convexity, orthogonality and rectangularity. Our results suggest that object-based analysis holds promise for automatically extracting earthquake-induced damage from high-resolution aerial/satellite imagery.
Detailed sensory memory, sloppy working memory.
Sligte, Ilja G; Vandenbroucke, Annelinde R E; Scholte, H Steven; Lamme, Victor A F
2010-01-01
Visual short-term memory (VSTM) enables us to actively maintain information in mind for a brief period of time after stimulus disappearance. According to recent studies, VSTM consists of three stages - iconic memory, fragile VSTM, and visual working memory - with increasingly stricter capacity limits and progressively longer lifetimes. Still, the resolution (or amount of visual detail) of each VSTM stage has remained unexplored and we test this in the present study. We presented people with a change detection task that measures the capacity of all three forms of VSTM, and we added an identification display after each change trial that required people to identify the "pre-change" object. Accurate change detection plus pre-change identification requires subjects to have a high-resolution representation of the "pre-change" object, whereas change detection or identification only can be based on the hunch that something has changed, without exactly knowing what was presented before. We observed that people maintained 6.1 objects in iconic memory, 4.6 objects in fragile VSTM, and 2.1 objects in visual working memory. Moreover, when people detected the change, they could also identify the pre-change object on 88% of the iconic memory trials, on 71% of the fragile VSTM trials and merely on 53% of the visual working memory trials. This suggests that people maintain many high-resolution representations in iconic memory and fragile VSTM, but only one high-resolution object representation in visual working memory.
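Capacity figures like those quoted above (6.1, 4.6, and 2.1 objects) are typically computed with Cowan's K formula for single-probe change detection. A one-line sketch of the standard estimator, not anything specific to this paper:

```python
def cowan_k(hit_rate, fa_rate, set_size):
    """Cowan's K estimate of visual short-term memory capacity for
    single-probe change detection: K = N * (H - FA)."""
    return set_size * (hit_rate - fa_rate)
```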
Object memory and change detection: dissociation as a function of visual and conceptual similarity.
Yeh, Yei-Yu; Yang, Cheng-Ta
2008-01-01
People often fail to detect a change between two visual scenes, a phenomenon referred to as change blindness. This study investigates how a post-change object's similarity to the pre-change object influences memory of the pre-change object and affects change detection. The results of Experiment 1 showed that similarity lowered detection sensitivity but did not affect the speed of identifying the pre-change object, suggesting that similarity between the pre- and post-change objects does not degrade the pre-change representation. Identification speed for the pre-change object was faster than naming the new object regardless of detection accuracy. Similarity also decreased detection sensitivity in Experiment 2 but improved the recognition of the pre-change object under both correct detection and detection failure. The similarity effect on recognition was greatly reduced when 20% of each pre-change stimulus was masked by random dots in Experiment 3. Together the results suggest that the level of pre-change representation under detection failure is equivalent to the level under correct detection and that the pre-change representation is almost complete. Similarity lowers detection sensitivity but improves explicit access in recognition. Dissociation arises between recognition and change detection as the two judgments rely on the match-to-mismatch signal and mismatch-to-match signal, respectively.
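"Detection sensitivity" in paradigms like this one is typically quantified with the signal-detection index d′, the z-transformed hit rate minus the z-transformed false alarm rate. The abstract does not spell out the estimator, so this is a sketch of the standard formula:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d': z(hit rate) - z(false alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative rates, not the study's data:
sensitivity = d_prime(0.84, 0.16)
```

Equal hit and false alarm rates give d′ = 0 (no sensitivity); 0.84 vs. 0.16 gives d′ of roughly 2.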
Angelone, Bonnie L; Levin, Daniel T; Simons, Daniel J
2003-01-01
Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.
Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS
NASA Astrophysics Data System (ADS)
Sofina, N.; Ehlers, M.
2012-08-01
High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for performing a successful analysis it is desirable to take images from the same sensor, acquired at the same time of season, at the same time of day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis could be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
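The DPC idea can be sketched as the fraction of a building's GIS contour that is confirmed by edges detected in the post-event image: an intact building should show most of its outline, a destroyed one should not. The paper's exact definition may differ; the tolerance handling below is an assumption for illustration:

```python
import numpy as np

def detected_part_of_contour(edge_map, contour_pixels, tolerance=1):
    """Fraction of a building's vector contour confirmed by detected image edges.

    edge_map: 2D boolean array of edge pixels extracted from the image.
    contour_pixels: (row, col) pixels rasterized from the GIS building outline.
    A contour pixel counts as detected if any edge lies within `tolerance` pixels.
    """
    h, w = edge_map.shape
    hits = 0
    for r, c in contour_pixels:
        r0, r1 = max(r - tolerance, 0), min(r + tolerance + 1, h)
        c0, c1 = max(c - tolerance, 0), min(c + tolerance + 1, w)
        if edge_map[r0:r1, c0:c1].any():
            hits += 1
    return hits / len(contour_pixels)
```

A DPC near 1 suggests the outline survives in the image; a low value flags a candidate destroyed building for the subsequent classification step.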
Change-based threat detection in urban environments with a forward-looking camera
NASA Astrophysics Data System (ADS)
Morton, Kenneth, Jr.; Ratto, Christopher; Malof, Jordan; Gunter, Michael; Collins, Leslie; Torrione, Peter
2012-06-01
Roadside explosive threats continue to pose a significant risk to soldiers and civilians in conflict areas around the world. These objects are easy to manufacture and procure, but due to their ad hoc nature, they are difficult to reliably detect using standard sensing technologies. Although large roadside explosive hazards may be difficult to conceal in rural environments, urban settings provide a much more complicated background where seemingly innocuous objects (e.g., piles of trash, roadside debris) may be used to obscure threats. Since direct detection of all innocuous objects would flag too many objects to be of use, techniques must be employed to reduce the number of alarms generated and highlight only a limited subset of possibly threatening regions for the user. In this work, change detection techniques are used to reduce false alarm rates and increase detection capabilities for possible threat identification in urban environments. The proposed model leverages data from multiple video streams collected over the same regions by first applying video alignment and then using various distance metrics to detect changes based on image keypoints in the video streams. Data collected at an urban warfare simulation range at an Eastern US test site was used to evaluate the proposed approach, and significant reductions in false alarm rates compared to simpler techniques are illustrated.
Benedek, C; Descombes, X; Zerubia, J
2012-01-01
In this paper, we introduce a new probabilistic method which integrates building extraction with change detection in remotely sensed image pairs. A global optimization process attempts to find the optimal configuration of buildings, considering the observed data, prior knowledge, and interactions between the neighboring building parts. We present methodological contributions in three key issues: 1) We implement a novel object-change modeling approach based on Multitemporal Marked Point Processes, which simultaneously exploits low-level change information between the time layers and object-level building description to recognize and separate changed and unaltered buildings. 2) To answer the challenges of data heterogeneity in aerial and satellite image repositories, we construct a flexible hierarchical framework which can create various building appearance models from different elementary feature-based modules. 3) To simultaneously ensure the convergence, optimality, and computational complexity constraints raised by the increased data quantity, we adopt the quick Multiple Birth and Death optimization technique for change detection purposes, and propose a novel nonuniform stochastic object birth process which generates relevant objects with higher probability based on low-level image features.
Weiqi Zhou; Austin Troy; Morgan Grove
2008-01-01
Accurate and timely information about land cover pattern and change in urban areas is crucial for urban land management decision-making, ecosystem monitoring and urban planning. This paper presents the methods and results of an object-based classification and post-classification change detection of multitemporal high-spatial resolution Emerge aerial imagery in the...
Leising, Kenneth J; Elmore, L Caitlin; Rivera, Jacquelyne J; Magnotti, John F; Katz, Jeffrey S; Wright, Anthony A
2013-09-01
Change detection is commonly used to assess capacity (number of objects) of human visual short-term memory (VSTM). Comparisons with the performance of non-human animals completing similar tasks have shown similarities and differences in object-based VSTM, which is only one aspect ("what") of memory. Another important aspect of memory, which has received less attention, is spatial short-term memory for "where" an object is in space. In this article, we show for the first time that a monkey and pigeons can be accurately trained to identify location changes, much as humans do, in change detection tasks similar to those used to test object capacity of VSTM. The subject's task was to identify (touch/peck) an item that changed location across a brief delay. Both the monkey and pigeons showed transfer to delays longer than the training delay, to greater and smaller distance changes than in training, and to novel colors. These results are the first to demonstrate location-change detection in any non-human species and encourage comparative investigations into the nature of spatial and visual short-term memory.
NASA Astrophysics Data System (ADS)
Luo, Aiwen; An, Fengwei; Zhang, Xiangyu; Chen, Lei; Huang, Zunkai; Jürgen Mattausch, Hans
2018-04-01
Feature extraction techniques are a cornerstone of object detection in computer-vision-based applications. The detection performance of vision-based detection systems is often degraded by, e.g., changes in the illumination intensity of the light source, foreground-background contrast variations or automatic gain control from the camera. In order to avoid such degradation effects, we present a block-based L1-norm-circuit architecture which is configurable for different image-cell sizes, cell-based feature descriptors and image resolutions according to customization parameters from the circuit input. The incorporated flexibility in both the image resolution and the cell size for multi-scale image pyramids leads to lower computational complexity and power consumption. Additionally, an object-detection prototype for performance evaluation in 65 nm CMOS implements the proposed L1-norm circuit together with a histogram of oriented gradients (HOG) descriptor and a support vector machine (SVM) classifier. The proposed parallel architecture with high hardware efficiency enables real-time processing, high detection robustness, small chip-core area as well as low power consumption for multi-scale object detection.
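The cell-based HOG pipeline with an L1 norm can be sketched in software: the L1 gradient magnitude |gx| + |gy| replaces the usual Euclidean magnitude, which is what makes a hardware-friendly, multiplier-free circuit possible. This is only a functional sketch of that idea; the paper's circuit details (bit widths, parallelism, configurability) are not modeled:

```python
import numpy as np

def hog_cells_l1(image, cell=8, bins=9):
    """Per-cell orientation histograms using the L1 gradient magnitude.

    Central-difference gradients; magnitudes accumulated into `bins`
    unsigned-orientation bins per cell of size `cell` x `cell` pixels.
    """
    gx = np.zeros_like(image, dtype=float)
    gy = np.zeros_like(image, dtype=float)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]
    gy[1:-1, :] = image[2:, :] - image[:-2, :]
    mag = np.abs(gx) + np.abs(gy)             # L1 norm instead of sqrt(gx^2 + gy^2)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = image.shape
    hist = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            b = np.minimum((a / np.pi * bins).astype(int), bins - 1)
            for k in range(bins):
                hist[i, j, k] = m[b == k].sum()
    return hist
```

In the full detector, the flattened cell histograms would be fed to an SVM classifier over a sliding window.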
Detecting impossible changes in infancy: a three-system account
Wang, Su-hua; Baillargeon, Renée
2012-01-01
Can infants detect that an object has magically disappeared, broken apart or changed color while briefly hidden? Recent research suggests that infants detect some but not other ‘impossible’ changes; and that various contextual manipulations can induce infants to detect changes they would not otherwise detect. We present an account that includes three systems: a physical-reasoning, an object-tracking, and an object-representation system. What impossible changes infants detect depends on what object information is included in the physical-reasoning system; this information becomes subject to a principle of persistence, which states that objects can undergo no spontaneous or uncaused change. What contextual manipulations induce infants to detect impossible changes depends on complex interplays between the physical-reasoning system and the object-tracking and object-representation systems. PMID:18078778
Chen, Qiang; Chen, Yunhao; Jiang, Weiguo
2016-07-30
In the field of multiple features Object-Based Change Detection (OBCD) for very-high-resolution remotely sensed images, image objects have abundant features and feature selection affects the precision and efficiency of OBCD. Through object-based image analysis, this paper proposes a Genetic Particle Swarm Optimization (GPSO)-based feature selection algorithm to solve the optimization problem of feature selection in multiple features OBCD. We select the Ratio of Mean to Variance (RMV) as the fitness function of GPSO, and apply the proposed algorithm to the object-based hybrid multivariate alternative detection model. Two experiment cases on Worldview-2/3 images confirm that GPSO can significantly improve the speed of convergence, and effectively avoid the problem of premature convergence, relative to other feature selection algorithms. According to the accuracy evaluation of OBCD, GPSO is superior in overall accuracy (84.17% and 83.59%) and Kappa coefficient (0.6771 and 0.6314) to other algorithms. Moreover, the sensitivity analysis results show that the proposed algorithm is not easily influenced by the initial parameters, but the number of features to be selected and the size of the particle swarm would affect the algorithm. The comparison experiment results reveal that RMV is more suitable than other functions as the fitness function of the GPSO-based feature selection algorithm.
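A genetic PSO for feature selection can be sketched as a binary particle swarm whose bit-strings encode feature subsets, with a genetic mutation step to avoid premature convergence. The inertia/acceleration constants and operators below are generic textbook choices, not the paper's exact GPSO, and `fitness` stands in for an RMV-style criterion:

```python
import numpy as np

rng = np.random.default_rng(0)

def gpso_select(features, fitness, n_particles=20, n_iter=50, p_mut=0.05):
    """Binary PSO feature selection with a genetic mutation step (a sketch).

    features: (n_samples, n_features) object-feature matrix.
    fitness:  callable scoring a boolean feature mask (higher is better).
    Returns the best boolean mask found.
    """
    n_feat = features.shape[1]
    pos = rng.random((n_particles, n_feat)) < 0.5          # particle bit-strings
    vel = rng.normal(0.0, 1.0, (n_particles, n_feat))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = (0.7 * vel
               + 1.5 * r1 * (pbest.astype(float) - pos)
               + 1.5 * r2 * (gbest.astype(float) - pos))
        # sigmoid velocity -> probability of setting each bit
        pos = rng.random((n_particles, n_feat)) < 1.0 / (1.0 + np.exp(-vel))
        pos ^= rng.random((n_particles, n_feat)) < p_mut   # genetic mutation: flip bits
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest
```

With a fitness that simply counts selected features, the swarm should drift toward the all-ones mask, which makes a convenient sanity check.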
[Application of optical flow dynamic texture in land use/cover change detection].
Yan, Li; Gong, Yi-Long; Zhang, Yi; Duan, Wei
2014-11-01
In the present study, a novel change detection approach for high-resolution remote sensing images is proposed based on optical flow dynamic texture (OFDT), which automatically extracts land use/land cover change information through a dynamic description of ground-object changes. Using optical flow theory, the approach describes the gradual change process of ground objects in principle, breaking with the sudden-change assumption underlying earlier remote sensing change detection methods. Because its steps are simple, the method can be integrated into systems and software that need to find ground-object changes, such as land resource management and urban planning software. The method takes the temporal dimension between remote sensing images into account, providing a richer set of information for change detection and improving on the current situation in which most change detection methods depend mainly on spatial information. Here, optical flow dynamic texture is the basic indicator of change and is used, combined with spectral information, in support vector machine post-classification change detection on high-resolution imagery. The temporal texture considered here involves a smaller amount of data than most spatial textures, and the highly automated texture computation has only one parameter to set, easing the current burden of manual evaluation. The effectiveness of the proposed approach is evaluated with 2011 and 2012 QuickBird datasets covering Duerbert Mongolian Autonomous County of Daqing City, China. 
The effects of different optical flow smoothness coefficients on the description of ground-object changes are then analyzed in depth. The experimental result is satisfactory, with an 87.29% overall accuracy and a 0.8507 Kappa index, and the method achieves better performance than post-classification change detection using spectral information only.
Examining change detection approaches for tropical mangrove monitoring
Myint, Soe W.; Franklin, Janet; Buenemann, Michaela; Kim, Won; Giri, Chandra
2014-01-01
This study evaluated the effectiveness of different band combinations and classifiers (unsupervised, supervised, object-oriented nearest neighbor, and object-oriented decision rule) for quantifying mangrove forest change using multitemporal Landsat data. A discriminant analysis using spectra of different vegetation types determined that bands 2 (0.52 to 0.6 μm), 5 (1.55 to 1.75 μm), and 7 (2.08 to 2.35 μm) were the most effective bands for differentiating mangrove forests from surrounding land cover types. Thirty-six change maps, produced by twelve change detection approaches, were ranked by comparing their classification accuracy. The object-oriented nearest neighbor classifier produced the highest mean overall accuracy (84 percent) regardless of band combination. The automated decision rule-based approach (mean overall accuracy of 88 percent), as well as a composite of bands 2, 5, and 7 used with the unsupervised classifier and the same composite or all-band difference with the object-oriented nearest neighbor classifier, were the most effective approaches.
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang
2016-06-01
Change detection is an important task when using unmanned aerial vehicles (UAV) for video surveillance. We address changes of short time scale using observations in time distances of a few hours. Each observation (previous and current) is a short video sequence acquired by UAV in near-nadir view. Relevant changes are, e.g., recently parked or moved vehicles. Examples for non-relevant changes are parallaxes caused by 3D structures of the scene, shadow and illumination changes, and compression or transmission artifacts. In this paper we present (1) a new feature-based approach to change detection, (2) a combination with extended image differencing (Saur et al., 2014), and (3) the application to video sequences using temporal filtering. In the feature-based approach, information about local image features, e.g., corners, is extracted in both images. The label "new object" is generated at image points, where features occur in the current image and no or weaker features are present in the previous image. The label "vanished object" corresponds to missing or weaker features in the current image and present features in the previous image. This leads to two "directed" change masks and differs from image differencing where only one "undirected" change mask is extracted which combines both label types to the single label "changed object". The combination of both algorithms is performed by merging the change masks of both approaches. A color mask showing the different contributions is used for visual inspection by a human image interpreter.
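The two directed labels can be sketched as simple comparisons between co-registered feature-strength maps (e.g., corner responses) of the previous and current frames. The margin-based thresholding below is an assumed simplification of the paper's feature comparison, for illustration only:

```python
import numpy as np

def directed_change_masks(strength_prev, strength_curr, margin=0.5):
    """Two 'directed' change masks from co-registered feature-strength maps.

    'new object': a strong feature now with no or a much weaker counterpart
    before; 'vanished object' is the reverse. Image differencing, by contrast,
    would merge both cases into a single undirected 'changed' mask.
    """
    new_obj = (strength_curr > margin) & (strength_prev < strength_curr - margin)
    vanished = (strength_prev > margin) & (strength_curr < strength_prev - margin)
    return new_obj, vanished
```

Merging the two masks with an image-differencing mask, as the paper describes, would then yield the combined result shown to the image interpreter.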
Reconciling change blindness with long-term memory for objects.
Wood, Katherine; Simons, Daniel J
2017-02-01
How can we reconcile remarkably precise long-term memory for thousands of images with failures to detect changes to similar images? We explored whether people can use detailed, long-term memory to improve change detection performance. Subjects studied a set of images of objects and then performed recognition and change detection tasks with those images. Recognition memory performance exceeded change detection performance, even when a single familiar object in the postchange display consistently indicated the change location. In fact, participants were no better when a familiar object predicted the change location than when the displays consisted of unfamiliar objects. When given an explicit strategy to search for a familiar object as a way to improve performance on the change detection task, they performed no better than in a 6-alternative recognition memory task. Subjects only benefited from the presence of familiar objects in the change detection task when they had more time to view the prechange array before it switched. Once the cost of using the change detection information decreased, subjects made use of it in conjunction with memory to boost performance on the familiar-item change detection task. This suggests that even useful information will go unused if it is sufficiently difficult to extract.
Land Cover Analysis by Using Pixel-Based and Object-Based Image Classification Method in Bogor
NASA Astrophysics Data System (ADS)
Amalisana, Birohmatin; Rokhmatullah; Hernina, Revi
2017-12-01
The advantage of image classification is that it provides earth-surface information such as landcover and its time-series changes. Nowadays, pixel-based image classification is commonly performed with a variety of algorithms such as minimum distance, parallelepiped, maximum likelihood, and Mahalanobis distance. On the other hand, landcover classification can also be acquired with object-based image classification, which uses image segmentation driven by parameters such as scale, form, colour, smoothness and compactness. This research compares landcover classification results, and the change detection derived from them, between the parallelepiped pixel-based method and the object-based method. The study area is Bogor, observed over the 20 years from 1996 to 2016. This region is known for urban areas that change continuously due to rapid development, so its time-series landcover information is of particular interest.
Liang, Chun; Earl, Brian; Thompson, Ivy; Whitaker, Kayla; Cahn, Steven; Xiang, Jing; Fu, Qian-Jie; Zhang, Fawen
2016-01-01
Objective: The objectives of this study were: (1) to determine if musicians have a better ability to detect frequency changes under quiet and noisy conditions; (2) to use the acoustic change complex (ACC), a type of electroencephalographic (EEG) response, to understand the neural substrates of musician vs. non-musician difference in frequency change detection abilities. Methods: Twenty-four young normal hearing listeners (12 musicians and 12 non-musicians) participated. All participants underwent psychoacoustic frequency detection tests with three types of stimuli: tones (base frequency at 160 Hz) containing frequency changes (Stim 1), tones containing frequency changes masked by low-level noise (Stim 2), and tones containing frequency changes masked by high-level noise (Stim 3). The EEG data were recorded using tones (base frequency at 160 and 1200 Hz, respectively) containing different magnitudes of frequency changes (0, 5, and 50% changes, respectively). The late-latency evoked potential evoked by the onset of the tones (onset LAEP or N1-P2 complex) and that evoked by the frequency change contained in the tone (the acoustic change complex or ACC or N1′-P2′ complex) were analyzed. Results: Musicians significantly outperformed non-musicians in all stimulus conditions. The ACC and onset LAEP showed similarities and differences. Increasing the magnitude of frequency change resulted in increased ACC amplitudes. ACC measures were found to be significantly different between musicians (larger P2′ amplitude) and non-musicians for the base frequency of 160 Hz but not 1200 Hz. Although the peak amplitude in the onset LAEP appeared to be larger and latency shorter in musicians than in non-musicians, the difference did not reach statistical significance. The amplitude of the onset LAEP is significantly correlated with that of the ACC for the base frequency of 160 Hz. 
Conclusion: The present study demonstrated that musicians do perform better than non-musicians in detecting frequency changes in quiet and noisy conditions. The ACC and onset LAEP may involve different but overlapping neural mechanisms. Significance: This is the first study using the ACC to examine music-training effects. The ACC measures provide an objective tool for documenting musical training effects on frequency detection. PMID:27826221
A change detection method for remote sensing image based on LBP and SURF feature
NASA Astrophysics Data System (ADS)
Hu, Lei; Yang, Hao; Li, Jin; Zhang, Yun
2018-04-01
Finding changes in multi-temporal remote sensing images is important in many image applications. Under varying climate and illumination, the texture of ground objects is more stable than their gray levels in high-resolution remote sensing imagery, and the Local Binary Patterns (LBP) and Speeded Up Robust Features (SURF) texture features offer fast extraction and illumination invariance. A change detection method for matched remote sensing image pairs is presented: after dividing the image into blocks, it compares block similarity using LBP and SURF to label each block as changed or unchanged, and region growing is adopted to process the block edge zones. The experimental results show that the method can tolerate some illumination change and slight texture change of the ground objects.
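The LBP half of such a block comparison can be sketched directly: compute an LBP code per pixel, histogram the codes per block, and flag a block as changed when the histograms of the two dates are too dissimilar. The histogram-intersection similarity and the threshold below are generic choices, not the paper's exact measure:

```python
import numpy as np

def lbp_codes(gray):
    """8-neighbour Local Binary Pattern code for each interior pixel."""
    h, w = gray.shape
    centre = gray[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=int)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dr, dc) in enumerate(shifts):
        neighbour = gray[1 + dr:h - 1 + dr, 1 + dc:w - 1 + dc]
        code += (neighbour >= centre).astype(int) << bit
    return code

def block_changed(block_a, block_b, threshold=0.8):
    """Flag a block as changed when the intersection of its normalized
    LBP code histograms falls below `threshold`."""
    ha = np.bincount(lbp_codes(block_a).ravel(), minlength=256).astype(float)
    hb = np.bincount(lbp_codes(block_b).ravel(), minlength=256).astype(float)
    ha /= ha.sum()
    hb /= hb.sum()
    return float(np.minimum(ha, hb).sum()) < threshold
```

Because LBP compares each pixel only to its neighbours, a uniform brightness shift between the two acquisition dates leaves the codes, and hence the decision, unchanged.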
Nationwide Hybrid Change Detection of Buildings
NASA Astrophysics Data System (ADS)
Hron, V.; Halounova, L.
2016-06-01
The Fundamental Base of Geographic Data of the Czech Republic (hereinafter FBGD) is a national 2D geodatabase at a 1:10,000 scale with more than 100 geographic objects. This paper describes the design of the permanent updating mechanism of buildings in FBGD. The proposed procedure belongs to the category of hybrid change detection (HCD) techniques which combine pixel-based and object-based evaluation. The main sources of information for HCD are cadastral information and bi-temporal vertical digital aerial photographs. These photographs have great information potential because they contain multispectral, position and also elevation information. Elevation information represents a digital surface model (DSM) which can be obtained using the image matching technique. Pixel-based evaluation of bi-temporal DSMs enables fast localization of places with potential building changes. These coarse results are subsequently classified through the object-based image analysis (OBIA) using spectral, textural and contextual features and GIS tools. The advantage of the two-stage evaluation is the pre-selection of locations where image segmentation (a computationally demanding part of OBIA) is performed. It is not necessary to apply image segmentation to the entire scene, but only to the surroundings of detected changes, which contributes to significantly faster processing and lower hardware requirements. The created technology is based on open-source software solutions that allow easy portability on multiple computers and parallelization of processing. This leads to significant savings of financial resources which can be expended on the further development of FBGD.
Aslami, Farnoosh; Ghorbani, Ardavan
2018-06-03
In this study, land-use/land-cover (LULC) change in the Ardabil, Namin, and Nir counties, in the Ardabil province in the northwest of Iran, was detected using an object-based method. Landsat images including Thematic Mapper (TM), Landsat Enhanced Thematic Mapper Plus (ETM+), and Operational Land Imager (OLI) were used. Preprocessing methods, including geometric and radiometric correction, and topographic normalization were performed. Image processing was conducted according to object-based image analysis using the nearest neighbor algorithm. An accuracy assessment was conducted using overall accuracy and Kappa statistics. Results show that maps obtained from images for 1987, 2002, and 2013 had an overall accuracy of 91.76, 91.06, and 93.00%, and a Kappa coefficient of 0.90, 0.83, and 0.91, respectively. Change detection between 1987 and 2013 shows that most of the rangelands (97,156.6 ha) have been converted to dry farming; moreover, residential and other urban land uses have also increased. The largest change in land use has occurred for irrigated farming, rangelands, and dry farming, of which approximately 3539.8, 3086.9, and 2271.9 ha, respectively, have given way to urban land use for each of the studied years.
NASA Astrophysics Data System (ADS)
Matikainen, Leena; Karila, Kirsi; Hyyppä, Juha; Litkey, Paula; Puttonen, Eetu; Ahokas, Eero
2017-06-01
During the last 20 years, airborne laser scanning (ALS), often combined with passive multispectral information from aerial images, has shown its high feasibility for automated mapping processes. The main benefits have been achieved in the mapping of elevated objects such as buildings and trees. Recently, the first multispectral airborne laser scanners have been launched, and active multispectral information is for the first time available for 3D ALS point clouds from a single sensor. This article discusses the potential of this new technology in map updating, especially in automated object-based land cover classification and change detection in a suburban area. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from an object-based random forests analysis suggest that the multispectral ALS data are very useful for land cover classification, considering both elevated classes and ground-level classes. The overall accuracy of the land cover classification results with six classes was 96% compared with validation points. The classes under study included building, tree, asphalt, gravel, rocky area and low vegetation. Compared to classification of single-channel data, the main improvements were achieved for ground-level classes. According to feature importance analyses, multispectral intensity features based on several channels were more useful than those based on one channel. Automatic change detection for buildings and roads was also demonstrated by utilising the new multispectral ALS data in combination with old map vectors. In change detection of buildings, an old digital surface model (DSM) based on single-channel ALS data was also used. Overall, our analyses suggest that the new data have high potential for further increasing the automation level in mapping. 
Unlike passive aerial imaging commonly used in mapping, the multispectral ALS technology is independent of external illumination conditions, and there are no shadows on intensity images produced from the data. These are significant advantages in developing automated classification and change detection procedures.
The fate of object memory traces under change detection and change blindness.
Busch, Niko A
2013-07-03
Observers often fail to detect substantial changes in a visual scene. This so-called change blindness is often taken as evidence that visual representations are sparse and volatile. This notion rests on the assumption that the failure to detect a change implies that representations of the changing objects are lost altogether. However, recent evidence suggests that under change blindness, object memory representations may be formed and stored, but not retrieved. This study investigated the fate of object memory representations when changes go unnoticed. Participants were presented with scenes consisting of real world objects, one of which changed on each trial, while recording event-related potentials (ERPs). Participants were first asked to localize where the change had occurred. In an additional recognition task, participants then discriminated old objects, either from the pre-change or the post-change scene, from entirely new objects. Neural traces of object memories were studied by comparing ERPs for old and novel objects. Participants performed poorly in the detection task and often failed to recognize objects from the scene, especially pre-change objects. However, a robust old/novel effect was observed in the ERP, even when participants were change blind and did not recognize the old object. This implicit memory trace was found both for pre-change and post-change objects. These findings suggest that object memories are stored even under change blindness. Thus, visual representations may not be as sparse and volatile as previously thought. Rather, change blindness may point to a failure to retrieve and use these representations for change detection. Copyright © 2013 Elsevier B.V. All rights reserved.
Urban Change Detection of Pingtan City based on Bi-temporal Remote Sensing Images
NASA Astrophysics Data System (ADS)
Degang, JIANG; Jinyan, XU; Yikang, GAO
2017-02-01
In this paper, a pair of SPOT 5-6 images with a resolution of 0.5 m is selected. An object-oriented classification method is applied to the two images, and five classes of ground features are identified: man-made objects, farmland, forest, water body and unutilized land. An auxiliary ASTER GDEM is used to improve the classification accuracy. Change detection based on the classification results is then performed, and an accuracy assessment is carried out; satisfactory results are obtained. The results show that great changes in Pingtan city have been detected, namely the expansion of the city area and the increased density of man-made buildings, roads and other infrastructure following the establishment of the Pingtan comprehensive experimental zone. A wide range of open sea area along the island's coastal zones has been reclaimed for port and CBD construction.
Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions
NASA Astrophysics Data System (ADS)
Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.
2016-06-01
In this paper we propose a new approach for change detection and moving-object detection in videos with unstable, abrupt illumination changes. This approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantages for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way and does not have the drawbacks that exist in models assuming different color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.
NASA Astrophysics Data System (ADS)
Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad
2018-06-01
Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the results of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed by an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
Pattern-histogram-based temporal change detection using personal chest radiographs
NASA Astrophysics Data System (ADS)
Ugurlu, Yucel; Obi, Takashi; Hasegawa, Akira; Yamaguchi, Masahiro; Ohyama, Nagaaki
1999-05-01
An accurate and reliable detection of temporal changes from a pair of images is of considerable interest in medical science. Traditional registration and subtraction techniques can be applied to extract temporal differences when the object is rigid or corresponding points are obvious. However, in radiological imaging, the loss of depth information, the elasticity of the object, the absence of clearly defined landmarks and three-dimensional positioning differences constrain the performance of conventional registration techniques. In this paper, we propose a new method to detect interval changes accurately without using an image registration technique. The method is based on the construction of a so-called pattern histogram and a comparison procedure. The pattern histogram is a graphic representation of the frequency counts of all allowable patterns in the multi-dimensional pattern vector space. The K-means algorithm is employed to successively partition the pattern vector space. Any difference between the pattern histograms implies that different patterns are involved in the scenes. In our experiment, a pair of chest radiographs showing pneumoconiosis is employed and the changing histogram bins are visualized on both images. We found that the method can be used as an alternative means of temporal change detection, particularly when precise image registration is not available.
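The pipeline this abstract describes (cluster patch vectors with K-means to partition the pattern vector space, count the patterns occurring in each image, and compare the two histograms) can be illustrated with a toy reconstruction. This is only a sketch: the 3x3 patch size, the cluster count, and the plain k-means loop are assumptions, not details from the paper.

```python
import numpy as np

def extract_patches(img, size=3):
    """Slide a size x size window over the image; each patch is one pattern vector."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)], dtype=float)

def kmeans(x, k, iters=20, seed=0):
    """Plain k-means used to partition the pattern vector space into k cells."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = x[labels == c].mean(axis=0)
    return centers

def pattern_histogram(img, centers, size=3):
    """Frequency counts of each allowable pattern (nearest cluster) in the image."""
    x = extract_patches(img, size)
    labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return np.bincount(labels, minlength=len(centers))

def changed_bins(img_a, img_b, k=8, size=3):
    """Histogram bins whose counts differ imply different patterns in the scenes."""
    patches = np.vstack([extract_patches(img_a, size), extract_patches(img_b, size)])
    centers = kmeans(patches, k)
    return np.abs(pattern_histogram(img_a, centers, size)
                  - pattern_histogram(img_b, centers, size))
```

Because only histogram counts are compared, no spatial registration of the two radiographs is needed, which is the point of the method.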
An Investigation of Automatic Change Detection for Topographic Map Updating
NASA Astrophysics Data System (ADS)
Duncan, P.; Smit, J.
2012-08-01
Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing these changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated detects changes through image classification as well as spatial analysis, and is focused on urban landscapes. The major data inputs to this study are high resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large scale land-use mapping and that object-oriented approaches hold more promise. Even with object-oriented image classification, broad-scale generalization of techniques has provided inconsistent results. A solution may lie in a hybrid approach of pixel- and object-oriented techniques.
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
NASA Astrophysics Data System (ADS)
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects; however, its performance degraded for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes.
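Of the algorithms compared, Otsu's method is the most widely used; as a concrete illustration, here is a minimal sketch of Otsu thresholding driving a change mask on a frame difference. Pairing it with a simple absolute difference image is an assumption for illustration, not a detail from the paper.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: pick the threshold that maximizes between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()                 # probability mass of each bin
    omega = np.cumsum(p)                  # class-0 probability up to each bin
    mu = np.cumsum(p * np.arange(bins))   # cumulative mean (in bin-index units)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)             # ends are 0/0, so ignore NaNs
    return edges[k + 1]                   # upper edge of the optimal bin

def change_mask(frame_t0, frame_t1):
    """Flag pixels whose absolute difference exceeds the Otsu threshold."""
    diff = np.abs(frame_t1.astype(float) - frame_t0.astype(float))
    return diff > otsu_threshold(diff.ravel())
```

Kapur's entropy-based criterion would slot into the same structure by replacing the between-class-variance objective.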
Zelinsky, G J
2001-02-01
Search, memory, and strategy constraints on change detection were analyzed in terms of oculomotor variables. Observers viewed a repeating sequence of three displays (Scene 1-->Mask-->Scene 2-->Mask...) and indicated the presence-absence of a changing object between Scenes 1 and 2. Scenes depicted real-world objects arranged on a surface. Manipulations included set size (one, three, or nine items) and the orientation of the changing objects (similar or different). Eye movements increased with the number of potentially changing objects in the scene, with this set size effect suggesting a relationship between change detection and search. A preferential fixation analysis determined that memory constraints are better described by the operation comparing the pre- and postchange objects than as a capacity limitation, and a scanpath analysis revealed a change detection strategy relying on the peripheral encoding and comparison of display items. These findings support a signal-in-noise interpretation of change detection in which the signal varies with the similarity of the changing objects and the noise is determined by the distractor objects and scene background.
Object-Based Classification and Change Detection of Hokkaido, Japan
NASA Astrophysics Data System (ADS)
Park, J. G.; Harada, I.; Kwak, Y.
2016-06-01
Topography and geology are factors that characterize the distribution of natural vegetation. Topographic contour particularly influences the living conditions of plants, such as soil moisture, sunlight, and windiness. Vegetation associations with similar characteristics are present in locations with similar topographic conditions, unless natural disturbances such as landslides and forest fires or artificial disturbances such as deforestation and man-made plantation bring about changes in those conditions. We developed a vegetation map of Japan using an object-based segmentation approach with topographic information (elevation, slope, slope direction) that is closely related to the distribution of vegetation. The results show that object-based classification is more effective than pixel-based classification for producing a vegetation map.
Caries Detection Methods Based on Changes in Optical Properties between Healthy and Carious Tissue
Karlsson, Lena
2010-01-01
A conservative, noninvasive or minimally invasive approach to clinical management of dental caries requires diagnostic techniques capable of detecting and quantifying lesions at an early stage, when progression can be arrested or reversed. Objective evidence of initiation of the disease can be detected in the form of distinct changes in the optical properties of the affected tooth structure. Caries detection methods based on changes in a specific optical property are collectively referred to as optically based methods. This paper presents a simple overview of the feasibility of three such technologies for quantitative or semiquantitative assessment of caries lesions. Two of the techniques are well-established: quantitative light-induced fluorescence, which is used primarily in caries research, and laser-induced fluorescence, a commercially available method used in clinical dental practice. The third technique, based on near-infrared transillumination of dental enamel, is in the developmental stages. PMID:20454579
NASA Astrophysics Data System (ADS)
Kanberoglu, Berkay; Frakes, David
2017-04-01
The extraction of objects from advanced geospatial intelligence (AGI) products based on synthetic aperture radar (SAR) imagery is complicated by a number of factors. For example, accurate detection of temporal changes represented in two-color multiview (2CMV) AGI products can be challenging because of speckle noise susceptibility and false positives that result from small orientation differences between objects imaged at different times. These cases of apparent motion can result in 2CMV detection, but they obviously differ greatly in terms of significance. In investigating the state of the art in SAR image processing, we have found that differentiating between these two general cases is a problem that has not been well addressed. We propose a framework of methods to address these problems. To detect temporal changes while reducing the number of false positives, we propose using adaptive object intensity and area thresholding in conjunction with relaxed brightness optical flow algorithms that track the motion of objects across time in small regions of interest. The proposed framework for distinguishing between actual motion and misregistration can lead to more accurate and meaningful change detection and improve object extraction from a SAR AGI product. Results demonstrate the ability of our techniques to reduce false positives by up to 60%.
Nishiyama, Megumi; Kawaguchi, Jun
2014-11-01
To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance. Copyright © 2014 Elsevier Inc. All rights reserved.
Documentation and Detection of Colour Changes of Bas-Reliefs Using Close Range Photogrammetry
NASA Astrophysics Data System (ADS)
Malinverni, E. S.; Pierdicca, R.; Sturari, M.; Colosi, F.; Orazi, R.
2017-05-01
The digitization of complex buildings, findings or bas-reliefs can strongly facilitate the work of archaeologists, mainly for in-depth analysis tasks. However, while new visualization techniques ease the study phase, a classical naked-eye approach to determining changes or surface alteration has several drawbacks. The research work described in these pages aims to provide experts with a workflow for the evaluation of alterations (e.g. colour decay or surface alterations), allowing a more rapid and objective monitoring of monuments. More specifically, a pipeline of work has been tested in order to evaluate the colour variation between surfaces acquired at different epochs. The introduction of reliable change detection tools in the archaeological domain is needed; in fact, the most widespread practice among archaeologists and practitioners is to perform a traditional monitoring of surfaces consisting of three main steps: production of a hand-made map based on a subjective analysis, selection of a sub-set of regions of interest, and removal of small portions of surface for in-depth analysis conducted in the laboratory. To overcome this risky and time-consuming process, a digital automatic change detection procedure represents a turning point. To this end, automatic classification has been carried out according to two approaches: a pixel-based and an object-based method. Pixel-based classification identifies the classes by means of the spectral information provided by each pixel of the original bands. The object-based approach operates on sets of pixels (objects/regions) grouped together by means of an image segmentation technique. The methodology was tested on the bas-reliefs of a temple located in Peru, named Huaca de la Luna. Although the data sources were collected with unplanned surveys, the workflow proved to be a valuable solution for understanding the main changes over time.
Tsai, Yu Hsin; Stow, Douglas; Weeks, John
2013-01-01
The goal of this study was to map and quantify the number of newly constructed buildings in Accra, Ghana between 2002 and 2010 based on high spatial resolution satellite image data. Two semi-automated feature detection approaches for detecting and mapping newly constructed buildings from QuickBird very high spatial resolution satellite imagery were analyzed: (1) post-classification comparison; and (2) bi-temporal layerstack classification. Feature Analyst software, based on a spatial contextual classifier, and ENVI Feature Extraction, which uses a true object-based image analysis approach of image segmentation and segment classification, were evaluated. Final map products representing new building objects were compared and assessed for accuracy using two object-based accuracy measures, completeness and correctness. The bi-temporal layerstack method generated more accurate results than the post-classification comparison method due to less confusion with background objects. The spectral/spatial contextual approach (Feature Analyst) outperformed the true object-based feature delineation approach (ENVI Feature Extraction) due to its ability to more reliably delineate individual buildings of various sizes. Semi-automated, object-based detection followed by manual editing appears to be a reliable and efficient approach for detecting and enumerating new building objects. A bivariate regression analysis was performed using neighborhood-level estimates of new building density regressed on a census-derived measure of socio-economic status, yielding an inverse relationship with R2 = 0.31 (n = 27; p = 0.00). The primary utility of the new building delineation results is to support spatial analyses of land cover, land use and demographic change. PMID:24415810
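The bi-temporal layerstack idea (concatenate both dates' bands into one feature vector per pixel and classify the stack directly into change / no-change) can be sketched with a stand-in nearest-centroid classifier. The study itself used Feature Analyst and ENVI Feature Extraction; the classifier, the synthetic labels, and the per-pixel formulation here are assumptions for illustration.

```python
import numpy as np

def nearest_centroid_fit(features, labels):
    """Train a minimal nearest-centroid classifier (a stand-in for the
    commercial classifiers evaluated in the study)."""
    classes = np.unique(labels)
    centroids = np.array([features[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(features, classes, centroids):
    """Assign each feature vector to the class of its nearest centroid."""
    d = ((features[:, None, :] - centroids[None]) ** 2).sum(-1)
    return classes[np.argmin(d, axis=1)]

def layerstack_change(bands_t0, bands_t1, train_idx, train_labels):
    """Bi-temporal layerstack: stack the two dates' bands per pixel and
    classify the stacked vectors directly into change / no-change."""
    stacked = np.concatenate([bands_t0, bands_t1], axis=1)
    classes, centroids = nearest_centroid_fit(stacked[train_idx], train_labels)
    return nearest_centroid_predict(stacked, classes, centroids)
```

Because the classifier sees both dates at once, errors in a single-date map cannot propagate into the change map the way they do in post-classification comparison, which is the advantage the abstract reports.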
Rapid Change Detection Algorithm for Disaster Management
NASA Astrophysics Data System (ADS)
Michel, U.; Thunig, H.; Ehlers, M.; Reinartz, P.
2012-07-01
This paper focuses on change detection applications in areas where catastrophic events have caused rapid destruction, especially of man-made objects. Standard methods for automated change detection proved insufficient, so a new method was developed and tested. The presented method allows fast detection and visualization of change in areas of crisis or catastrophe. New remote sensing methods are often developed without user-oriented considerations, so organizations and authorities are unable to use them for lack of remote sensing know-how; therefore, a semi-automated procedure was developed. Within a transferable framework, the developed algorithm can be applied to a set of remote sensing data across different investigation areas. The results presented are based on several case studies. Through a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated Temporal Change Index (TCI), only panchromatic datasets are used to extract areas that were destroyed, areas that were not affected and, in addition, areas where rebuilding has already started.
An attentional bias for LEGO® people using a change detection task: Are LEGO® people animate?
LaPointe, Mitchell R P; Cullen, Rachael; Baltaretu, Bianca; Campos, Melissa; Michalski, Natalie; Sri Satgunarajah, Suja; Cadieux, Michelle L; Pachai, Matthew V; Shore, David I
2016-09-01
Animate objects have been shown to elicit attentional priority in a change detection task. This benefit has been seen for both human and nonhuman animals compared with inanimate objects. One explanation for these results is based on the importance animate objects have had over the course of our species' history. In the present set of experiments, we present stimuli that could be perceived as animate but with which our distant ancestors would have had no experience, so that natural selection could have exerted no direct pressure on their prioritization. In the first experiment, we compared LEGO® "people" with LEGO "nonpeople" in a change detection task. In a second experiment, we attempted to control for the heterogeneity of the nonanimate objects by using LEGO blocks matched in size and colour to the LEGO people. In the third experiment, we occluded the faces of the LEGO people to control for facial pattern recognition. In the final two experiments, we attempted to obscure high-level categorical information processing of the stimuli by inverting and blurring the scenes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
An improved NSGA-II algorithm for mixed model assembly line balancing
NASA Astrophysics Data System (ADS)
Wu, Yongming; Xu, Yanxia; Luo, Lifei; Zhang, Han; Zhao, Xudong
2018-05-01
Aiming at the problems of assembly line balancing and path optimization for material vehicles in a mixed model manufacturing system, a multi-objective mixed model assembly line (MMAL) model is established based on the optimization objectives, influencing factors and constraints. For this situation, an improved NSGA-II algorithm based on an ecological evolution strategy is designed. An environment self-detecting operator, used to detect whether the environment has changed, is adopted in the algorithm. Finally, the effectiveness of the proposed model and algorithm is verified by examples from a concrete mixing system.
NASA Astrophysics Data System (ADS)
Zhang, Caiyun; Smith, Molly; Lv, Jie; Fang, Chaoyang
2017-05-01
Mapping plant communities and documenting their changes is critical to the on-going Florida Everglades restoration project. In this study, a framework was designed to map dominant vegetation communities and inventory their changes in the Florida Everglades Water Conservation Area 2A (WCA-2A) using time series Landsat images spanning 1996-2016. The object-based change analysis technique was incorporated into the framework. A hybrid pixel/object-based change detection approach was developed to effectively collect training samples for historical images with sparse reference data. An object-based quantification approach was also developed to assess the expansion or reduction of a specific class, such as cattail (an invasive species in the Everglades), from the object-based classifications of two dates of imagery. The study confirmed results in the literature that cattail expanded substantially during 1996-2007. It also revealed that cattail expansion was constrained after 2007. Application of time series Landsat data is valuable for documenting vegetation changes in the WCA-2A impoundment. The digital techniques developed will benefit global wetland mapping and change analysis in general, and the Florida Everglades WCA-2A in particular.
NASA Astrophysics Data System (ADS)
Kaplan, M. L.; van Cleve, J. E.; Alcock, C.
2003-12-01
Detection and characterization of the small bodies of the outer solar system present unique challenges to terrestrially based sensing systems, principally the inverse fourth-power decrease of reflected and thermal signals with target distance from the Sun. These limits are surpassed by new techniques [1,2,3] employing star-object occultation event sensing, which are capable of detecting sub-kilometer objects in the Kuiper Belt and Oort cloud. This poster will present an instrument and space mission concept based on adaptations of the NASA Discovery Kepler program currently in development at Ball Aerospace and Technologies Corp. Instrument technologies to enable this space science mission are being pursued and will be described. In particular, key attributes of an optimized payload include the ability to provide: 1) coarse spectral resolution (using an objective spectrometer approach); 2) wide FOV, simultaneous object monitoring (up to 150,000 stars, using select data regions within a large focal plane mosaic); 3) fast temporal frame integration and readout architectures (10 to 50 ms for each monitored object); and 4) real-time, intelligent change detection processing (to limit raw data volumes). The Minor Body Surveyor combines the focal plane and processing technology elements into a densely packaged format to address general space mission constraints of mass, power consumption, and telemetry resources. Mode flexibility is incorporated into the real-time processing elements to allow for either temporal (occultations) or spatial (moving targets) change detection. In addition, a basic image capture mode is provided for general pointing and field reference measurements. The overall space mission architecture is described as well. [1] M. E. Bailey. Can 'Invisible' Bodies be Observed in the Solar System? Nature, 259:290, January 1976. [2] T. S. Axelrod, C. Alcock, K. H. Cook, and H.-S. Park. A Direct Census of the Oort Cloud with a Robotic Telescope. In ASP Conf. Ser. 34: Robotic Telescopes in the 1990s, pages 171-181, 1992. [3] F. Roques and M. Moncuquet. A Detection Method for Small Kuiper Belt Objects: The Search for Stellar Occultations. Icarus, 147:530-544, October 2000.
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, which has been studied extensively in the visual sense. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Detecting Lateral Motion using Light's Orbital Angular Momentum.
Cvijetic, Neda; Milione, Giovanni; Ip, Ezra; Wang, Ting
2015-10-23
Interrogating an object with a light beam and analyzing the scattered light can reveal kinematic information about the object, which is vital for applications ranging from autonomous vehicles to gesture recognition and virtual reality. We show that by analyzing the change in the orbital angular momentum (OAM) of a tilted light beam eclipsed by a moving object, lateral motion of the object can be detected in an arbitrary direction using a single light beam and without object image reconstruction. We observe OAM spectral asymmetry that corresponds to the lateral motion direction along an arbitrary axis perpendicular to the plane containing the light beam and OAM measurement axes. These findings extend OAM-based remote sensing to detection of non-rotational qualities of objects and may also have extensions to other electromagnetic wave regimes, including radio and sound.
Detecting Lateral Motion using Light’s Orbital Angular Momentum
Cvijetic, Neda; Milione, Giovanni; Ip, Ezra; Wang, Ting
2015-01-01
Interrogating an object with a light beam and analyzing the scattered light can reveal kinematic information about the object, which is vital for applications ranging from autonomous vehicles to gesture recognition and virtual reality. We show that by analyzing the change in the orbital angular momentum (OAM) of a tilted light beam eclipsed by a moving object, lateral motion of the object can be detected in an arbitrary direction using a single light beam and without object image reconstruction. We observe OAM spectral asymmetry that corresponds to the lateral motion direction along an arbitrary axis perpendicular to the plane containing the light beam and OAM measurement axes. These findings extend OAM-based remote sensing to detection of non-rotational qualities of objects and may also have extensions to other electromagnetic wave regimes, including radio and sound. PMID:26493681
A Dual-Process Account of Auditory Change Detection
ERIC Educational Resources Information Center
McAnally, Ken I.; Martin, Russell L.; Eramudugolla, Ranmalee; Stuart, Geoffrey W.; Irvine, Dexter R. F.; Mattingley, Jason B.
2010-01-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed…
The detection of 'virtual' objects using echoes by humans: Spectral cues.
Rowan, Daniel; Papadopoulos, Timos; Archer, Lauren; Goodhew, Amanda; Cozens, Hayley; Lopez, Ricardo Guzman; Edwards, David; Holmes, Hannah; Allen, Robert
2017-07-01
Some blind people use echoes to detect discrete, silent objects to support their spatial orientation/navigation, independence, safety and wellbeing. The acoustical features that people use for this are not well understood. Listening to changes in spectral shape due to the presence of an object could be important for object detection and avoidance, especially at short range, although it is currently not known whether it is possible with echolocation-related sounds. Bands of noise were convolved with recordings of binaural impulse responses of objects in an anechoic chamber to create 'virtual objects', which were analysed and played to sighted and blind listeners inexperienced in echolocation. The sounds were also manipulated to remove cues unrelated to spectral shape. Most listeners could accurately detect hard flat objects using changes in spectral shape. The useful spectral changes for object detection occurred above approximately 3 kHz, as with object localisation. However, energy in the sounds below 3 kHz was required to exploit changes in spectral shape for object detection, whereas energy below 3 kHz impaired object localisation. Further recordings showed that the spectral changes were diminished by room reverberation. While good high-frequency hearing is generally important for echolocation, the optimal echo-generating stimulus will probably depend on the task. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
A service relation model for web-based land cover change detection
NASA Astrophysics Data System (ADS)
Xing, Huaqiao; Chen, Jun; Wu, Hao; Zhang, Jun; Li, Songnian; Liu, Boyu
2017-10-01
Change detection with remotely sensed imagery is a critical step in land cover monitoring and updating. Although a variety of algorithms and models have been developed, none of them is universal for all cases. The selection of appropriate algorithms and the construction of processing workflows depend largely on expert knowledge of the "algorithm-data" relations between change detection algorithms and the imagery data used. This paper presents a service relation model for land cover change detection that integrates experts' knowledge of these "algorithm-data" relations into web-based geo-processing. The "algorithm-data" relations are mapped into a set of web service relations through the analysis of functional and non-functional service semantics. These service relations are further classified into three different levels: the interface, behavior and execution levels. A service relation model is then established using the Object and Relation Diagram (ORD) approach to represent the multi-granularity services and their relations for change detection. A set of semantic matching rules is built and used for deriving on-demand change detection service chains from the service relation model. A web-based prototype system was developed in the .NET development environment, which encapsulates nine change detection and pre-processing algorithms and represents their service relations as an ORD. Three test areas from Shandong and Hebei provinces, China, with different imagery conditions were selected for online change detection experiments, and the results indicate that on-demand service chains can be generated according to different users' demands.
Robust skin color-based moving object detection for video surveillance
NASA Astrophysics Data System (ADS)
Kaliraj, Kalirajan; Manimaran, Sudha
2016-07-01
Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex conditions. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed, and the features are classified into foregrounds and backgrounds using a Bayesian skin color classifier. The foreground skin regions are localized by a connected-component labeling process. The localized foreground skin regions are then confirmed as targets by verifying their region properties, and nontarget regions are rejected using the Euler method. Finally, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The evaluation shows that the proposed algorithm works well under slowly varying illumination, target rotation, scaling, and fast and abrupt motion changes.
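The skin-colour stage described above (YCrCb conversion followed by Otsu global thresholding) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the BT.601 conversion coefficients and the choice of thresholding the Cr channel are assumptions.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """Convert an (H, W, 3) uint8 RGB image to YCrCb (ITU-R BT.601)."""
    rgb = rgb.astype(np.float64)
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cr = (rgb[..., 0] - y) * 0.713 + 128.0
    cb = (rgb[..., 2] - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def otsu_threshold(channel):
    """Global Otsu threshold: pick the level maximizing between-class variance."""
    hist, _ = np.histogram(channel.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

def skin_mask(rgb):
    """Binary mask of candidate skin pixels via Otsu on the Cr channel."""
    cr = np.clip(rgb_to_ycrcb(rgb)[..., 1], 0, 255).astype(np.uint8)
    return cr > otsu_threshold(cr)
```

The connected-component labeling and Bayesian classification stages would follow on this mask.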
Volumetric Security Alarm Based on a Spherical Ultrasonic Transducer Array
NASA Astrophysics Data System (ADS)
Sayin, Umut; Scaini, Davide; Arteaga, Daniel
Most existing alarm systems depend on physical or visual contact, and the detection area is often limited by the type of transducer, creating blind spots. We propose a truly volumetric alarm system that can detect any movement in the intrusion area, based on monitoring the change over time of the impulse response of the room, which acts as an acoustic footprint. The device relies on an omnidirectional ultrasonic transducer array emitting sweep signals to measure the impulse response at short intervals. Any change in the room conditions is monitored through a correlation function. The sensitivity of the alarm to different objects and environments depends on the sweep duration, sweep bandwidth, and sweep interval. Successful detection of intrusions also depends on the size of the monitored area and requires adjustment of the emitted ultrasound power. Strong air flow affects the performance of the alarm; a method for separating moving objects from strong air flow is devised using adaptive thresholding of the correlation function over a series of impulse response measurements. The alarm system can also be used for fire detection, since air flow generated by heating objects differs from the random nature of ambient air flow. Several measurements were made to test the integrity of the alarm in rooms ranging from 834 to 2080 m³ with irregular geometries and various objects. The proposed system can efficiently detect intrusion provided that adequate emitting power is supplied.
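The core detection logic, correlating successive impulse responses and adaptively thresholding the correlation against a sliding baseline, might look like this minimal sketch. The window size and margin are illustrative parameters, not values from the paper.

```python
import numpy as np
from collections import deque

def normalized_correlation(ir_ref, ir_new):
    """Peak normalized cross-correlation between two impulse responses."""
    a = (ir_ref - ir_ref.mean()) / (ir_ref.std() + 1e-12)
    b = (ir_new - ir_new.mean()) / (ir_new.std() + 1e-12)
    return float(np.correlate(a, b, mode="full").max() / len(a))

class VolumetricAlarm:
    """Flag an intrusion when the room's acoustic footprint changes.

    The threshold adapts to slow drift (e.g. air flow) by tracking the
    median correlation over a sliding window of recent measurements."""

    def __init__(self, window=20, margin=0.15):
        self.history = deque(maxlen=window)
        self.margin = margin

    def update(self, ir_ref, ir_new):
        c = normalized_correlation(ir_ref, ir_new)
        baseline = np.median(self.history) if self.history else 1.0
        intrusion = c < baseline - self.margin
        self.history.append(c)
        return intrusion
```

A real deployment would derive `ir_new` by deconvolving the recorded sweep, and would tune the margin per room.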
The Benefit of Surface Uniformity for Encoding Boundary Features in Visual Working Memory
ERIC Educational Resources Information Center
Kim, Sung-Ho; Kim, Jung-Oh
2011-01-01
Using a change detection paradigm, the present study examined an object-based encoding benefit in visual working memory (VWM) for two boundary features (two orientations in Experiments 1-2 and two shapes in Experiments 3-4) assigned to a single object. Participants remembered more boundary features when they were conjoined into a single object of…
NASA Astrophysics Data System (ADS)
Li, S.; Zhang, S.; Yang, D.
2017-09-01
Remote sensing images are particularly well suited for analysis of land cover change. In this paper, we present a new framework for detection of changing land cover using satellite imagery. Morphological features and a multi-index approach are used to extract typical objects from the imagery, including vegetation, water, bare land, buildings, and roads. Our method, based on connected domains, differs from traditional methods: it uses image segmentation to extract morphological features, while the enhanced vegetation index (EVI) and the normalized difference water index (NDWI) are used to extract vegetation and water, and a fragmentation index is used to correct the water extraction results. HSV transformation and threshold segmentation are used to extract and remove the effects of shadows on the extraction results. Change detection is then performed on these results. One advantage of the proposed framework is that semantic information is extracted automatically using low-level morphological features and indexes. Another is that the proposed method detects specific types of change without any training samples. A test on ZY-3 images demonstrates that our framework has a promising capability to detect change.
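The two indices named above have standard definitions; a minimal sketch of extracting water and vegetation with NDWI (McFeeters' form) and EVI (MODIS coefficients) follows. The thresholds are illustrative, not the paper's values.

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index: high values indicate water."""
    return (green - nir) / (green + nir + 1e-12)

def evi(nir, red, blue):
    """Enhanced Vegetation Index (MODIS coefficients): high values indicate vegetation."""
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def classify(green, red, blue, nir, water_t=0.3, veg_t=0.4):
    """Coarse per-pixel labels from reflectance bands: 0 = other, 1 = water, 2 = vegetation."""
    labels = np.zeros(green.shape, dtype=np.uint8)
    labels[ndwi(green, nir) > water_t] = 1
    labels[(labels == 0) & (evi(nir, red, blue) > veg_t)] = 2
    return labels
```

Change detection would then compare the label maps produced for the two acquisition dates.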
Woodman, Geoffrey F.; Vogel, Edward K.; Luck, Steven J.
2012-01-01
Many recent studies of visual working memory have used change-detection tasks in which subjects view sequential displays and are asked to report whether they are identical or if one object has changed. A key question is whether the memory system used to perform this task is sufficiently flexible to detect changes in object identity independent of spatial transformations, but previous research has yielded contradictory results. To address this issue, the present study compared standard change-detection tasks with tasks in which the objects varied in size or position between successive arrays. Performance was nearly identical across the standard and transformed tasks unless the task implicitly encouraged spatial encoding. These results resolve the discrepancies in prior studies and demonstrate that the visual working memory system can detect changes in object identity across spatial transformations. PMID:22287933
Cyber-Physical Attacks With Control Objectives
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-08-18
This work studies attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
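The claim that dynamic programming reduces the optimal strategy to a linear feedback mirrors the classical finite-horizon LQR result. As a generic illustration (not the paper's attacker-specific cost), the backward Riccati recursion that produces the time-varying feedback gains is:

```python
import numpy as np

def dp_linear_feedback(A, B, Q, R, Qf, horizon):
    """Backward dynamic-programming (Riccati) recursion for x+ = Ax + Bu with
    stage cost x'Qx + u'Ru and terminal cost x'Qf x; returns the time-varying
    gains K_t such that the optimal input is u_t = -K_t x_t."""
    P = Qf
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # gains[0] applies at the first time step
```

In the paper's setting the state would be replaced by the attacker's state estimate and the cost would trade off target tracking against a detection-avoidance term.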
Cyber-Physical Attacks With Control Objectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
This work studies attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
NASA Astrophysics Data System (ADS)
Engström, Philip; Larsson, Hâkan; Letalick, Dietmar
2014-05-01
An improvised explosive device (IED) is a bomb constructed and deployed in a non-standard manner. Improvised means that the bomb maker used whatever materials were at hand, making IEDs very hard to predict and detect. Nevertheless, the manner in which IEDs are deployed and used, for example as roadside bombs, follows certain patterns. One possible approach for early warning is to record the surroundings when it is safe and use this as reference data for change detection. In this paper a LADAR-based system for IED detection is presented. The idea is to measure the area in front of the vehicle while driving and compare this to the previously recorded reference data. By detecting new, missing, or changed objects, the system can make the driver aware of probable threats.
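Comparing a live scan against recorded reference data amounts to flagging points that have no nearby reference counterpart. A brute-force sketch of that comparison is below; the distance tolerance is an assumed parameter, and a production system would use a spatial index rather than a dense distance matrix.

```python
import numpy as np

def changed_points(reference, current, tol=0.2):
    """Return points of `current` (M, 3) with no point of `reference` (N, 3)
    within `tol` metres; these are candidate new/changed objects."""
    # (M, 1, 3) - (1, N, 3) -> (M, N) pairwise Euclidean distances
    d = np.linalg.norm(current[:, None, :] - reference[None, :, :], axis=2)
    return current[d.min(axis=1) > tol]
```

Missing objects can be found symmetrically by swapping the roles of the two clouds.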
NASA Astrophysics Data System (ADS)
Tehsin, Sara; Rehman, Saad; Awan, Ahmad B.; Chaudry, Qaiser; Abbas, Muhammad; Young, Rupert; Asif, Afia
2016-04-01
Sensitivity to the variations in the reference image is a major concern when recognizing target objects. A combinational framework of correlation filters and logarithmic transformation has been previously reported to resolve this issue alongside catering for scale and rotation changes of the object in the presence of distortion and noise. In this paper, we have extended the work to include the influence of different logarithmic bases on the resultant correlation plane. The meaningful changes in correlation parameters along with contraction/expansion in the correlation plane peak have been identified under different scenarios. Based on our research, we propose some specific log bases to be used in logarithmically transformed correlation filters for achieving suitable tolerance to different variations. The study is based upon testing a range of logarithmic bases for different situations and finding an optimal logarithmic base for each particular set of distortions. Our results show improved correlation and target detection accuracies.
Beanland, Vanessa; Filtness, Ashleigh J; Jeans, Rhiannon
2017-03-01
The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully-licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to "looked-but-failed-to-see" errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes safety relevance had marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians). Copyright © 2017 Elsevier Ltd. All rights reserved.
The Objective Identification and Quantification of Interstitial Lung Abnormalities in Smokers.
Ash, Samuel Y; Harmouche, Rola; Ross, James C; Diaz, Alejandro A; Hunninghake, Gary M; Putman, Rachel K; Onieva, Jorge; Martinez, Fernando J; Choi, Augustine M; Lynch, David A; Hatabu, Hiroto; Rosas, Ivan O; Estepar, Raul San Jose; Washko, George R
2017-08-01
Previous investigation suggests that visually detected interstitial changes in the lung parenchyma of smokers are highly clinically relevant and predict outcomes, including death. Visual subjective analysis to detect these changes is time-consuming, insensitive to subtle changes, and requires training to enhance reproducibility. Objective detection of such changes could provide a method of disease identification without these limitations. The goal of this study was to develop and test a fully automated image processing tool to objectively identify radiographic features associated with interstitial abnormalities in the computed tomography scans of a large cohort of smokers. An automated tool that uses local histogram analysis combined with distance from the pleural surface was used to detect radiographic features consistent with interstitial lung abnormalities in computed tomography scans from 2257 individuals from the Genetic Epidemiology of COPD study, a longitudinal observational study of smokers. The sensitivity and specificity of this tool was determined based on its ability to detect the visually identified presence of these abnormalities. The tool had a sensitivity of 87.8% and a specificity of 57.5% for the detection of interstitial lung abnormalities, with a c-statistic of 0.82, and was 100% sensitive and 56.7% specific for the detection of the visual subtype of interstitial abnormalities called fibrotic parenchymal abnormalities, with a c-statistic of 0.89. In smokers, a fully automated image processing tool is able to identify those individuals who have interstitial lung abnormalities with moderate sensitivity and specificity. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
Space Object Maneuver Detection Algorithms Using TLE Data
NASA Astrophysics Data System (ADS)
Pittelkau, M.
2016-09-01
An important aspect of Space Situational Awareness (SSA) is detection of deliberate and accidental orbit changes of space objects. Although space surveillance systems detect orbit maneuvers within their tracking algorithms, maneuver data are not readily disseminated for general use. However, two-line element (TLE) data are available and can be used to detect maneuvers of space objects. This work is an attempt to improve upon existing TLE-based maneuver detection algorithms. Three adaptive maneuver detection algorithms are developed and evaluated. The first is a fading-memory Kalman filter, which is equivalent to a sliding-window least-squares polynomial fit, but computationally more efficient and adaptive to the noise in the TLE data. The second algorithm is based on a sample cumulative distribution function (CDF) computed from a histogram of the magnitude-squared |ΔV|² of change-in-velocity vectors (ΔV), which are computed from the TLE data. A maneuver detection threshold is computed from the median estimated from the CDF, or from the CDF and a specified probability of false alarm. The third algorithm is a median filter. The median filter is the simplest of a class of nonlinear filters called order-statistics filters, which fall within the theory of robust statistics. The output of the median filter is practically insensitive to outliers, i.e., large maneuvers. The median of the |ΔV|² data is proportional to the variance of ΔV, so the variance is estimated from the output of the median filter. A maneuver is detected when the input data exceed a constant times the estimated variance.
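The third (median-filter) detector can be sketched as follows: the sliding-window median of |ΔV|² is insensitive to the maneuvers themselves, so it tracks the TLE noise floor, and a detection fires when a sample exceeds a multiple of that local floor. The window length and multiplier `k` are illustrative, not values from the paper.

```python
import numpy as np

def detect_maneuvers(dv_sq, window=11, k=9.0):
    """Flag epochs whose |dV|^2 exceeds k times the local sliding-window
    median, a robust (outlier-insensitive) estimate of the noise level."""
    half = window // 2
    padded = np.pad(dv_sq, half, mode="edge")
    floor = np.array([np.median(padded[i:i + window])
                      for i in range(len(dv_sq))])
    return dv_sq > k * floor
```

A sequence of |ΔV|² values derived from consecutive TLEs would be fed in per object.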
Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes
ERIC Educational Resources Information Center
Becker, Mark W.; Rasmussen, Ian P.
2008-01-01
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…
A novel method to detect shadows on multispectral images
NASA Astrophysics Data System (ADS)
Dağlayan Sevim, Hazan; Yardımcı Çetin, Yasemin; Özışık Başkurt, Didem
2016-10-01
Shadowing occurs when the direct light coming from a light source is obstructed by tall man-made structures, mountains, or clouds. Since shadow regions are illuminated only by scattered light, the true spectral properties of the objects are not observed in such regions. Therefore, many object classification and change detection problems utilize shadow detection as a preprocessing step. Besides, shadows are useful for obtaining 3D information about objects, such as estimating the height of buildings. With the pervasiveness of remote sensing images, shadow detection is ever more important. This study aims to develop a shadow detection method for multispectral images based on the transformation of the C1C2C3 space and the contribution of NIR bands. The proposed method is tested on Worldview-2 images covering Ankara, Turkey, acquired at different times. The new index is applied to these 8-band multispectral images with two NIR bands. The method is compared with existing methods in the literature.
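The paper's exact index is not given in the abstract; a plausible sketch combining the C3 component of the C1C2C3 invariant colour space with a NIR test is shown below. The intuition: shadows are bluish (lit mainly by scattered sky light, hence high C3) and dark in the NIR bands. Both thresholds are assumptions.

```python
import numpy as np

def c3(red, green, blue):
    """C3 component of the C1C2C3 invariant colour space: arctan(B / max(R, G))."""
    return np.arctan2(blue, np.maximum(red, green))

def shadow_mask(red, green, blue, nir, c3_t=0.9, nir_t=0.15):
    """Candidate shadow pixels: bluish chromaticity (high C3) and low NIR
    response, from per-band reflectance arrays."""
    return (c3(red, green, blue) > c3_t) & (nir < nir_t)
```

With two NIR bands available, the NIR test could use their mean or require both to be low.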
Change blindness and visual memory: visual representations get rich and act poor.
Varakin, D Alexander; Levin, Daniel T
2006-02-01
Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.
Object detection in natural scenes: Independent effects of spatial and category-based attention.
Stein, Timo; Peelen, Marius V
2017-04-01
Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.
Image denoising based on noise detection
NASA Astrophysics Data System (ADS)
Jiang, Yuanxiang; Yuan, Rui; Sun, Yuqiu; Tian, Jinwen
2018-03-01
Because of noise points in images, any denoising operation can alter the original information of non-noise pixels. A noise detection algorithm based on fractional calculus is therefore proposed for denoising in this paper. First, the image is convolved to obtain directional gradient masks. Then, the mean gray level is calculated to obtain gradient detection maps. Finally, a logical product is taken to acquire the noise-position image. Comparing visual effects and evaluation parameters after processing, the experimental results show that denoising based on noise detection outperforms traditional methods in both subjective and objective terms.
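A hedged sketch of the underlying idea follows: a pixel counts as noise only when the directional gradient tests agree in every direction (the "logical product"), so genuine edges, which deviate in only some directions, are spared. The four-neighbour integer differences and the threshold here are assumptions; the paper uses fractional-order derivatives instead.

```python
import numpy as np

def noise_positions(img, t=60):
    """Flag impulse-noise pixels: a pixel is noise only if its gray level
    deviates by more than t from its neighbour in all four directions."""
    f = img.astype(int)
    pad = np.pad(f, 1, mode="edge")
    # directional absolute differences: up, down, left, right
    diffs = [np.abs(f - pad[:-2, 1:-1]), np.abs(f - pad[2:, 1:-1]),
             np.abs(f - pad[1:-1, :-2]), np.abs(f - pad[1:-1, 2:])]
    mask = np.ones(img.shape, dtype=bool)
    for d in diffs:          # logical product across directions
        mask &= d > t
    return mask
```

Denoising would then replace only the flagged pixels (e.g. with a local median), leaving non-noise pixels untouched.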
A survey on object detection in optical remote sensing images
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei
2016-07-01
Object detection in optical remote sensing images, a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as buildings or roads, we concentrate on more generic object categories including, but not limited to, roads, buildings, trees, vehicles, ships, airports, and urban areas. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. It is our hope that this survey will help researchers gain a better understanding of this research field.
Salient object detection method based on multiple semantic features
NASA Astrophysics Data System (ADS)
Wang, Chunyang; Yu, Chunyan; Song, Meiping; Wang, Yulei
2018-04-01
Existing salient object detection models can often detect only the approximate location of the salient object, or may mistakenly highlight the background. To resolve this problem, a salient object detection method based on image semantic features is proposed. First, three novel saliency features are presented in this paper: an object edge density feature (EF), an object semantic feature based on the convex hull (CF), and an object lightness contrast feature (LF). Second, these salient features are trained with random detection windows. Third, a Naive Bayesian model is used to combine the features for salient detection. Results on public datasets show that our method performs well: the location of the salient object can be fixed, and the salient object can be accurately detected and marked by the specific window.
A New 3D Object Pose Detection Method Using LIDAR Shape Set
Kim, Jung-Un
2018-01-01
In object detection systems for autonomous driving, LIDAR sensors provide very useful information. However, problems occur because the object representation is greatly distorted by changes in distance. To solve this problem, we propose a LIDAR shape set that reconstructs the shape surrounding the object more clearly by using the LIDAR point information projected on the object. The LIDAR shape set restores object shape edges from a bird’s eye view by filtering LIDAR points projected on a 2D pixel-based front view. In this study, we use this shape set for two purposes. The first is to supplement the shape set with a LIDAR feature map, and the second is to divide the entire shape set according to the gradient of the depth and density to create 2D and 3D bounding box proposals for each object. We present a multimodal fusion framework that classifies objects and restores the 3D pose of each object using enhanced feature maps and shape-based proposals. The network structure consists of a VGG-based object classifier that receives multiple inputs and a LIDAR-based Region Proposal Network (RPN) that identifies object poses. It works in a very intuitive and efficient manner and can be extended to classes other than vehicles. Our approach outperforms the latest studies on the KITTI data sets in both object classification accuracy (Average Precision, AP) and 3D pose restoration accuracy (3D bounding box recall rate). PMID:29547551
A New 3D Object Pose Detection Method Using LIDAR Shape Set.
Kim, Jung-Un; Kang, Hang-Bong
2018-03-16
In object detection systems for autonomous driving, LIDAR sensors provide very useful information. However, problems occur because the object representation is greatly distorted by changes in distance. To solve this problem, we propose a LIDAR shape set that reconstructs the shape surrounding the object more clearly by using the LIDAR point information projected on the object. The LIDAR shape set restores object shape edges from a bird's eye view by filtering LIDAR points projected on a 2D pixel-based front view. In this study, we use this shape set for two purposes. The first is to supplement the shape set with a LIDAR feature map, and the second is to divide the entire shape set according to the gradient of the depth and density to create 2D and 3D bounding box proposals for each object. We present a multimodal fusion framework that classifies objects and restores the 3D pose of each object using enhanced feature maps and shape-based proposals. The network structure consists of a VGG-based object classifier that receives multiple inputs and a LIDAR-based Region Proposal Network (RPN) that identifies object poses. It works in a very intuitive and efficient manner and can be extended to classes other than vehicles. Our approach outperforms the latest studies on the KITTI data sets in both object classification accuracy (Average Precision, AP) and 3D pose restoration accuracy (3D bounding box recall rate).
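Projecting LIDAR points onto a bird's-eye-view grid, the starting point for the shape-set views described above, can be sketched as follows. The grid extents, cell size, and count-based density channel are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def birds_eye_grid(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=0.25):
    """Project LIDAR points (N, 3) onto a bird's-eye-view occupancy grid;
    each cell stores the point count, a crude density channel."""
    x, y = points[:, 0], points[:, 1]
    keep = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    xi = ((x[keep] - x_range[0]) / cell).astype(int)
    yi = ((y[keep] - y_range[0]) / cell).astype(int)
    h = int((x_range[1] - x_range[0]) / cell)
    w = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((h, w), dtype=np.int32)
    np.add.at(grid, (xi, yi), 1)  # unbuffered accumulation handles duplicates
    return grid
```

Additional channels (maximum height, intensity) could be accumulated the same way to build a richer feature map.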
Evaluation of experimental UAV video change detection
NASA Astrophysics Data System (ADS)
Bartelsen, J.; Saur, G.; Teutsch, C.
2016-10-01
During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, i.e., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Krüger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects.
The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition and based on our approach, we illustrate the results showing the change detection in short, but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material, capable of change detection, will be acquired.
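The registration-then-difference step described above can be sketched with a known homography and nearest-neighbour inverse warping; in practice the homography would be estimated from frame correspondences, and the difference analysis would include the error-suppression steps the paper mentions. All parameter values here are illustrative.

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Inverse-warp a grayscale image by homography H (maps output pixel to
    source pixel), nearest-neighbour sampling; out-of-bounds pixels become 0."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H @ pts
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    valid = ((0 <= sx) & (sx < img.shape[1]) &
             (0 <= sy) & (sy < img.shape[0]))
    out = np.zeros(out_shape, dtype=img.dtype)
    out[ys.ravel()[valid], xs.ravel()[valid]] = img[sy[valid], sx[valid]]
    return out

def change_mask(before, after, H, t=30):
    """Register `after` onto `before` with H, then threshold the absolute
    gray-level difference to obtain a binary change mask."""
    reg = warp_homography(after, H, before.shape)
    return np.abs(before.astype(int) - reg.astype(int)) > t
```

Residual misregistration, noise, and parallax would produce false detections that the paper's difference analysis is designed to suppress.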
NASA Astrophysics Data System (ADS)
El-Abbas, Mustafa M.; Csaplovics, Elmar; Deafalla, Taisser H.
2013-10-01
Nowadays, remote-sensing technologies are becoming increasingly linked to the issue of deforestation. They offer a systematic and objective strategy to document, understand, and simulate the deforestation process and its associated causes. In this context, the main goal of this study, conducted in the Blue Nile region of Sudan, where most of the natural habitats have been dramatically destroyed, was to develop spatial methodologies to assess deforestation dynamics and associated factors. To achieve this, optical multispectral satellite scenes (i.e., ASTER and LANDSAT), integrated with a field survey and multiple other data sources, were used for the analyses. Spatiotemporal Object Based Image Analysis (STOBIA) was applied to assess the change dynamics within the period of study. Broadly, these analyses include object-based (OB) classifications, post-classification change detection, data fusion, information extraction, and spatial analysis. Hierarchical multi-scale segmentation thresholds were applied, and each class was delimited with semantic meaning by a set of rules associated with membership functions. The fused multi-temporal data were then used to create detailed objects of change classes from the input LU/LC classes. The dynamic changes were quantified and spatially located, and the spatial and contextual relations with adjacent areas were analyzed. The main finding of the present study is that forest areas decreased drastically, with the conversion of forest into agricultural fields and grassland being the main driving force of deforestation. In contrast, the capability of the area to recover was clearly observed. The study concludes with a brief assessment of an 'oriented' framework, focused on the alarming areas where serious dynamics are located and where urgent plans and interventions are most critical, guided by potential solutions based on the identified driving forces.
van Lamsweerde, Amanda E; Beck, Melissa R; Elliott, Emily M
2015-02-01
The ability to remember feature bindings is an important measure of the ability to maintain objects in working memory (WM). In this study, we investigated whether both object- and feature-based representations are maintained in WM. Specifically, we tested the hypotheses that retaining a greater number of feature representations (i.e., both as individual features and bound representations) results in a more robust representation of individual features than of feature bindings, and that retrieving information from long-term memory (LTM) into WM would cause a greater disruption to feature bindings. In four experiments, we examined the effects of retrieving a word from LTM on shape and color-shape binding change detection performance. We found that binding changes were more difficult to detect than individual-feature changes overall, but that the cost of retrieving a word from LTM was the same for both individual-feature and binding changes.
NASA Astrophysics Data System (ADS)
Ban, Yifang; Gong, Peng; Gamba, Paolo; Taubenbock, Hannes; Du, Peijun
2016-08-01
The overall objective of this research is to investigate multi-temporal, multi-scale, multi-sensor satellite data for analysis of urbanization and environmental/climate impact in China to support sustainable planning. Multi-temporal, multi-scale SAR and optical data have been evaluated for urban information extraction using innovative methods and algorithms, including the KTH-Pavia Urban Extractor, Pavia UEXT, an "exclusion-inclusion" framework for urban extent extraction, and KTH-SEG, a novel object-based classification method for detailed urban land cover mapping. Various pixel-based and object-based change detection algorithms were also developed to extract urban changes. Several Chinese cities, including Beijing, Shanghai and Guangzhou, were selected as study areas. Spatio-temporal urbanization patterns and environmental impact at the regional, metropolitan, and city-core levels were evaluated through ecosystem services, landscape metrics, spatial indices, and/or their combinations. The relationship between land surface temperature and land-cover classes was also analyzed. The urban extraction results showed that urban areas and small towns could be well extracted from multitemporal SAR data with the KTH-Pavia Urban Extractor and UEXT. The fusion of SAR data at multiple scales from multiple sensors was proven to improve urban extraction. For urban land cover mapping, the results show that the fusion of multitemporal SAR and optical data could produce detailed land cover maps with higher accuracy than SAR or optical data alone. The pixel-based and object-based change detection algorithms developed within the project were effective in extracting urban changes.
Based on the urban land cover results from multitemporal, multisensor data, the environmental impact analysis indicates major losses in food supply, noise reduction, runoff mitigation, waste treatment, and global climate regulation services through landscape structural changes, in terms of decreases in service area, edge contamination, and fragmentation. In terms of climate impact, the results indicate that land surface temperature can be related to land-use/land-cover classes.
NASA Astrophysics Data System (ADS)
Ye, Su; Chen, Dongmei; Yu, Jie
2016-04-01
In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on a one-class classifier, Support Vector Domain Description (SVDD), a novel algorithm named "Three-layer SVDD Fusion (TLSF)" is developed specifically for targeted change detection. The proposed algorithm combines one-class classification generated from change vector maps, as well as before- and after-change images, to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing this algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of the proposed algorithm is superior (Kappa statistics are 86.3% and 87.8% for Cases 1 and 2, respectively) to applying SVDD to change vector analysis and to post-classification comparison.
[The role of sustained attention in shift-contingent change blindness].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2015-02-01
Previous studies of change blindness have examined the effect of temporal factors (e.g., blank duration) on attention in change detection. This study examined the effect of spatial factors (i.e., whether the locations of original and changed objects are the same or different) on attention in change detection, using a shift-contingent change blindness task. We used a flicker paradigm in which the location of a to-be-judged target image was manipulated (shift, no-shift). In shift conditions, the image of an array of objects was spatially shifted so that all objects appeared in new locations; in no-shift conditions, all object images of an array appeared at the same location. The presence of visual stimuli (dots) in the blank display between the two images was manipulated (dot, no-dot) under the assumption that abrupt onsets of these stimuli would capture attention. Results indicated that change detection performance was improved by exogenous attentional capture in the shift condition. Thus, we suggest that attention can play an important role in change detection during shift-contingent change blindness.
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2016-10-01
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit it to be used in critical situations such as search and rescue, defence, and disaster monitoring.
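The background-estimation-and-rejection idea can be sketched at the pixel level as follows. The running mean/variance model, the thresholds, and the rule of excluding flagged pixels from the update (so transiting objects do not bias the background) are illustrative assumptions rather than the authors' exact algorithm:

```python
import numpy as np

def detect_dim_objects(frames, alpha=0.05, k=4.0):
    """Pixel-level background estimation with rejection.

    A running mean/variance models the background; pixels deviating by
    more than k standard deviations are flagged as objects, and flagged
    pixels are excluded from the update so transiting objects do not
    bias the background estimate. alpha and k are illustrative values.
    """
    bg = frames[0].astype(float)
    var = np.full_like(bg, 4.0)               # initial noise-variance guess
    masks = []
    for frame in frames[1:]:
        resid = frame.astype(float) - bg
        mask = np.abs(resid) > k * np.sqrt(var)   # hot OR cold objects
        keep = ~mask                              # update background only
        bg[keep] += alpha * resid[keep]           # where no object is seen
        var[keep] += alpha * (resid[keep] ** 2 - var[keep])
        masks.append(mask)
    return masks

# Tiny synthetic IR sequence: flat noisy background plus a single-pixel
# "hot" object that appears in the later frames.
rng = np.random.default_rng(1)
frames = [100 + rng.normal(0, 1, (16, 16)) for _ in range(30)]
for f in frames[20:]:
    f[8, 8] += 25.0                          # dim-but-detectable point target

masks = detect_dim_objects(np.array(frames))
print(masks[-1][8, 8], masks[-1].sum())
```

All operations are simple per-pixel updates, which is what makes this family of methods cheap enough for real-time use on modest hardware.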
Yang, Limin; Xian, George Z.; Klaver, Jacqueline M.; Deal, Brian
2003-01-01
We developed a Sub-pixel Imperviousness Change Detection (SICD) approach to detect urban land-cover changes using Landsat and high-resolution imagery. The sub-pixel percent imperviousness was mapped for two dates (09 March 1993 and 11 March 2001) over western Georgia using a regression tree algorithm. The accuracy of the predicted imperviousness was reasonable based on a comparison using independent reference data. The average absolute error between predicted and reference data was 16.4 percent for 1993 and 15.3 percent for 2001. The correlation coefficient (r) was 0.73 for 1993 and 0.78 for 2001. Areas with a significant increase (greater than 20 percent) in impervious surface from 1993 to 2001 were mostly related to known land-cover/land-use changes that occurred in this area, suggesting that the spatial change of an impervious surface is a useful indicator for identifying the spatial extent, intensity, and, potentially, type of urban land-cover/land-use changes. Compared to other pixel-based change-detection methods (band differencing, ratioing, change vector, post-classification), information on changes in sub-pixel percent imperviousness allows users to quantify and interpret urban land-cover/land-use changes based on their own definition. Such information is considered complementary to products generated using other change-detection methods. In addition, the procedure for mapping imperviousness is objective and repeatable and hence can be used for monitoring urban land-cover/land-use change over a large geographic area. Potential applications and limitations of the products developed through this study in urban environmental studies are also discussed.
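A minimal sketch of the mapping-then-differencing workflow, using scikit-learn's DecisionTreeRegressor in place of the study's regression tree and entirely synthetic band/imperviousness data (the linear band-to-imperviousness relation below is an assumption for illustration only):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)

# Synthetic training pixels: 6 "Landsat-like" bands -> percent impervious.
# The study trained its regression tree on real high-resolution reference
# data; this relation is invented purely to exercise the workflow.
bands = rng.uniform(0, 1, size=(2000, 6))
impervious = np.clip(100 * (0.7 * bands[:, 0] + 0.3 * bands[:, 3])
                     + rng.normal(0, 5, 2000), 0, 100)

tree = DecisionTreeRegressor(max_depth=8, random_state=0)
tree.fit(bands, impervious)

# Predict imperviousness for two dates, difference the maps, and flag
# pixels whose estimate rose by more than 20 points (the paper's
# threshold for a significant increase).
pixels_t1 = rng.uniform(0, 1, size=(500, 6))
pixels_t2 = pixels_t1.copy()
pixels_t2[:50, [0, 3]] += 0.4            # simulate new development
pred_t1 = tree.predict(pixels_t1)
pred_t2 = tree.predict(np.clip(pixels_t2, 0, 1))
changed = (pred_t2 - pred_t1) > 20
print(changed[:50].mean(), changed[50:].mean())
```

Differencing a continuous per-pixel quantity, rather than comparing hard class labels, is what lets users set their own change threshold after the fact.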
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications owing to their flexibility and their ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; in particular, camera sensors are now integrated. Mounted cameras provide the flexibility to monitor several regions of interest by displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, evidence of the target is provided at the beginning of the vehicle's route. The vehicle then inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object and encoding it in a hierarchical tree with different sampling densities. The coding space is a binary space of very high dimension; properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate between general and particular features of the target. The hierarchy serves as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, the approach is tested in several outdoor scenarios, showing that the hierarchical algorithm works efficiently under diverse conditions.
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. 
A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case one must account for seeing only a small portion of the region of interest at a time and recognize when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and on whether that object was detected in the most recent time step. This value then feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
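A toy version of the probability-of-existence update and the SPRT-style status decision might look like the following; the likelihood model and thresholds are generic illustrations, not the authors' derived equations:

```python
def update_existence(p, detected, p_d, p_fa):
    """One Bayesian update of an object's probability of existence.

    p    : prior probability that the object exists
    p_d  : probability of detecting the object if it exists (may vary
           with sensor range, hence a 'variable probability of detection')
    p_fa : probability of a spurious detection if it does not exist
    """
    if detected:
        num, den = p_d * p, p_d * p + p_fa * (1 - p)
    else:
        num, den = (1 - p_d) * p, (1 - p_d) * p + (1 - p_fa) * (1 - p)
    return num / den

def sprt_status(p, confirm=0.95, delete=0.05):
    """Threshold the running probability, SPRT-style."""
    if p >= confirm:
        return "confirmed"
    if p <= delete:
        return "deleted"
    return "suspected"

# An object re-detected on 5 consecutive passes while in sensor range...
p = 0.5
for _ in range(5):
    p = update_existence(p, detected=True, p_d=0.7, p_fa=0.1)
status_seen = sprt_status(p)

# ...versus a clutter hypothesis that is never re-detected.
q = 0.5
for _ in range(5):
    q = update_existence(q, detected=False, p_d=0.7, p_fa=0.1)
status_missed = sprt_status(q)
print(round(p, 3), status_seen, round(q, 3), status_missed)
```

Outside sensor range one would simply set p_d near zero, so missed detections there barely lower the probability of existence, which captures the intuition of the variable probability of detection.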
Building Change Detection from Bi-Temporal Dense-Matching Point Clouds and Aerial Images.
Pang, Shiyan; Hu, Xiangyun; Cai, Zhongliang; Gong, Jinqi; Zhang, Mi
2018-03-24
In this work, a novel building change detection method from bi-temporal dense-matching point clouds and aerial images is proposed to address two major problems, namely, the robust acquisition of the changed objects above ground and the automatic classification of changed objects into buildings or non-buildings. For the acquisition of changed objects above ground, the change detection problem is converted into a binary classification, in which the changed area above ground is regarded as the foreground and the remaining area as the background. For the gridded points of each period, the graph cuts algorithm is adopted to classify the points into foreground and background, followed by a region-growing algorithm to form candidate changed building objects. A novel structural feature extracted from the aerial images is then used to classify the candidate changed building objects into buildings and non-buildings. The changed building objects are further classified as "newly built", "taller", "demolished", and "lower" by combining this classification with the digital surface models of the two periods. Finally, three typical areas from a large dataset are used to validate the proposed method. Numerous experiments demonstrate the effectiveness of the proposed algorithm.
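The foreground-to-objects step can be illustrated with a simple 4-connected region-growing pass over a binary mask obtained by differencing two digital surface models. The graph-cuts classification is assumed to have already produced the mask, and the 2.5 m threshold and minimum region size below are invented for the example:

```python
import numpy as np
from collections import deque

def grow_regions(mask, min_size=3):
    """Label 4-connected regions of a boolean foreground mask,
    keeping only candidate objects of at least min_size cells."""
    labels = np.zeros(mask.shape, int)
    next_label = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                       # already part of a region
        next_label += 1
        region = [(i, j)]
        labels[i, j] = next_label
        queue = deque(region)
        while queue:                       # breadth-first growth
            a, b = queue.popleft()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = a + da, b + db
                if (0 <= u < mask.shape[0] and 0 <= v < mask.shape[1]
                        and mask[u, v] and not labels[u, v]):
                    labels[u, v] = next_label
                    region.append((u, v))
                    queue.append((u, v))
        if len(region) < min_size:         # drop tiny speckle regions
            for a, b in region:
                labels[a, b] = 0
    return labels

# Height difference between two gridded surfaces; cells rising by more
# than 2.5 m become foreground, then candidate objects are grown.
dsm_t1 = np.zeros((8, 8))
dsm_t2 = np.zeros((8, 8))
dsm_t2[2:5, 2:5] = 6.0        # a "newly built" block
dsm_t2[6, 6] = 6.0            # isolated noise cell, too small to keep
labels = grow_regions((dsm_t2 - dsm_t1) > 2.5)
print(labels.max(), (labels == 1).sum())
```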
A Motion Detection Algorithm Using Local Phase Information
Lazar, Aurel A.; Ukani, Nikul H.; Zhou, Yiyin
2016-01-01
Previous research demonstrated that global phase alone can be used to faithfully represent visual scenes. Here we provide a reconstruction algorithm by using only local phase information. We also demonstrate that local phase alone can be effectively used to detect local motion. The local phase-based motion detector is akin to models employed to detect motion in biological vision, for example, the Reichardt detector. The local phase-based motion detection algorithm introduced here consists of two building blocks. The first building block measures/evaluates the temporal change of the local phase. The temporal derivative of the local phase is shown to exhibit the structure of a second order Volterra kernel with two normalized inputs. We provide an efficient, FFT-based algorithm for implementing the change of the local phase. The second processing building block implements the detector; it compares the maximum of the Radon transform of the local phase derivative with a chosen threshold. We demonstrate examples of applying the local phase-based motion detection algorithm on several video sequences. We also show how the locally detected motion can be used for segmenting moving objects in video scenes and compare our local phase-based algorithm to segmentation achieved with a widely used optic flow algorithm. PMID:26880882
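The core signal exploited by such a detector, namely that translation of image content appears as a change in local phase, can be shown in a few lines. The fixed Fourier component, patch layout, and Gaussian blob below are illustrative choices; none of the paper's Volterra-kernel or Radon-transform machinery is reproduced here:

```python
import numpy as np

def local_phase(patch, k=(0, 1)):
    # Phase of one fixed low-frequency Fourier component of the patch.
    return np.angle(np.fft.fft2(patch)[k])

def blob(cx, cy, n=16):
    # Gaussian intensity blob centered at (cx, cy) on an n x n patch.
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 8.0)

# Two frames split into a "moving" left patch and a "static" right patch.
frame1 = np.zeros((16, 32))
frame2 = np.zeros((16, 32))
frame1[:, :16] = blob(4, 8)
frame2[:, :16] = blob(5, 8)      # shifted one pixel to the right
frame1[:, 16:] = blob(4, 8)
frame2[:, 16:] = blob(4, 8)      # unchanged

# Temporal change of local phase: roughly 2*pi/16 for a one-pixel shift
# on a 16-pixel-wide patch, and exactly zero where nothing moved.
dphi_moving = abs(local_phase(frame2[:, :16]) - local_phase(frame1[:, :16]))
dphi_static = abs(local_phase(frame2[:, 16:]) - local_phase(frame1[:, 16:]))
print(round(dphi_moving, 3), round(dphi_static, 3))
```

Thresholding such phase derivatives per patch yields a crude motion map, which is the quantity the paper's detector refines via the Radon transform.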
NASA Astrophysics Data System (ADS)
Straub, Jeremy
2017-06-01
Prior work by Zeltmann et al. has demonstrated the impact of small defects and other irregularities on the structural integrity of 3D printed objects, and posited that such defects could be introduced intentionally. The current work looks at the impact of changing the fill level on object structural integrity. It considers whether the existence of an appropriate level of fill can be determined through visible-light imagery-based assessment of a 3D printed object. A technique for assessing the quality and sufficiency of quantity of 3D printed fill material is presented. It is assessed experimentally, and the results are presented and analyzed.
Guidance of attention to objects and locations by long-term memory of natural scenes.
Becker, Mark W; Rasmussen, Ian P
2008-11-01
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.
Liu, Yanjie; Han, Haijun; Liu, Tao; Yi, Jingang; Li, Qingguo; Inoue, Yoshio
2016-01-01
Real-time detection of contact states, such as stick-slip interaction between a robot and an object on its end effector, is crucial for the robot to grasp and manipulate the object steadily. This paper presents a novel tactile sensor based on electromagnetic induction and its application on stick-slip interaction. An equivalent cantilever-beam model of the tactile sensor was built and capable of constructing the relationship between the sensor output and the friction applied on the sensor. With the tactile sensor, a new method to detect stick-slip interaction on the contact surface between the object and the sensor is proposed based on the characteristics of friction change. Furthermore, a prototype was developed for a typical application, stable wafer transferring on a wafer transfer robot, by considering the spatial magnetic field distribution and the sensor size according to the requirements of wafer transfer. The experimental results validate the sensing mechanism of the tactile sensor and verify its feasibility of detecting stick-slip on the contact surface between the wafer and the sensor. The sensing mechanism also provides a new approach to detect the contact state on the soft-rigid surface in other robot-environment interaction systems. PMID:27023545
Object tracking algorithm based on the color histogram probability distribution
NASA Astrophysics Data System (ADS)
Li, Ning; Lu, Tongwei; Zhang, Yanduo
2018-04-01
To address tracking failures caused by target occlusion and by distractor objects in the background that resemble the target, and to reduce the influence of changing light intensity, this paper uses the HSV and YCbCr color channels to correct the updated center of the target and continuously updates the image threshold for self-adaptive target detection. Clustering roughly delimits the initial range of obstacles, shortening the threshold range so the target can be detected as reliably as possible. To improve the accuracy of the detector, a Kalman filter is added to estimate the target state area. A direction predictor based on a Markov model is added to realize target state estimation under background color interference and to enhance the ability of the detector to discriminate similar objects. The experimental results show that the improved algorithm is more accurate and processes frames faster.
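The role of the Kalman filter described above, predicting the target's state so tracking can continue through occlusion or distractor interference, can be sketched with a generic constant-velocity filter; the state model and noise levels are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Constant-velocity Kalman filter for the target's image position.
# State: [x, y, vx, vy]; all noise levels below are illustrative.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)                 # process noise
R = 4.0 * np.eye(2)                  # measurement noise

x = np.zeros(4)                      # initial state estimate
P = 10.0 * np.eye(4)                 # initial uncertainty

def kf_step(x, P, z=None):
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    if z is not None:                # update only when the detector fires
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x, P

# Target moves at (2, 1) px/frame; detection drops out on frames 6-8
# (simulated occlusion), yet the filter keeps predicting the track.
rng = np.random.default_rng(3)
for t in range(1, 13):
    truth = np.array([2.0 * t, 1.0 * t])
    z = None if t in (6, 7, 8) else truth + rng.normal(0, 1, 2)
    x, P = kf_step(x, P, z)

err = np.hypot(x[0] - 24.0, x[1] - 12.0)
print(round(err, 2))
```

During the dropout frames the predicted state still defines a search region, which is exactly what lets a color-based detector reacquire the target after occlusion.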
Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion
Calabro, Finnegan J.; Vaina, Lucia Maria
2016-01-01
Background Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods 16 right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results Statistical analyses of performance on the test experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. Conclusions These results have important implications for the type of visually guided navigation that can be performed by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114
Recurrent neural network based virtual detection line
NASA Astrophysics Data System (ADS)
Kadikis, Roberts
2018-04-01
The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.
NASA Astrophysics Data System (ADS)
Doko, Tomoko; Chen, Wenbo; Higuchi, Hiroyoshi
2016-06-01
Satellite tracking technology has been used to reveal the migration patterns and flyways of migratory birds. In general, bird migration can be classified according to migration status: the wintering period, spring migration, the breeding period, and autumn migration. To determine migration status, the periods of these statuses should be individually determined, but there is no objective method to define a 'threshold date' for when an individual bird changes its status. The research objective is to develop an effective and objective method to determine threshold dates of migration status based on satellite-tracked data. The developed method was named the "MATCHED (Migratory Analytical Time Change Easy Detection) method". To demonstrate the method, data acquired from satellite-tracked Tundra Swans were used. The MATCHED method is composed of six steps: 1) dataset preparation, 2) time frame creation, 3) automatic identification, 4) visualization of change points, 5) interpretation, and 6) manual correction. Accuracy was tested. In general, the MATCHED method proved powerful in identifying the change points between migration statuses as well as stopovers. Nevertheless, identifying "exact" threshold dates remains challenging. Limitations and applications of this method are discussed.
Yang, Ping; Fan, Chenggui; Wang, Min; Fogelson, Noa; Li, Ling
2017-08-15
Object identity and location are bound together to form a unique integration that is maintained and processed in visual working memory (VWM). Changes in task-irrelevant object location have been shown to impair the retrieval of memorial representations and the detection of object identity changes. However, the neural correlates of this cognitive process remain largely unknown. In the present study, we aim to investigate the underlying brain activation during object color change detection and the modulatory effects of changes in object location and VWM load. To this end we used simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings, which can reveal the neural activity with both high temporal and high spatial resolution. Subjects responded faster and with greater accuracy in the repeated compared to the changed object location condition, when a higher VWM load was utilized. These results support the spatial congruency advantage theory and suggest that it is more pronounced with higher VWM load. Furthermore, the spatial congruency effect was associated with larger posterior N1 activity, greater activation of the right inferior frontal gyrus (IFG) and less suppression of the right supramarginal gyrus (SMG), when object location was repeated compared to when it was changed. The ERP-fMRI integrative analysis demonstrated that the object location discrimination-related N1 component is generated in the right SMG.
Change detection in satellite images
NASA Astrophysics Data System (ADS)
Thonnessen, U.; Hofele, G.; Middelmann, W.
2005-05-01
Change detection plays an important role in different military areas such as strategic reconnaissance, verification of armament and disarmament control, and damage assessment. It is the process of identifying differences in the state of an object or phenomenon by observing it at different times. The availability of spaceborne reconnaissance systems with high spatial resolution, multispectral capabilities, and short revisit times offers new perspectives for change detection. Before performing any kind of change detection it is necessary to separate changes of interest from changes caused by differences in data acquisition parameters. In these cases it is necessary to perform pre-processing to correct the data or to normalize it. Image registration and, corresponding to this task, the ortho-rectification of the image data are further prerequisites for change detection. If feasible, a 1-to-1 geometric correspondence should be sought. Change detection on an iconic level with a subsequent interpretation of the changes by the observer is often proposed; nevertheless, an automatic knowledge-based analysis delivering the interpretation of the changes on a semantic level should be the aim for the future. We present first results of change detection on a structural level concerning urban areas. After pre-processing, the images are segmented into areas of interest and structural analysis is applied to these regions to extract descriptions of urban infrastructure such as buildings, roads, and refinery tanks. These descriptions are matched to detect changes and similarities.
Diffraction mode terahertz tomography
Ferguson, Bradley; Wang, Shaohong; Zhang, Xi-Cheng
2006-10-31
A method of obtaining a series of images of a three-dimensional object. The method includes the steps of transmitting pulsed terahertz (THz) radiation through the entire object from a plurality of angles, optically detecting changes in the transmitted THz radiation using pulsed laser radiation, and constructing a plurality of imaged slices of the three-dimensional object using the detected changes in the transmitted THz radiation. The THz radiation is transmitted through the object as a two-dimensional array of parallel rays. The optical detection is an array of detectors such as a CCD sensor.
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers; a more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective to promote safety in the operation of ground test facilities, and aviation safety in general, by allowing the detection of the gradual onset of structural changes and damage.
The elephant in the room: Inconsistency in scene viewing and representation.
Spotorno, Sara; Tatler, Benjamin W
2017-10-01
We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. Although inconsistent and consistent objects were not studied in direct competition with each other (each was studied in competition with diagnostic objects), inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Unbound or distant planetary mass population detected by gravitational microlensing.
2011-05-19
Since 1995, more than 500 exoplanets have been detected using different techniques, of which 12 were detected with gravitational microlensing. Most of these are gravitationally bound to their host stars. There is some evidence of free-floating planetary-mass objects in young star-forming regions, but these objects are limited to massive objects of 3 to 15 Jupiter masses with large uncertainties in photometric mass estimates and their abundance. Here, we report the discovery of a population of unbound or distant Jupiter-mass objects, which are almost twice (1.8, +1.7/-0.8) as common as main-sequence stars, based on two years of gravitational microlensing survey observations towards the Galactic Bulge. These planetary-mass objects have no host stars that can be detected within about ten astronomical units by gravitational microlensing. However, a comparison with constraints from direct imaging suggests that most of these planetary-mass objects are not bound to any host star. An abrupt change in the mass function at about one Jupiter mass favours the idea that their formation process is different from that of stars and brown dwarfs. They may have formed in proto-planetary disks and subsequently scattered into unbound or very distant orbits.
Polarized object detection in crabs: a two-channel system.
Basnak, Melanie Ailín; Pérez-Schuster, Verónica; Hermitte, Gabriela; Berón de Astrada, Martín
2018-05-25
Many animal species take advantage of polarization vision for vital tasks such as orientation, communication and contrast enhancement. Previous studies have suggested that decapod crustaceans use a two-channel polarization system for contrast enhancement. Here, we characterize the polarization contrast sensitivity in a grapsid crab. We estimated the polarization contrast sensitivity of the animals by quantifying both their escape response and changes in heart rate when presented with polarized motion stimuli. The motion stimulus consisted of an expanding disk with an 82 deg polarization difference between the object and the background. More than 90% of animals responded by freezing or trying to avoid the polarized stimulus. In addition, we co-rotated the electric vector (e-vector) orientation of the light from the object and background by increments of 30 deg and found that the animals' escape response varied periodically with a 90 deg period. Maximum escape responses were obtained for object and background e-vectors near the vertical and horizontal orientations. Changes in cardiac response showed parallel results but also a minimum response when e-vectors of object and background were shifted by 45 deg with respect to the maxima. These results are consistent with an orthogonal receptor arrangement for the detection of polarized light, in which two channels are aligned with the vertical and horizontal orientations. It has been hypothesized that animals with object-based polarization vision rely on a two-channel detection system analogous to that of color processing in dichromats. Our results, obtained by systematically varying the e-vectors of object and background, provide strong empirical support for this theoretical model of polarized object detection. © 2018. Published by The Company of Biologists Ltd.
"Change deafness" arising from inter-feature masking within a single auditory object.
Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria
2014-03-01
Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the ongoing regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness": although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.
Detecting single viruses and nanoparticles using whispering gallery microlasers.
He, Lina; Ozdemir, Sahin Kaya; Zhu, Jiangang; Kim, Woosung; Yang, Lan
2011-06-26
There is a strong demand for portable systems that can detect and characterize individual pathogens and other nanoscale objects without the use of labels, for applications in human health, homeland security, environmental monitoring and diagnostics. However, most nanoscale objects of interest have low polarizabilities due to their small size and low refractive index contrast with the surrounding medium. This leads to weak light-matter interactions, and thus makes the label-free detection of single nanoparticles very difficult. Micro- and nano-photonic devices have emerged as highly sensitive platforms for such applications, because the combination of high quality factor Q and small mode volume V leads to significantly enhanced light-matter interactions. For example, whispering gallery mode microresonators have been used to detect and characterize single influenza virions and polystyrene nanoparticles with a radius of 30 nm (ref. 12) by measuring in the transmission spectrum either the resonance shift or mode splitting induced by the nanoscale objects. Increasing Q leads to a narrower resonance linewidth, which makes it possible to resolve smaller changes in the transmission spectrum, and thus leads to improved performance. Here, we report a whispering gallery mode microlaser-based real-time and label-free detection method that can detect individual 15-nm-radius polystyrene nanoparticles, 10-nm gold nanoparticles and influenza A virions in air, and 30 nm polystyrene nanoparticles in water. Our approach relies on measuring changes in the beat note that is produced when an ultra-narrow emission line from a whispering gallery mode microlaser is split into two modes by a nanoscale object, and these two modes then interfere. The ultimate detection limit is set by the laser linewidth, which can be made much narrower than the resonance linewidth of any passive resonator. 
This means that microlaser sensors have the potential to detect objects that are too small to be detected by passive resonator sensors.
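The beat-note principle above lends itself to a short numerical illustration. The sketch below is not the authors' implementation; the tone frequencies, sampling rate and detector bandwidth are invented stand-ins for the split laser modes and the photodetector, chosen so that the optical-scale terms fall outside the detection band and only the beat at the mode-splitting frequency survives:

```python
import numpy as np

def two_mode_intensity(f1, f2, fs, duration):
    """Detected intensity |E1 + E2|^2 for two interfering modes
    (toy stand-in for the split whispering-gallery laser lines)."""
    t = np.arange(0, duration, 1 / fs)
    field = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
    return field ** 2

def beat_frequency(signal, fs, bandwidth):
    """Dominant frequency within the detector bandwidth.  The fast
    terms (2*f1, 2*f2, f1+f2) fall outside the band, so the argmax
    lands on the difference frequency |f1 - f2|, i.e. the beat note."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))  # drop DC
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    spec[freqs > bandwidth] = 0                          # detector band limit
    return freqs[spec.argmax()]
```

A particle landing on the resonator changes the splitting, so in this picture detection amounts to tracking a shift of that single spectral peak.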
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geo-spatial objects and military establishments, an airport is always a key target in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is very important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. The corresponding key technologies are discussed in detail, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on gradual elimination of non-building points, 3D change detection between the old airport model and LIDAR data, and import of typical CAD models. Finally, based on these technologies, we developed a prototype system, and the results show that our method achieves good results.
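The feature-extraction step above builds on Otsu thresholding. The paper's specific modification is not described here; as a hedged illustration, the classic Otsu criterion (pick the grey level that maximizes between-class variance of the histogram) can be sketched as follows:

```python
import numpy as np

def otsu_threshold(gray):
    """Classic Otsu threshold for an 8-bit image: the grey level that
    maximizes the between-class variance of the intensity histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                       # normalized histogram
    omega = np.cumsum(p)                        # class-0 probability
    mu = np.cumsum(p * np.arange(256))          # class-0 mean * omega
    mu_t = mu[-1]                               # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0          # empty classes at the ends
    return int(np.argmax(sigma_b))
```

On a clearly bimodal runway-versus-background histogram, the returned level separates the two modes; a modified variant would replace or weight this criterion.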
Morphological operation based dense houses extraction from DSM
NASA Astrophysics Data System (ADS)
Li, Y.; Zhu, L.; Tachibana, K.; Shimamura, H.
2014-08-01
This paper presents a method for reshaping a digital surface model (DSM) and extracting markers and masks of dense houses from it based on mathematical morphology (MM). In high-density housing areas, houses in a DSM are almost joined together, and most segmentation methods cannot completely separate them. We propose to first label the markers of the buildings and then segment them into masks by watershed. To avoid detecting more than one marker for a house, or no marker at all because of a higher neighbour, the DSM is morphologically reshaped. This is carried out by an MM operation using a disk-shaped structuring element (SE) of similar size to the houses. The sizes of the houses must be estimated before reshaping. A granulometry generated by opening-by-reconstruction of the nDSM is proposed to detect the scales of the off-terrain objects. It is a histogram of the global volume of the top hats of the convex objects over continuous scales. An obvious step change in the profile means that many objects of similar size occur at that scale. In the reshaping procedure, the slices of each object are derived by morphological filtering at the detected continuous scales and reconstructed in a pile as a dome. The markers are then detected on the basis of the domes.
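The core idea of suppressing sub-house-scale structure with a disk SE before marker labelling can be sketched with SciPy's grey-scale morphology. This is a minimal illustration with invented sizes and thresholds; the paper's granulometry-based scale estimation and watershed segmentation are omitted:

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Binary disk structuring element of the given pixel radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def house_markers(ndsm, house_radius, height_thresh):
    """One marker region per house: morphologically open the nDSM with
    a disk SE close to the house footprint size (the 'reshaping' step,
    which removes objects smaller than the SE), then label connected
    regions above a height threshold."""
    opened = ndimage.grey_opening(ndsm, footprint=disk(house_radius))
    markers, n = ndimage.label(opened > height_thresh)
    return markers, n
```

In the toy test below, a small roof clutter bump survives a raw height threshold (three components) but is removed by the opening, leaving exactly one marker per house.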
Clustering approaches to feature change detection
NASA Astrophysics Data System (ADS)
G-Michael, Tesfaye; Gunzburger, Max; Peterson, Janet
2018-05-01
The automated detection of changes occurring between multi-temporal images is of significant importance in a wide range of medical, environmental, safety, as well as many other settings. The usage of k-means clustering is explored as a means for detecting objects added to a scene. The silhouette score for the clustering is used to define the optimal number of clusters that should be used. For simple images having a limited number of colors, new objects can be detected by examining the change between the optimal number of clusters for the original and modified images. For more complex images, new objects may need to be identified by examining the relative areas covered by corresponding clusters in the original and modified images. Which method is preferable depends on the composition and range of colors present in the images. In addition to describing the clustering and change detection methodology of our proposed approach, we provide some simple illustrations of its application.
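The first strategy described above (compare the silhouette-optimal number of clusters before and after) can be sketched with a plain k-means and silhouette scoring. This is an illustrative toy with invented data, not the authors' code:

```python
import numpy as np

def kmeans(X, k, iters=50, restarts=5):
    """Plain Lloyd's algorithm with random restarts; returns labels."""
    best = None
    for seed in range(restarts):
        rng = np.random.default_rng(seed)
        C = X[rng.choice(len(X), k, replace=False)]   # init from data points
        for _ in range(iters):
            lab = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(1)
            C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                          for j in range(k)])
        inertia = ((X - C[lab]) ** 2).sum()
        if best is None or inertia < best[0]:
            best = (inertia, lab)
    return best[1]

def silhouette(X, lab):
    """Mean silhouette score: (b - a) / max(a, b) averaged over points."""
    D = np.linalg.norm(X[:, None] - X[None], axis=2)
    n, scores = len(X), []
    for i in range(n):
        same = lab == lab[i]
        a = D[i, same & (np.arange(n) != i)].mean() if same.sum() > 1 else 0.0
        b = min(D[i, lab == j].mean() for j in set(lab) if j != lab[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

def optimal_k(X, kmax=5):
    """Number of clusters (k >= 2) with the highest silhouette score."""
    return max(range(2, kmax + 1), key=lambda k: silhouette(X, kmeans(X, k)))
```

For a simple scene with few colors, an object added between acquisitions shows up as an increase in the silhouette-optimal cluster count, as in the toy test below.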
3D change detection - Approaches and applications
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Tian, Jiaojiao; Reinartz, Peter
2016-12-01
Due to the unprecedented technology development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image based, Light Detection and Ranging (LiDAR) based point clouds, Digital Elevation Models (DEM) and 3D city models, become more accessible than ever before. Change detection (CD) or time-series data analysis in 3D has gained great attention due to its capability of providing volumetric dynamics to facilitate more applications and provide more accurate results. The state-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of the traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek solutions in detecting and analyzing 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems in different processing stages and identify CD types based on the information used, being the geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environment, ecology and civil applications, etc. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks in algorithmic aspects of 3D CD.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2017-05-01
One urgent security problem is the detection of objects concealed inside the human body. Obviously, for safety reasons X-rays cannot be used widely and often for such object detection. For this purpose, we propose to use a THz camera and an IR camera. Here we continue to explore the possibility of using an IR camera to detect a temperature trace on a human body. In contrast to a passive THz camera, the IR camera does not reveal an object under clothing very clearly. This is, of course, a big disadvantage for a security solution based on an IR camera. To find possible ways of overcoming this disadvantage, we performed experiments with an IR camera produced by FLIR and developed a novel approach for computer processing of the captured images. It allows us to increase the effective temperature resolution of the IR camera and to enhance the effective sensitivity of the human eye to the displayed images. As a consequence, it becomes possible to see changes in body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, following the temperature trace on the skin caused by temperature changes inside the body. Further experiments observe the temperature trace of objects placed behind a thick overall. The demonstrated results are important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without X-rays.
Tactile discrimination and representations of texture, shape, and softness
NASA Technical Reports Server (NTRS)
Srinivasan, M. A.; Lamotte, R. H.
1991-01-01
We present here some of the salient results on the tactual discriminabilities of human subjects obtained through psychophysical experiments, and the associated peripheral neural codes obtained through electrophysiological recordings from monkey single nerve fibers. Humans can detect the presence of a 2 micron high single dot on a smooth glass plate stroked on the skin, based on the responses of Meissner type rapidly adapting fibers (RAs). They can also detect a 0.06 micron high grating on the plate, owing to the response of Pacinian corpuscle fibers. Among all the possible representations of the shapes of objects, the surface curvature distribution seems to be the most relevant for tactile sensing. Slowly adapting fibers respond to both the change and rate of change of curvature of the skin surface at the most sensitive spot in their receptive fields, whereas RAs respond only to the rate of change of curvature. Human discriminability of compliance of objects depends on whether the object has a deformable or rigid surface. When the surface is deformable, the spatial pressure distribution within the contact region is dependent on object compliance, and hence information from cutaneous mechanoreceptors is sufficient for discrimination of subtle differences in compliance. When the surface is rigid, kinesthetic information is necessary for discrimination, and the discriminability is much poorer than that for objects with deformable surfaces.
NASA Astrophysics Data System (ADS)
Kruse, Christian; Rottensteiner, Franz; Hoberg, Thorsten; Ziems, Marcel; Rebke, Julia; Heipke, Christian
2018-04-01
The aftermath of wartime attacks is often felt long after the war has ended, as numerous unexploded bombs may still exist in the ground. Typically, such areas are documented in so-called impact maps, which are based on the detection of bomb craters. This paper proposes a method for the automatic detection of bomb craters in aerial wartime images taken during the Second World War. The object model for the bomb craters is represented by ellipses. A probabilistic approach based on marked point processes determines the most likely configuration of objects within the scene. New object configurations are created by randomly adding objects to and removing them from the current configuration, changing their positions, and modifying the ellipse parameters. Each configuration is evaluated using an energy function: high gradient magnitudes along the border of an ellipse are favored and overlapping ellipses are penalized. Reversible Jump Markov Chain Monte Carlo sampling in combination with simulated annealing provides the global energy optimum, which describes the conformance with a predefined model. To generate the impact map, a probability map is created from the automatic detections via kernel density estimation. By setting a threshold, areas around the detections are classified as contaminated or uncontaminated sites, respectively. Our results show the general potential of the method for the automatic detection of bomb craters and the automated generation of an impact map from a heterogeneous image stock.
Adult Age Differences in Categorization and Multiple-Cue Judgment
ERIC Educational Resources Information Center
Mata, Rui; von Helversen, Bettina; Karlsson, Linnea; Cupper, Lutz
2012-01-01
We often need to infer unknown properties of objects from observable ones, just like detectives must infer guilt from observable clues and behavior. But how do inferential processes change with age? We examined young and older adults' reliance on rule-based and similarity-based processes in an inference task that can be considered either a…
Kushki, Azadeh; Khan, Ajmal; Brian, Jessica; Anagnostou, Evdokia
2015-03-01
Anxiety is associated with physiological changes that can be noninvasively measured using inexpensive and wearable sensors. These changes provide an objective and language-free measure of arousal associated with anxiety, which can complement treatment programs for clinical populations who have difficulty with introspection, communication, and emotion recognition. This motivates the development of automatic methods for detection of anxiety-related arousal using physiology signals. While several supervised learning methods have been proposed for this purpose, these methods require regular collection and updating of training data and are, therefore, not suitable for clinical populations, where obtaining labelled data may be challenging due to impairments in communication and introspection. In this context, the objective of this paper is to develop an unsupervised and real-time arousal detection algorithm. We propose a learning framework based on the Kalman filtering theory for detection of physiological arousal based on cardiac activity. The performance of the system was evaluated on data obtained from a sample of children with autism spectrum disorder. The results indicate that the system can detect anxiety-related arousal in these children with sensitivity and specificity of 99% and 92%, respectively. Our results show that the proposed method can detect physiological arousal associated with anxiety with high accuracy, providing support for technical feasibility of augmenting anxiety treatments with automatic detection techniques. This approach can ultimately lead to more effective anxiety treatment for a larger and more diverse population.
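The paper's Kalman-filtering framework is not specified in detail here. As a hedged sketch of the general idea, a one-dimensional random-walk Kalman filter can track the baseline cardiac signal and flag outlying innovations (measurements far from the prediction) as candidate arousal events; all variances and thresholds below are assumed values for illustration:

```python
import numpy as np

def detect_arousal(hr, q=0.05, r=4.0, z_thresh=3.0):
    """Track resting heart rate with a random-walk Kalman filter and
    flag samples whose innovation is an outlier, a stand-in for
    arousal-related cardiac change.  q: process variance,
    r: measurement variance, z_thresh: flagging threshold in sigmas."""
    x, p = hr[0], 1.0                   # state estimate and its variance
    flags = []
    for z in hr:
        p = p + q                       # predict (random-walk baseline)
        innov = z - x                   # innovation
        s = p + r                       # innovation variance
        flagged = abs(innov) / np.sqrt(s) > z_thresh
        flags.append(flagged)
        if not flagged:                 # update baseline on calm samples only
            k = p / s
            x = x + k * innov
            p = (1 - k) * p
    return np.array(flags)
```

Because the filter adapts its baseline online and unflagged samples continually refresh it, no labelled training data is needed, which is the unsupervised property motivated above.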
Analysis of image heterogeneity using 2D Minkowski functionals detects tumor responses to treatment.
Larkin, Timothy J; Canuto, Holly C; Kettunen, Mikko I; Booth, Thomas C; Hu, De-En; Krishnan, Anant S; Bohndiek, Sarah E; Neves, André A; McLachlan, Charles; Hobson, Michael P; Brindle, Kevin M
2014-01-01
The acquisition of ever increasing volumes of high resolution magnetic resonance imaging (MRI) data has created an urgent need to develop automated and objective image analysis algorithms that can assist in determining tumor margins, diagnosing tumor stage, and detecting treatment response. We have shown previously that Minkowski functionals, which are precise morphological and structural descriptors of image heterogeneity, can be used to enhance the detection, in T1-weighted images, of a targeted Gd(3+)-chelate-based contrast agent for detecting tumor cell death. We have used Minkowski functionals here to characterize heterogeneity in T2-weighted images acquired before and after drug treatment, and obtained without contrast agent administration. We show that Minkowski functionals can be used to characterize the changes in image heterogeneity that accompany treatment of tumors with a vascular disrupting agent, combretastatin A4-phosphate, and with a cytotoxic drug, etoposide. Parameterizing changes in the heterogeneity of T2-weighted images can be used to detect early responses of tumors to drug treatment, even when there is no change in tumor size. The approach provides a quantitative and therefore objective assessment of treatment response that could be used with other types of MR image and also with other imaging modalities. Copyright © 2013 Wiley Periodicals, Inc.
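The three 2D Minkowski functionals (area, perimeter, Euler characteristic) of a thresholded binary image can be computed by simple pixel counting. The sketch below uses the closed-unit-square pixel model and inclusion-exclusion; it is illustrative of the descriptors, not the authors' implementation:

```python
import numpy as np

def minkowski_2d(img):
    """Area, perimeter and Euler characteristic of a binary image,
    treating each foreground pixel as a closed unit square."""
    img = img.astype(bool)
    P = img.sum()                                   # foreground pixels
    H = (img[:, :-1] & img[:, 1:]).sum()            # horizontal neighbour pairs
    V = (img[:-1, :] & img[1:, :]).sum()            # vertical neighbour pairs
    Q = (img[:-1, :-1] & img[:-1, 1:] &
         img[1:, :-1] & img[1:, 1:]).sum()          # fully-set 2x2 blocks
    area = int(P)
    perimeter = int(4 * P - 2 * (H + V))            # exposed unit edges
    euler = int(P - H - V + Q)                      # inclusion-exclusion
    return area, perimeter, euler
```

For heterogeneity analysis these functionals would be tracked as a function of the grey-level threshold; a shape with a hole, like the ring in the test below, is distinguished from a solid blob by its Euler characteristic alone.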
NASA Astrophysics Data System (ADS)
Kranz, Olaf; Lang, Stefan; Schoepfer, Elisabeth
2017-09-01
Mining natural resources serves fundamental societal needs or commercial interests, but it may well turn into a driver of violence and regional instability. In this study, very high resolution (VHR) optical stereo satellite data are analysed to monitor processes and changes in one of the largest artisanal and small-scale mining sites in the Democratic Republic of the Congo, which is among the world's wealthiest countries in exploitable minerals. To identify the subtle structural changes, the applied methodological framework employs object-based change detection (OBCD) based on optical VHR data and generated digital surface models (DSM). The results show that the DSM-based change detection approach enhances the assessment gained from sole 2D analyses by providing valuable information about changes in surface structure or volume. Land cover changes as analysed by OBCD reveal an increase in bare soil area by a rate of 47% between April 2010 and September 2010, followed by a significant decrease of 47.5% until March 2015. Beyond that, DSM differencing enabled the characterisation of small-scale features such as pits and excavations. The presented Earth observation (EO)-based monitoring of mineral exploitation aims at a better understanding of the relations between resource extraction and conflict, thus providing relevant information for potential mitigation strategies and peace building.
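The DSM-differencing step can be sketched directly: subtract co-registered surface models, mask changes below a noise threshold, and integrate the signed elevation changes into cut (excavation) and fill (accumulation) volumes. The cell size and threshold below are invented, and real DSMs would need co-registration first:

```python
import numpy as np

def dsm_change(dsm_old, dsm_new, cell_area=1.0, min_dz=0.5):
    """Per-cell elevation change between two co-registered DSMs,
    a change mask, and the excavated/accumulated volumes.
    cell_area: ground area of one raster cell in m^2 (assumed);
    min_dz: minimum elevation change treated as real, in m."""
    dz = dsm_new - dsm_old
    changed = np.abs(dz) >= min_dz                      # suppress DSM noise
    cut = -dz[changed & (dz < 0)].sum() * cell_area     # excavation, m^3
    fill = dz[changed & (dz > 0)].sum() * cell_area     # accumulation, m^3
    return dz, changed, cut, fill
```

On the toy rasters in the test, a 3x3 pit of depth 2 m and a 2x2 mound of height 1 m are recovered as 18 m^3 of cut and 4 m^3 of fill, while sub-threshold noise is ignored.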
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun.
A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field-of-view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
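The PCA-filtering idea can be illustrated on a toy frame stack: once the frames are registered to the celestial frame of reference, the static star pattern and common gain variations concentrate in the leading principal component, and projecting it out leaves the object that moves relative to the stars. All scene parameters below are invented, and this sketch omits the co-adding and state-vector steps:

```python
import numpy as np

def remove_static_background(frames, n_components=1):
    """Subtract the leading PCA component(s) from a frame stack.
    frames: (T, H, W) array registered to the celestial frame, so the
    star field is static; the top component captures that static
    pattern together with common gain/pointing variations."""
    T, H, W = frames.shape
    X = frames.reshape(T, -1)
    Xc = X - X.mean(0)                              # remove the mean frame
    # right-singular vectors span the patterns common to all frames
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]
    Xf = Xc - (Xc @ V.T) @ V                        # project out background
    return Xf.reshape(T, H, W)
```

In the test, a dim source drifting one pixel per frame across a bright star field with frame-to-frame gain fluctuations dominates the residual stack after filtering.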
Object Detection Applied to Indoor Environments for Mobile Robot Navigation.
Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón
2016-07-28
To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.
NASA Astrophysics Data System (ADS)
Guidang, Excel Philip B.; Llanda, Christopher John R.; Palaoag, Thelma D.
2018-03-01
Face detection as a strategy for controlling a multimedia instructional material was implemented in this study. Specifically, the study achieved the following objectives: 1) developed a face detection application in Python that controls an embedded mother-tongue-based instructional material configured for face recognition; 2) determined the perceptions of the students using Mutt Susan's student app review rubric. The study concludes that the face detection technique is effective in controlling an electronic instructional material and can change the way students interact with such material. 90% of the students rated the application as a great app and 10% rated it as good.
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for the Game Changing Program Smart Book Release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground-based damage detection and inspection system.
Visual Salience in the Change Detection Paradigm: The Special Role of Object Onset
ERIC Educational Resources Information Center
Cole, Geoff G.; Kentridge, Robert W.; Heywood, Charles A.
2004-01-01
The relative efficacy with which appearance of a new object orients visual attention was investigated. At issue is whether the visual system treats onset as being of particular importance or only 1 of a number of stimulus events equally likely to summon attention. Using the 1-shot change detection paradigm, the authors compared detectability of…
Zhao, Nan; Chen, Wenfeng; Xuan, Yuming; Mehler, Bruce; Reimer, Bryan; Fu, Xiaolan
2014-01-01
The 'looked-but-failed-to-see' phenomenon is crucial to driving safety. Previous research utilising change detection tasks related to driving has reported inconsistent effects of driver experience on the ability to detect changes in static driving scenes. Reviewing these conflicting results, we suggest that drivers' increased ability to detect changes will only appear when the task requires a pattern of visual attention distribution typical of actual driving. By adding a distant fixation point on the road image, we developed a modified change blindness paradigm and measured detection performance of drivers and non-drivers. Drivers performed better than non-drivers only in scenes with a fixation point. Furthermore, the experience effect interacted with the location of the change and the relevance of the change to driving. These results suggest that learning associated with driving experience reflects increased skill in the efficient distribution of visual attention across both the central focus area and peripheral objects. This article provides an explanation for the previously conflicting reports of driving experience effects in change detection tasks. We observed a measurable benefit of experience in static driving scenes, using a modified change blindness paradigm. These results have translational opportunities for picture-based training and testing tools to improve driver skill.
Objective response detection in an electroencephalogram during somatosensory stimulation.
Simpson, D M; Tierra-Criollo, C J; Leite, R T; Zayen, E J; Infantosi, A F
2000-06-01
Techniques for objective response detection aim to identify the presence of evoked potentials based purely on statistical principles. They have been shown to be potentially more sensitive than the conventional approach of subjective evaluation by experienced clinicians and could be of great clinical use. Three such techniques to detect changes in an electroencephalogram (EEG) synchronous with the stimuli, namely, magnitude-squared coherence (MSC), the phase-synchrony measure (PSM) and the spectral F test (SFT) were applied to EEG signals of 12 normal subjects under conventional somatosensory pulse stimulation to the tibial nerve. The SFT, which uses only the power spectrum, showed the poorest performance, while the PSM, based only on the phase spectrum, gave results almost as good as those of the MSC, which uses both phase and power spectra. With the latter two techniques, stimulus responses were evident in the frequency range of 20-80 Hz in all subjects after 200 stimuli (5 Hz stimulus frequency), whereas for visual recognition at least 500 stimuli are usually applied. Based on these results and on simulations, the phase-based techniques appear promising for the automated detection and monitoring of somatosensory evoked potentials.
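The magnitude-squared coherence (MSC) statistic used above can be sketched for a single EEG channel. This is a minimal illustration, not the authors' implementation: the epoch layout (one row per stimulus-synchronized epoch) and function name are assumptions. Phase-locked evoked activity yields MSC near 1 at the response frequencies, while background EEG yields values near 1/M for M epochs.

```python
import numpy as np

def msc(epochs):
    """Magnitude-squared coherence between a periodic stimulus and the EEG.

    epochs: (M, N) array of M stimulus-synchronized EEG epochs of N samples.
    Returns MSC per frequency bin, bounded in [0, 1] by the Cauchy-Schwarz
    inequality: values near 1 indicate a phase-locked evoked response.
    """
    Y = np.fft.rfft(epochs, axis=1)            # spectrum of each epoch
    M = epochs.shape[0]
    coherent = np.abs(Y.sum(axis=0)) ** 2      # power of the phase-locked average
    total = M * (np.abs(Y) ** 2).sum(axis=0)   # total power across epochs
    return coherent / total
```

With a sinusoidal "response" buried in noise across 200 epochs, the MSC peaks sharply at the response bin while staying near zero elsewhere, mirroring how the technique separates stimulus-synchronous activity from ongoing EEG.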
Research on Daily Objects Detection Based on Deep Neural Network
NASA Astrophysics Data System (ADS)
Ding, Sheng; Zhao, Kun
2018-03-01
With the rapid development of deep learning, great breakthroughs have been made in the field of object detection. In this article, deep learning algorithms are applied to the detection of daily objects, and some progress has been made in this direction. Compared with traditional object detection methods, the deep-learning-based daily object detection method is faster and more accurate. The main research work of this article: 1. collect a small data set of daily objects; 2. build different object detection models in the TensorFlow framework and train them on this data set; 3. improve the training process and the model's performance by fine-tuning the model parameters.
NASA Technical Reports Server (NTRS)
Frederick, J. E.; Heath, D. F.; Cebula, R. P.
1986-01-01
The scientific objective of unambiguously detecting subtle global trends in upper stratospheric ozone requires that one maintains a thorough understanding of the satellite-based remote sensors intended for this task. The instrument now in use for long term ozone monitoring is the SBUV/2 being flown on NOAA operational satellites. A critical activity in the data interpretation involves separating small changes in measurement sensitivity from true atmospheric variability. By defining the specific issues that must be addressed and presenting results derived early in the mission of the first SBUV/2 flight model, this work serves as a guide to the instrument investigations that are essential in the attempt to detect long-term changes in the ozone layer.
Sanguinetti-Scheck, Juan Ignacio; Pedraja, Eduardo Federico; Cilleruelo, Esteban; Migliaro, Adriana; Aguilera, Pedro; Caputi, Angel Ariel; Budelli, Ruben
2011-01-01
Active electroreception in Gymnotus omarorum is a sensory modality that perceives the changes that nearby objects cause in a self-generated electric field. The field is emitted as repetitive stereotyped pulses that stimulate skin electroreceptors. Unlike mormyriform electric fish, gymnotiforms have an electric organ distributed along a large portion of the body, which fires sequentially. As a consequence, the shape and amplitude of both the generated electric field and the image of objects change during the electric pulse. To study how G. omarorum constructs a perceptual representation, we developed a computational model that allows the determination of the self-generated field and the electric image. We verify and use the model as a tool to explore image formation in diverse experimental circumstances. We show how the electric images of objects change in shape as a function of time and position relative to the fish's body. We propose a theoretical framework for the organization of the different perceptual tasks performed by electroreception: 1) At the head region, where the electrosensory mosaic presents an electric fovea, the field polarizing nearby objects is coherent and collimated. This favors high-resolution sampling of images of small objects and perception of electric color. In addition, the high sensitivity of the fovea allows the detection and tracking of large faraway objects in rostral regions. 2) In the trunk and tail region, a multiplicity of sources illuminates different regions of the object, allowing the characterization of the shape and position of a large object. In this region, electroreceptors are of a unique type, and capacitive detection should be based on the pattern of the afferent response. 3) Far from the fish, active electroreception is not possible, but the collimated field is suitable for electrocommunication and for detection of large objects at the sides and caudally. PMID:22096578
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is therefore an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Visual sensor-based abnormal event detection using shape feature variation and 3-D trajectories is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling aside, from normal activities. PMID:22368486
Schwarzkopf, Dietrich S.; Bahrami, Bahador; Fleming, Stephen M.; Jackson, Ben M.; Goch, Tristam J. C.; Saygin, Ayse P.; Miller, Luke E.; Pappa, Katerina; Pavisic, Ivanna; Schade, Rachel N.; Noyce, Alastair J.; Crutch, Sebastian J.; O'Keeffe, Aidan G.; Schrag, Anette E.; Morris, Huw R.
2018-01-01
ABSTRACT Background: People with Parkinson's disease (PD) who develop visuo‐perceptual deficits are at higher risk of dementia, but we lack tests that detect subtle visuo‐perceptual deficits and can be performed by untrained personnel. Hallucinations are associated with cognitive impairment and typically involve perception of complex objects. Changes in object perception may therefore be a sensitive marker of visuo‐perceptual deficits in PD. Objective: We developed an online platform to test visuo‐perceptual function. We hypothesised that (1) visuo‐perceptual deficits in PD could be detected using online tests, (2) object perception would be preferentially affected, and (3) these deficits would be caused by changes in perception rather than response bias. Methods: We assessed 91 people with PD and 275 controls. Performance was compared using classical frequentist statistics. We then fitted a hierarchical Bayesian signal detection theory model to a subset of tasks. Results: People with PD were worse than controls at object recognition, showing no deficits in other visuo‐perceptual tests. Specifically, they were worse at identifying skewed images (P < .0001); at detecting hidden objects (P = .0039); at identifying objects in peripheral vision (P < .0001); and at detecting biological motion (P = .0065). In contrast, people with PD were not worse at mental rotation or subjective size perception. Using signal detection modelling, we found this effect was driven by change in perceptual sensitivity rather than response bias. Conclusions: Online tests can detect visuo‐perceptual deficits in people with PD, with object recognition particularly affected. Ultimately, visuo‐perceptual tests may be developed to identify at‐risk patients for clinical trials to slow PD dementia. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473691
Visual-Spatial Attention Aids the Maintenance of Object Representations in Visual Working Memory
Williams, Melonie; Pouget, Pierre; Boucher, Leanne; Woodman, Geoffrey F.
2013-01-01
Theories have proposed that the maintenance of object representations in visual working memory is aided by a spatial rehearsal mechanism. In this study, we used two different approaches to test the hypothesis that overt and covert visual-spatial attention mechanisms contribute to the maintenance of object representations in visual working memory. First, we tracked observers’ eye movements while remembering a variable number of objects during change-detection tasks. We observed that during the blank retention interval, participants spontaneously shifted gaze to the locations that the objects had occupied in the memory array. Next, we hypothesized that if attention mechanisms contribute to the maintenance of object representations, then drawing attention away from the object locations during the retention interval would impair object memory during these change-detection tasks. Supporting this prediction, we found that attending to the fixation point in anticipation of a brief probe stimulus during the retention interval reduced change-detection accuracy even on the trials in which no probe occurred. These findings support models of working memory in which visual-spatial selection mechanisms contribute to the maintenance of object representations. PMID:23371773
LISA: Astrophysics Out to z Approximately 10 with Low-Frequency Gravitational Waves
NASA Technical Reports Server (NTRS)
Stebbins, Robin T.
2008-01-01
This viewgraph presentation reviews the Laser Interferometer Space Antenna (LISA). LISA is a joint ESA-NASA project to design, build and operate a space-based gravitational wave detector. The 5-million-kilometer-long detector will consist of three spacecraft orbiting the Sun in a triangular formation. Space-time strains induced by gravitational waves are detected by measuring changes in the separation of fiducial masses with laser interferometry. LISA is expected to detect signals from merging massive black holes, compact stellar objects spiraling into supermassive black holes in galactic nuclei, thousands of close binaries of compact objects in the Milky Way, and possible backgrounds of cosmological origin.
Vibration-based monitoring to detect mass changes in satellites
NASA Astrophysics Data System (ADS)
Maji, Arup; Vernon, Breck
2012-04-01
Vibration-based structural health monitoring could be a useful means of determining the health and safety of space structures. A particular concern is the possibility of a foreign object attaching itself to a satellite in orbit for adverse reasons. A frequency response analysis was used to determine changes in the mass and moment of inertia of the space structure based on a change in the natural frequencies of the structure or its components. Feasibility studies were first conducted on a 7 in x 19 in aluminum plate with various boundary conditions. The effect of environmental conditions on the frequency response was determined. The baseline frequency response for the plate was then used as the basis for detecting the addition, and possibly the location, of added masses on the plate. The test results were compared to both analytical solutions and finite element models created in SAP2000. The testing was subsequently expanded to aluminum alloy satellite panels and a mock satellite with dummy payloads. Statistical analysis was conducted on variations of frequency due to added mass and thermal changes to determine the threshold of added mass that can be detected.
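The core idea of inferring added mass from a natural-frequency shift can be illustrated with the simplest possible model. This is a sketch under an assumption the paper does not make explicit: the monitored mode behaves like a single-degree-of-freedom oscillator with unchanged stiffness, so f = (1/2π)√(k/m) and the post-change mass is m1 = m0·(f0/f1)². The function name is hypothetical.

```python
import math

def added_mass(m0, f0, f1):
    """Estimate attached mass from a natural-frequency drop.

    Assumes a single-degree-of-freedom oscillator with constant stiffness k:
    f = (1/(2*pi)) * sqrt(k/m)  =>  m1 = m0 * (f0/f1)**2.

    m0 : baseline modal mass
    f0 : baseline natural frequency
    f1 : natural frequency after the change (f1 <= f0, since mass lowers f)
    """
    if f1 <= 0 or f1 > f0:
        raise ValueError("expect 0 < f1 <= f0 (added mass lowers the frequency)")
    return m0 * (f0 / f1) ** 2 - m0
```

For example, a 10 kg component whose mode drops from 100 Hz to 100/√1.1 ≈ 95.35 Hz would, under this idealization, carry about 1 kg of added mass. Real satellite panels have many coupled modes, which is why the paper resorts to finite element models and statistical thresholds instead.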
Building Change Detection in Very High Resolution Satellite Stereo Image Time Series
NASA Astrophysics Data System (ADS)
Tian, J.; Qin, R.; Cerra, D.; Reinartz, P.
2016-06-01
There is an increasing demand for robust methods for urban sprawl monitoring. The steadily increasing number of high resolution and multi-view sensors allows producing datasets with high temporal and spatial resolution; however, less effort has been dedicated to employing very high resolution (VHR) satellite image time series (SITS) to monitor changes in buildings with higher accuracy. In addition, these VHR data are often acquired from different sensors. The objective of this research is to propose a robust time-series data analysis method for VHR stereo imagery. Firstly, the spatial-temporal information of the stereo imagery and the Digital Surface Models (DSMs) generated from them are combined, and building probability maps (BPM) are calculated for all acquisition dates. In the second step, an object-based change analysis is performed based on the derivative features of the BPM sets. The change consistency between the object level and pixel level is checked to remove outlier pixels. Results are assessed on six pairs of VHR satellite images acquired within a time span of 7 years. The evaluation results demonstrate the efficiency of the proposed method.
NASA Astrophysics Data System (ADS)
Yang, C. H.; Kenduiywo, B. K.; Soergel, U.
2016-06-01
Persistent Scatterer Interferometry (PSI) is a technique to detect a network of extracted persistent scatterer (PS) points which feature temporal phase stability and a strong radar signal throughout a time series of SAR images. The small surface deformations at such PS points are estimated. PSI works particularly well in monitoring human settlements because the regular substructures of man-made objects give rise to a large number of PS points. If such structures and/or substructures substantially alter or even vanish due to a big change such as construction, their PS points are discarded without additional exploration during the standard PSI procedure. Such rejected points are called big change (BC) points. On the other hand, incoherent change detection (ICD) relies on local comparison of multi-temporal images (e.g. image difference, image ratio) to highlight scene modifications at the larger-size rather than detail level. However, image noise inevitably degrades ICD accuracy. We propose a change detection approach based on PSI to synergize the benefits of PSI and ICD. PS points are extracted by the PSI procedure. A local change index is introduced to quantify the probability of a big change for each point. We propose an automatic thresholding method that adopts this change index to extract BC points, along with an indication of the period in which they emerge. In the end, PS and BC points are integrated into a change detection image. Our method is tested at a site north of the Berlin main station, where steady, demolished, and erected building substructures are successfully detected. The results are consistent with ground truth derived from a time series of aerial images provided by Google Earth. In addition, we apply our technique to traffic infrastructure, business district, and sports playground monitoring.
Ti, Chaoyang; Ho-Thanh, Minh-Tri; Wen, Qi; Liu, Yuxiang
2017-10-13
Position detection with high accuracy is crucial for force calibration of optical trapping systems. Most existing position detection methods require high-numerical-aperture objective lenses, which are bulky, expensive, and difficult to miniaturize. Here, we report an affordable objective-lens-free, fiber-based position detection scheme with 2 nm spatial resolution and 150 MHz bandwidth. This fiber-based detection mechanism enables simultaneous trapping and force measurements in a compact fiber optical tweezers system. In addition, we achieved more reliable signal acquisition with less distortion compared with objective-based position detection methods, thanks to the light guiding in optical fibers and the small distance between the fiber tips and the trapped particle. As a demonstration of the fiber-based detection, we used the fiber optical tweezers to apply a force on a cell membrane and simultaneously measure the cellular response.
ERIC Educational Resources Information Center
Yang, Cheng-Ta
2011-01-01
Change detection requires perceptual comparison and decision processes on different features of multiattribute objects. How relative salience between two feature-changes influences the processes has not been addressed. This study used the systems factorial technology to investigate the processes when detecting changes in a Gabor patch with visual…
The Development of Change Detection
ERIC Educational Resources Information Center
Shore, David I.; Burack, Jacob A.; Miller, Danny; Joseph, Shari; Enns, James T.
2006-01-01
Changes to a scene often go unnoticed if the objects of the change are unattended, making change detection an index of where attention is focused during scene perception. We measured change detection in school-age children and young adults by repeatedly alternating two versions of an image. To provide an age-fair assessment we used a bimanual…
Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.
2013-01-01
Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
Saliency predicts change detection in pictures of natural scenes.
Wright, Michael J
2005-01-01
It has been proposed that the visual system encodes the salience of objects in the visual field in an explicit two-dimensional map that guides visual selective attention. Experiments were conducted to determine whether salience measurements applied to regions of pictures of outdoor scenes could predict the detection of changes in those regions. To obtain a quantitative measure of change detection, observers located changes in pairs of colour pictures presented across an interstimulus interval (ISI). Salience measurements were then obtained from different observers for image change regions using three independent methods, and all were positively correlated with change detection. Factor analysis extracted a single saliency factor that accounted for 62% of the variance contained in the four measures. Finally, estimates of the magnitude of the image change in each picture pair were obtained, using nine separate visual filters representing low-level vision features (luminance, colour, spatial frequency, orientation, edge density). None of the feature outputs was significantly associated with change detection or saliency. On the other hand it was shown that high-level (structural) properties of the changed region were related to saliency and to change detection: objects were more salient than shadows and more detectable when changed.
Research on moving object detection based on frog's eyes
NASA Astrophysics Data System (ADS)
Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan
2008-12-01
On the basis of the object information processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for object information processing based on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism that includes pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color or shape; experiments indicate that moving objects can be detected even against cluttered backgrounds. A moving-object detection electronic model imitating biological vision based on the frog's eye is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing, video information is captured, processed, and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
Development of a novel polymeric fiber-optic magnetostrictive metal detector.
Hua, Wei-Shu; Hooks, Joshua Rosenberg; Wu, Wen-Jong; Wang, Wei-Chih
2010-01-01
The purpose of this paper is the development of a novel polymeric fiber-optic magnetostrictive metal detector, using a fiber-optic Mach-Zehnder interferometer and a polymeric magnetostrictive material. Metal detection is based on the strain-induced optical path length change stemming from the ferromagnetic material introduced into the magnetic field. Different metal objects produced markedly different optical phase shifts. In this paper, preliminary results on the detection of different metal materials are discussed.
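The Mach-Zehnder readout described above converts an optical path length change into a measurable phase shift via Δφ = 2πnΔL/λ. The sketch below illustrates this relation; the effective refractive index value and function name are assumptions for illustration, not parameters from the paper.

```python
import math

def phase_shift(delta_L, wavelength, n=1.46):
    """Phase shift (radians) in a fiber Mach-Zehnder interferometer caused
    by a strain-induced geometric path length change delta_L (metres).

    delta_phi = 2 * pi * n * delta_L / wavelength
    n = 1.46 is an assumed effective refractive index for silica fiber.
    """
    return 2 * math.pi * n * delta_L / wavelength
```

A geometric elongation of λ/n (about 1.06 µm at a 1550 nm source) produces one full 2π fringe, which is why even the small magnetostrictive strains induced by a nearby ferromagnetic object are interferometrically detectable.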
Region-Based Building Rooftop Extraction and Change Detection
NASA Astrophysics Data System (ADS)
Tian, J.; Metzlaff, L.; d'Angelo, P.; Reinartz, P.
2017-09-01
Automatic extraction of building changes is important for many applications, such as disaster monitoring and city planning. Although a lot of research work is available based on 2D as well as 3D data, improvements in accuracy and efficiency are still needed. The introduction of digital surface models (DSMs) has strongly improved the accuracy of building change detection. In this paper, a post-classification approach is proposed for building change detection using satellite stereo imagery. Firstly, DSMs are generated from satellite stereo imagery and further refined by using a segmentation result obtained from the Sobel gradients of the panchromatic image. Besides the refined DSMs, the panchromatic image and the pansharpened multispectral image are used as input features for mean-shift segmentation. The DSM is used to calculate the nDSM, out of which the initial building candidate regions are extracted. The candidate mask is further refined by morphological filtering and by excluding shadow regions. Following this, all segments that overlap with a building candidate region are determined. A building-oriented segment merging procedure is introduced to generate a final building rooftop mask. As the last step, object-based change detection is performed by directly comparing the building rooftops extracted from the pre- and post-event imagery and by fusing the change indicators with the rooftop region map. A quantitative and qualitative assessment of the proposed approach is provided using WorldView-2 satellite data from Istanbul, Turkey.
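The nDSM-based candidate extraction step can be sketched as follows. This is a minimal illustration, not the paper's pipeline: it assumes a terrain model (DTM) is available for normalization, and the function name and 2.5 m threshold are hypothetical choices.

```python
import numpy as np

def building_candidates(dsm, dtm, min_height=2.5):
    """Initial building-candidate mask from a normalized DSM.

    nDSM = DSM - DTM isolates above-ground objects; pixels rising more
    than min_height metres (an assumed threshold) above the terrain
    become candidates. The real pipeline further refines this mask with
    morphological filtering and shadow exclusion.
    """
    ndsm = dsm - dtm                 # height above terrain per pixel
    return ndsm > min_height         # boolean candidate mask
```

Thresholding the nDSM rather than the raw DSM is the key design choice: it removes terrain relief, so a low house on a hill and a tall building in a valley are judged by the same above-ground height criterion.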
Dissociable loss of the representations in visual short-term memory.
Li, Jie
2016-01-01
The present study investigated the manner in which information in visual short-term memory (VSTM) is lost. Participants memorized four items, one of which was later given higher priority by a retro-cue. Participants were then required to detect a possible change, either large or small, that occurred to one of the items. The results showed that detection performance for small changes to the uncued items was poorer than for the cued item, yet large changes to any of the four memory items could be detected perfectly, indicating that the uncued representations lost some detailed information yet still retained some basic features in VSTM. The present study suggests that after being encoded into VSTM, information is not lost in an object-based manner; rather, the features of an item remain dissociable, so that they can be lost separately.
Al-Nawashi, Malek; Al-Hazaimeh, Obaida M; Saraee, Mohamad
2017-01-01
Abnormal activity detection plays a crucial role in surveillance applications, and a surveillance system that can perform robustly in an academic environment has become an urgent need. In this paper, we propose a novel framework for an automatic real-time video-based surveillance system which can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment. To develop our system, we have divided the work into three phases: a preprocessing phase, an abnormal human activity detection phase, and a content-based image retrieval phase. For moving object detection, we used the temporal-differencing algorithm and then located the motion regions using a Gaussian function. Furthermore, a shape model based on the OMEGA equation was used as a filter for the detected objects (i.e., human and non-human). For object activity analysis, we evaluated and analyzed the human activities of the detected objects. We classified the human activities into two groups, normal activities and abnormal activities, based on a support vector machine. The machine then provides an automatic warning in case of abnormal human activities. It also embeds a method to retrieve the detected object from the database for object recognition and identification using content-based image retrieval. Finally, a software-based simulation using MATLAB was performed, and the results of the conducted experiments showed an excellent surveillance system that can simultaneously perform tracking, semantic scene learning, and abnormality detection in an academic environment with no human intervention.
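The temporal-differencing step used for moving object detection can be sketched in a few lines. This is a generic illustration of the technique (the paper's implementation is in MATLAB); the function name and the threshold value are assumptions.

```python
import numpy as np

def temporal_difference_mask(prev, curr, thresh=25):
    """Temporal differencing for motion detection.

    Flags pixels whose grayscale intensity changed by more than `thresh`
    (an assumed tuning value) between two consecutive frames. Casting to
    a signed type avoids uint8 wrap-around when subtracting.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh
```

The resulting boolean mask marks candidate motion regions; a pipeline like the one described above would then smooth and localize these regions (e.g. with a Gaussian) and filter the blobs by shape before classifying the activity.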
The Effect of Consistency on Short-Term Memory for Scenes.
Gong, Mingliang; Xuan, Yuming; Xu, Xinwen; Fu, Xiaolan
2017-01-01
Which is more detectable, the change of a consistent or an inconsistent object in a scene? This question has been debated for decades. We noted that the change of objects in scenes might simultaneously be accompanied with gist changes. In the present study we aimed to examine how the alteration of gist, as well as the consistency of the changed objects, modulated change detection. In Experiment 1, we manipulated the semantic content by either keeping or changing the consistency of the scene. Results showed that the changes of consistent and inconsistent scenes were equally detected. More importantly, the changes were more accurately detected when scene consistency changed than when the consistency remained unchanged, regardless of the consistency of the memory scenes. A phase-scrambled version of stimuli was adopted in Experiment 2 to decouple the possible confounding effect of low-level factors. The results of Experiment 2 demonstrated that the effect found in Experiment 1 was indeed due to the change of high-level semantic consistency rather than the change of low-level physical features. Together, the study suggests that the change of consistency plays an important role in scene short-term memory, which might be attributed to the sensitivity to the change of semantic content. PMID:29046654
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit this potential, versatile solutions are needed, but the majority of those in the literature work only under specific conditions regarding the scenario, the characteristics of the moving objects or the aircraft movements. To overcome these limitations, we propose a novel approach based on a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false-alarm rate and in terms of accuracy in the estimation of the position and velocity of the objects. In addition, for each frame, the detection and tracking map was generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
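The pipeline above starts from a coarse detection map built with a fast local statistic. A minimal sketch of such a stage, assuming a simple local z-score detector (the paper's actual statistic is not specified here; the function name and parameters are illustrative):

```python
import numpy as np

def coarse_detection_map(frame, win=5, k=4.0):
    """Coarse detection map: flag pixels exceeding the local mean by
    k local standard deviations, estimated over a win x win window."""
    pad = win // 2
    padded = np.pad(frame.astype(float), pad, mode="reflect")
    h, w = frame.shape
    out = np.zeros((h, w), dtype=bool)
    # Direct loop for clarity; cumulative-sum sliding windows would be
    # the real-time-friendly way to get the same local statistics.
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + win, j:j + win]
            mu, sigma = patch.mean(), patch.std() + 1e-9
            out[i, j] = (frame[i, j] - mu) / sigma > k
    return out
```

A real-time implementation would replace the explicit loops with integral-image (cumulative-sum) computation of the local mean and variance.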
A Space Object Detection Algorithm using Fourier Domain Likelihood Ratio Test
NASA Astrophysics Data System (ADS)
Becker, D.; Cain, S.
Space object detection is of great importance in the highly dependent yet competitive and congested space domain. Detection algorithms play a crucial role in fulfilling the detection component of the situational awareness mission to detect, track, characterize and catalog unknown space objects. Many current space detection algorithms use a matched filter or a spatial correlator to make a detection decision at a single pixel point of a spatial image, based on the assumption that the data follow a Gaussian distribution. This paper explores the potential for detection performance advantages when operating in the Fourier domain of long-exposure images of small and/or dim space objects from ground-based telescopes. A binary hypothesis test is developed based on the joint probability distribution function of the image under the hypothesis that an object is present and under the hypothesis that the image contains only background noise. The detection algorithm tests each pixel point of the Fourier-transformed images to determine whether an object is present, based on the threshold criterion found in the likelihood ratio test. Using simulated data, the performance of the Fourier domain detection algorithm is compared to the current algorithm used in space situational awareness applications to evaluate its value.
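For a known template in white Gaussian noise, the likelihood ratio test reduces to thresholding a correlation statistic, and by Parseval's theorem that statistic can equally be computed in the Fourier domain, which is where the paper's test operates. A minimal sketch of this equivalence (illustrative only, not the paper's algorithm):

```python
import numpy as np

def matched_filter_stat(image, template):
    """Correlation detection statistic computed two equivalent ways:
    directly in the spatial domain, and via the 2-D DFTs using
    Parseval's theorem (divide by the number of pixels)."""
    spatial = float(np.sum(image * template))
    fourier = float(np.real(np.sum(np.fft.fft2(image) *
                                   np.conj(np.fft.fft2(template))))) / image.size
    return spatial, fourier
```

Either value would then be compared against the threshold fixed by the desired false-alarm rate.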
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. The objectives of this paper are therefore threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) is developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate, and the proposed GLRT-based EWMA method is able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows defining the fault source(s) in order to apply appropriate corrective actions. A reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model is developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper are validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements).
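The EWMA side of a GLRT-EWMA scheme monitors model residuals with an exponentially weighted moving average and raises an alarm when the smoothed residual leaves its control band. A minimal sketch of that charting step, assuming the noise scale sigma is known from fault-free data (names and defaults are illustrative, not the paper's implementation):

```python
import numpy as np

def ewma_alarm(residuals, sigma, lam=0.2, L=3.0):
    """EWMA chart on model residuals: z_t = lam*r_t + (1-lam)*z_{t-1},
    alarming when |z_t| exceeds the steady-state control limit
    L * sigma * sqrt(lam / (2 - lam))."""
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z, alarms = 0.0, []
    for rt in np.asarray(residuals, dtype=float):
        z = lam * rt + (1.0 - lam) * z
        alarms.append(abs(z) > limit)
    return np.array(alarms)
```

Smaller lam makes the chart more sensitive to small persistent shifts at the cost of slower response to large ones.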
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
The Feasibility Evaluation of Land Use Change Detection Using GAOFEN-3 Data
NASA Astrophysics Data System (ADS)
Huang, G.; Sun, Y.; Zhao, Z.
2018-04-01
The GaoFen-3 (GF-3) satellite is the first C-band, multi-polarimetric Synthetic Aperture Radar (SAR) satellite in China. To explore the feasibility of GF-3 data for remote sensing interpretation and land-use change detection, Guangzhou, China was taken as the study area, with full-polarimetric GF-3 images at 8 m resolution from two dates as the data source. First, the images are pre-processed by orthorectification, registration and mosaicking, and the 2017 land-use digital orthophoto map (DOM) is produced for each county. Then, ground objects in the images are classified and interpreted by artificial visual interpretation in ArcGIS, combined with auxiliary data, to determine the changed areas and the categories of the changed objects; change areas are extracted according to a unified change-information extraction principle. Finally, the change detection results are compared with 3 m resolution TerraSAR-X data and a 2 m resolution multispectral image, and the accuracy is evaluated. Experimental results show that the accuracy of the GF-3 data in detecting changes of ground objects exceeds 75 %, and that its capability of detecting newly filled soil is better than that of the TerraSAR-X data. This verifies the change-information extraction and monitoring capability of GF-3 data and shows that GF-3 can provide effective data support for the remote sensing monitoring of land resources.
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of things, and detecting the shadows of moving targets is an important step within it: the accuracy of shadow detection directly affects the object detection results. Surveying the variety of shadow detection methods, we find that using only one feature cannot produce accurate detection results. We therefore present a new method for shadow detection that combines colour information, optical invariance and texture features. Through comprehensive analysis of the detection results from these three kinds of information, shadows are effectively determined. By combining the advantages of the various methods, the approach achieves good results in experiments.
NASA Astrophysics Data System (ADS)
Fujiki, Shogoro; Okada, Kei-ichi; Nishio, Shogo; Kitayama, Kanehiro
2016-09-01
We developed a new method to estimate stand ages of secondary vegetation in the Bornean montane zone, where local people conduct traditional shifting cultivation and protected areas are surrounded by patches of recovering secondary vegetation of various ages. Identifying stand ages at the landscape level is critical to improve conservation policies. We combined a high-resolution satellite image (WorldView-2) with time-series Landsat images. We extracted stand ages (the time elapsed since the most recent slash and burn) from a change-detection analysis with Landsat time-series images and superimposed the derived stand ages on the segments classified by object-based image analysis using WorldView-2. We regarded stand ages as a response variable, and object-based metrics as independent variables, to develop regression models that explain stand ages. Subsequently, we classified the vegetation of the target area into six age units and one rubber plantation unit (1-3 yr, 3-5 yr, 5-7 yr, 7-30 yr, 30-50 yr, >50 yr and 'rubber plantation') using regression models and linear discriminant analyses. Validation demonstrated an accuracy of 84.3%. Our approach is particularly effective in classifying highly dynamic pioneer vegetation younger than 7 years into 2-yr intervals, suggesting that rapid changes in vegetation canopies can be detected with high accuracy. The combination of a spectral time-series analysis and object-based metrics based on high-resolution imagery enabled the classification of dynamic vegetation under intensive shifting cultivation and yielded an informative land cover map based on stand ages.
Estimated capacity of object files in visual short-term memory is not improved by retrieval cueing.
Saiki, Jun; Miyatsuji, Hirofumi
2009-03-23
Visual short-term memory (VSTM) has been claimed to maintain three to five feature-bound object representations. Some results showing smaller capacity estimates for feature binding memory have been interpreted as the effects of interference in memory retrieval. However, change-detection tasks may not properly evaluate complex feature-bound representations such as triple conjunctions in VSTM. To understand the general type of feature-bound object representation, evaluation of triple conjunctions is critical. To test whether interference occurs in memory retrieval for complete object file representations in a VSTM task, we cued retrieval in novel paradigms that directly evaluate the memory for triple conjunctions, in comparison with a simple change-detection task. In our multiple object permanence tracking displays, observers monitored for a switch in feature combination between objects during an occlusion period, and we found that a retrieval cue provided no benefit with the triple conjunction tasks, but significant facilitation with the change-detection task, suggesting that low capacity estimates of object file memory in VSTM reflect a limit on maintenance, not retrieval.
Concept for maritime near-surface surveillance using water Raman scattering
Shokair, Isaac R.; Johnson, Mark S.; Schmitt, Randal L.; ...
2018-06-08
Here, we discuss a maritime surveillance and detection concept based on Raman scattering of water molecules. Using a range-gated scanning lidar that detects Raman scattered photons from water, the absence or change of signal indicates the presence of a non-water object. With sufficient spatial resolution, a two-dimensional outline of the object can be generated by the scanning lidar. Because Raman scattering is an inelastic process with a relatively large wavelength shift for water, this concept avoids the often problematic elastic scattering for objects at or very close to the water surface or from the bottom surface for shallow waters. The maximum detection depth for this concept is limited by the attenuation of the excitation and return Raman light in water. If excitation in the UV is used, fluorescence can be used for discrimination between organic and non-organic objects. In this paper, we present a lidar model for this concept and discuss results of proof-of-concept measurements. Using published cross section values, the model and measurements are in reasonable agreement and show that a sufficient number of Raman photons can be generated for modest lidar parameters to make this concept useful for near-surface detection.
Using Machine Learning for Advanced Anomaly Detection and Classification
NASA Astrophysics Data System (ADS)
Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.
2016-09-01
Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that collect photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., provides the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio) and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC algorithms that generate higher-fidelity I&W. This paper discusses the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and of unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, is explored, along with the implementations of these algorithms. Results of applying these algorithms to EspOC data, both in an off-line "pattern of life" analysis and on-line in real time as data is collected, are presented. Finally, future work in applying ML for SSA is discussed.
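As one concrete instance of the unsupervised techniques mentioned (k-means among them), the distance from a photometric feature vector to its nearest cluster centroid can serve as an anomaly score. A minimal sketch with a deterministic initialization (not the ACDC implementation; names are illustrative):

```python
import numpy as np

def kmeans_anomaly_scores(X, k=2, iters=20):
    """Plain k-means with deterministic initialization; the anomaly score
    of each sample is its distance to the nearest centroid."""
    C = X[:k].astype(float).copy()          # init: first k samples
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    # recompute distances against the final centroids
    d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
    return C, d.min(axis=1)
```

Samples with scores far above the bulk of the distribution would be flagged for review as off-nominal behavior.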
Cortical mechanisms for the segregation and representation of acoustic textures.
Overath, Tobias; Kumar, Sukhbinder; Stewart, Lauren; von Kriegstein, Katharina; Cusack, Rhodri; Rees, Adrian; Griffiths, Timothy D
2010-02-10
Auditory object analysis requires two fundamental perceptual processes: the definition of the boundaries between objects, and the abstraction and maintenance of an object's characteristic features. Although it is intuitive to assume that the detection of the discontinuities at an object's boundaries precedes the subsequent precise representation of the object, the specific underlying cortical mechanisms for segregating and representing auditory objects within the auditory scene are unknown. We investigated the cortical bases of these two processes for one type of auditory object, an "acoustic texture," composed of multiple frequency-modulated ramps. In these stimuli, we independently manipulated the statistical rules governing (1) the frequency-time space within individual textures (comprising ramps with a given spectrotemporal coherence) and (2) the boundaries between textures (adjacent textures with different spectrotemporal coherences). Using functional magnetic resonance imaging, we show mechanisms defining boundaries between textures with different coherences in primary and association auditory cortices, whereas texture coherence is represented only in association cortex. Furthermore, participants' superior detection of boundaries across which texture coherence increased (as opposed to decreased) was reflected in a greater neural response in auditory association cortex at these boundaries. The results suggest a hierarchical mechanism for processing acoustic textures that is relevant to auditory object analysis: boundaries between objects are first detected as a change in statistical rules over frequency-time space, before a representation that corresponds to the characteristics of the perceived object is formed.
Wavelet-based polarimetry analysis
NASA Astrophysics Data System (ADS)
Ezekiel, Soundararajan; Harrity, Kyle; Farag, Waleed; Alford, Mark; Ferris, David; Blasch, Erik
2014-06-01
Wavelet transformation has become a cutting-edge and promising approach in the field of image and signal processing. A wavelet is a waveform of effectively limited duration that has an average value of zero. Wavelet analysis is done by breaking up the signal into shifted and scaled versions of the original signal. The key advantage of a wavelet is that it is capable of revealing smaller changes, trends, and breakdown points that are not revealed by other techniques such as Fourier analysis. The phenomenon of polarization has been studied for quite some time and is a very useful tool for target detection and tracking. Long Wave Infrared (LWIR) polarization is beneficial for detecting camouflaged objects and is a useful approach when identifying and distinguishing manmade objects from natural clutter. In addition, the Stokes polarization parameters, which are calculated from 0°, 45°, 90°, 135°, right circular, and left circular intensity measurements, provide spatial orientations of target features and suppress natural features. In this paper, we propose a wavelet-based polarimetry analysis (WPA) method to analyze Long Wave Infrared polarimetry imagery to discriminate targets such as dismounts and vehicles from background clutter. These parameters can be used for image thresholding and segmentation. Experimental results show the wavelet-based polarimetry analysis is efficient and can be used in a wide range of applications such as change detection, shape extraction, target recognition, and feature-aided tracking.
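The Stokes parameters named above follow directly from the six intensity measurements. A short sketch of that computation, with the degree of linear polarization (DoLP) added since it is commonly used to separate man-made surfaces from natural clutter (the function name is illustrative):

```python
import numpy as np

def stokes_parameters(I0, I45, I90, I135, IRC, ILC):
    """Stokes vector from the six intensity measurements:
    S0 = I0 + I90, S1 = I0 - I90, S2 = I45 - I135, S3 = IRC - ILC.
    Also returns the degree of linear polarization (DoLP)."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = I45 - I135
    S3 = IRC - ILC
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
    return S0, S1, S2, S3, dolp
```

The inputs can be scalars or per-pixel intensity arrays; in the latter case the DoLP map is what would feed the thresholding and segmentation steps.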
Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2009-01-01
Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas children with ASD (n = 16) were equally fast in detecting changes in faces and objects. These results were replicated in Experiment 2 (n = 16 in children with ASD and 22 in typically developing children), which does not require face recognition skill. Results suggest that children with ASD lack an attentional bias toward others' faces, which could contribute to their atypical social orienting.
A New Object-Based Framework to Detect Shadows in High-Resolution Satellite Imagery over Urban Areas
NASA Astrophysics Data System (ADS)
Tatar, N.; Saadatseresht, M.; Arefi, H.; Hadavand, A.
2015-12-01
In this paper, a new object-based framework to detect shadow areas in high-resolution satellite images is proposed. To produce a pixel-level shadow map, state-of-the-art supervised machine learning algorithms are employed, trained on ground truth generated automatically by Otsu thresholding of shadow and non-shadow indices. The image scene is then segmented to create image objects, and shadow objects are detected by majority voting over the pixel-based shadow detection result within each object. A GeoEye-1 multispectral image over an urban area in the city of Qom, Iran is used in the experiments. Results show the superiority of the proposed method over traditional pixel-based approaches, both visually and quantitatively.
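The two key steps of such a framework, Otsu thresholding to generate training labels and object-level majority voting over the pixel map, can be sketched as follows (a generic Otsu implementation and a hypothetical voting helper, not the authors' code):

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's threshold: maximize between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # class-0 probability up to each cut
    mu = np.cumsum(p * centers)          # class-0 cumulative mean mass
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

def object_shadow_vote(pixel_shadow_mask, segment_labels):
    """Label each segment 'shadow' if a majority of its pixels are shadow."""
    out = {}
    for lab in np.unique(segment_labels):
        votes = pixel_shadow_mask[segment_labels == lab]
        out[lab] = votes.mean() > 0.5
    return out
```

In the paper's setting the `values` would be a shadow index image and `segment_labels` the output of the segmentation stage.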
Transmission mode terahertz computed tomography
Ferguson, Bradley Stuart; Wang, Shaohong; Zhang, Xi-Cheng
2006-10-10
A method of obtaining a series of images of a three-dimensional object by transmitting pulsed terahertz (THz) radiation through the entire object from a plurality of angles, optically detecting changes in the transmitted THz radiation using pulsed laser radiation, and constructing a plurality of imaged slices of the three-dimensional object using the detected changes in the transmitted THz radiation. The THz radiation is transmitted through the object as a scanning spot. The object is placed within the Rayleigh range of the focused THz beam and a focusing system is used to transfer the imaging plane from adjacent the object to a desired distance away from the object. A related system is also disclosed.
ERIC Educational Resources Information Center
Flombaum, Jonathan I.; Scholl, Brian J.
2006-01-01
Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers…
Change Detection: Training and Transfer
Gaspar, John G.; Neider, Mark B.; Simons, Daniel J.; McCarley, Jason S.; Kramer, Arthur F.
2013-01-01
Observers often fail to notice even dramatic changes to their environment, a phenomenon known as change blindness. If training could enhance change detection performance in general, then it might help to remedy some real-world consequences of change blindness (e.g. failing to detect hazards while driving). We examined whether adaptive training on a simple change detection task could improve the ability to detect changes in untrained tasks for young and older adults. Consistent with an effective training procedure, both young and older adults were better able to detect changes to trained objects following training. However, neither group showed differential improvement on untrained change detection tasks when compared to active control groups. Change detection training led to improvements on the trained task but did not generalize to other change detection tasks. PMID:23840775
Satellite change detection of forest damage near the Chernobyl accident
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClellan, G.E.; Anno, G.H.
1992-01-01
A substantial amount of forest within a few kilometers of the Chernobyl nuclear reactor station was badly contaminated with radionuclides by the April 26, 1986, explosion and ensuing fire at reactor No. 4. Radiation doses to conifers in some areas were sufficient to cause discoloration of needles within a few weeks. Other areas, receiving smaller doses, showed foliage changes beginning 6 months to a year later. Multispectral imagery available from Landsat sensors is especially suited for monitoring such changes in vegetation. A series of Landsat Thematic Mapper images was developed that spans the 2 yr following the accident. Quantitative dose estimation for the exposed conifers requires an objective change detection algorithm and knowledge of the dose-time response of conifers to ionizing radiation. Pacific-Sierra Research Corporation's Hyperscout™ algorithm is based on an advanced, sensitive technique for change detection particularly suited for multispectral images. The Hyperscout algorithm has been used to assess radiation damage to the forested areas around the Chernobyl nuclear power plant.
Construction of a Polyaniline Nanofiber Gas Sensor
ERIC Educational Resources Information Center
Virji, Shabnam; Weiller, Bruce H.; Huang, Jiaxing; Blair, Richard; Shepherd, Heather; Faltens, Tanya; Haussmann, Philip C.; Kaner, Richard B.; Tolbert, Sarah H.
2008-01-01
The electrical properties of polyaniline change by orders of magnitude upon exposure to analytes such as acids or bases, making it a useful material for detecting these analytes in the gas phase. The objectives of this lab are to synthesize polyaniline nanofibers of different diameters and compare them as sensor materials. In this experiment…
NASA Technical Reports Server (NTRS)
Hollier, Andi B.; Jagge, Amy M.; Stefanov, William L.; Vanderbloemen, Lisa A.
2017-01-01
For over fifty years, NASA astronauts have taken exceptional photographs of the Earth from the unique vantage point of low Earth orbit (as well as from lunar orbit and the surface of the Moon). The Crew Earth Observations (CEO) Facility is the NASA ISS payload supporting astronaut photography of the Earth's surface and atmosphere. From aurora to mountain ranges, deltas, and cities, there are over two million images of the Earth's surface dating back to the Mercury missions in the early 1960s. The Gateway to Astronaut Photography of Earth website (eol.jsc.nasa.gov) provides a publicly accessible platform to query and download these images at a variety of spatial resolutions and perform scientific research at no cost to the end user. As a demonstration to the science, application, and education user communities, we examine astronaut photography of the Washington D.C. metropolitan area for three time steps between 1998 and 2016 using Geographic Object-Based Image Analysis (GEOBIA) to classify and quantify land cover/land use and provide a template for future change detection studies with astronaut photography.
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion-analysis-based moving object detection from UAV aerial images is still an unsolved issue because proper motion estimation is not taken into account. Existing approaches do not use motion-based pixel intensity measurement to detect moving objects robustly, and current research mostly depends on either a frame-difference or a segmentation approach alone. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model); and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation), with frame differencing embedded, together with the DMM model. The proposed DMM provides effective search windows, based on the highest pixel intensities, so that SUED segments only the specific area containing the moving object rather than searching the whole frame. At each stage of the proposed scheme, the fusion of DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
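Frame differencing and an intensity-driven search window, the two ingredients the abstract combines, can be illustrated with a minimal sketch (function names and the bounding-box heuristic are illustrative, not the authors' DMM):

```python
import numpy as np

def frame_difference_mask(prev_frame, curr_frame, thresh=25):
    """Binary motion mask from absolute frame differencing."""
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return diff > thresh

def search_window(mask, margin=2):
    """Bounding box around the moving pixels, grown by a margin, used to
    restrict segmentation to a region of interest instead of the full
    frame.  Returns (row_min, row_max, col_min, col_max) or None."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return (max(ys.min() - margin, 0), ys.max() + margin + 1,
            max(xs.min() - margin, 0), xs.max() + margin + 1)
```

The segmentation stage would then operate only inside the returned window, which is the efficiency gain the DMM search windows aim for.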
Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.
2008-01-01
Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near-real-time and post-event analyses. This paper compares the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed a direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal-component composite image and a set of image-difference bands. The techniques compared include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest-neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices that cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest accuracy in tornado damage detection, while PCA and image differencing show comparable outcomes. Although selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757
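The Kappa coefficient used for accuracy assessment is computed from the error (confusion) matrix as (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance. A short sketch:

```python
import numpy as np

def kappa_coefficient(confusion):
    """Cohen's kappa from a confusion (error) matrix, as used for
    accuracy assessment of classified change-detection maps."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return (po - pe) / (1.0 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw overall accuracy for imbalanced damage maps.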
Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery
NASA Astrophysics Data System (ADS)
Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.
2013-05-01
It is becoming increasingly evident that intelligent systems are very beneficial for society and that their further development is necessary to continue improving society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. To segment moving objects, only regions where movement is occurring are analyzed, ignoring the influence of pixels from regions without movement. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories, persons and vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified, it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of object(s) theft from inside a parked vehicle at spot X by a person", and results show that the use of flow detection increases the recognition rate of this suspicious event from 78.57% to 92.85%.
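A dynamic background-subtraction step that adapts to illumination change can be sketched with a running-average model. This is a simplified stand-in for the paper's technique; the `alpha` and `threshold` values are assumptions.

```python
import numpy as np

class RunningBackground:
    """Exponential running-average background model.

    Gradual illumination changes are absorbed into the background, while
    pixels that jump by more than `threshold` are flagged as foreground.
    alpha and threshold are illustrative, assumed values."""

    def __init__(self, first_frame, alpha=0.05, threshold=30):
        self.bg = first_frame.astype(np.float64)
        self.alpha = alpha
        self.threshold = threshold

    def apply(self, frame):
        fg = np.abs(frame.astype(np.float64) - self.bg) > self.threshold
        # Update only background pixels so moving objects do not bleed
        # into the model.
        self.bg[~fg] = (1 - self.alpha) * self.bg[~fg] \
            + self.alpha * frame[~fg].astype(np.float64)
        return fg.astype(np.uint8)
```

A small uniform brightness shift produces no foreground, whereas a bright object does, which is the adaptation property the abstract describes.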
Laser-based structural sensing and surface damage detection
NASA Astrophysics Data System (ADS)
Guldur, Burcu
Aging and accumulated damage from hazards pose a worldwide problem for existing structures. In order to evaluate the current status of aging, deteriorating and damaged structures, it is vital to accurately assess their present condition. It is possible to capture the in situ condition of structures by using laser scanners that create dense three-dimensional point clouds. This research investigates the use of high resolution three-dimensional terrestrial laser scanners with image capturing abilities as tools to capture geometric range data of complex scenes for structural engineering applications. Laser scanning technology is continuously improving, with commonly available scanners now capturing over 1,000,000 texture-mapped points per second with an accuracy of ~2 mm. However, automatically extracting meaningful information from point clouds remains a challenge, and the current state-of-the-art requires significant user interaction. The first objective of this research is to use widely accepted point cloud processing steps such as registration, feature extraction, segmentation, surface fitting and object detection to divide laser scanner data into meaningful object clusters and then apply several damage detection methods to these clusters. This required establishing a process for extracting important information from raw laser-scanned data sets such as the location, orientation and size of objects in a scanned region, and the location of damaged regions on a structure. For this purpose, a methodology for processing range data to identify objects in a scene is first presented; then, once the objects from the model library are correctly detected and fitted into the captured point cloud, these fitted objects are compared with the as-is point cloud of the investigated object to locate defects on the structure. The algorithms are demonstrated on synthetic scenes and validated on range data collected from test specimens and test-bed bridges.
The second objective of this research is to combine useful information extracted from laser scanner data with color information, which provides a fourth dimension that enables detection of damage types such as cracks, corrosion, and related surface defects that are generally difficult to detect using only laser scanner data; moreover, the color information also helps to track volumetric changes on structures such as spalling. Although using images of varying resolution to detect cracks is an extensively researched topic, damage detection using laser scanners with and without color images is a new research area that holds many opportunities for enhancing the current practice of visual inspections. The aim is to combine the best features of laser scans and images to create an automatic and effective surface damage detection method, which will reduce the need for skilled labor during visual inspections and allow automatic documentation of related information. This work enables developing surface damage detection strategies that integrate existing condition rating criteria for a wide range of damage types, collected under three main categories: small deformations already existing on the structure (cracks); damage types that induce larger deformations, but where the initial topology of the structure has not changed appreciably (e.g., bent members); and large deformations where localized changes in the topology of the structure have occurred (e.g., rupture, discontinuities and spalling). The effectiveness of the developed damage detection algorithms is validated by comparing the detection results with measurements taken from test specimens and test-bed bridges.
2015-12-15
Keypoint Density-based Region Proposal for Fine-Grained Object Detection and Classification using Regions with Convolutional Neural Network ... Convolutional Neural Networks (CNNs) enable them to outperform conventional techniques on standard object detection and classification tasks, their...detection accuracy and speed on the fine-grained Caltech UCSD bird dataset (Wah et al., 2011). Recently, Convolutional Neural Networks (CNNs), a deep
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built such that the different multispectral (or multispectral and panchromatic) bands are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects that move during this short time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images from different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped at different positions only in different spectral bands, changes in spectral properties also have to be taken into account. For the case where the main offset in the focal plane is between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach using weighted integration to obtain nearly identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Using the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and accuracy - how accurate are the derived speed and size of the objects. Finally the results are discussed and an outlook on possible improvements towards operational processing is given.
SuBSENSE: a universal change detection method with local adaptive sensitivity.
St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert
2015-01-01
Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
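A minimal sketch of pixel-level feedback in the spirit of this approach is shown below. The update rules and all constants here are illustrative assumptions, not SuBSENSE's published parameters: per-pixel distance thresholds `R` grow where local segmentation noise is high, and per-pixel update-rate inverses `T` slow model adaptation on foreground pixels.

```python
import numpy as np

def feedback_step(dist, R, T, noise, r_scale=0.01, t_incr=0.5, t_decr=0.1):
    """One pixel-level feedback update (illustrative, not SuBSENSE's rules).

    dist:  per-pixel distance between observation and background model
    R:     per-pixel distance threshold (sensitivity)
    T:     per-pixel inverse update rate (adaptation speed)
    noise: running estimate of local segmentation noise
    """
    fg = dist > R                       # current foreground decision
    # Raise thresholds where the scene is noisy, relax them where it is calm.
    R = np.where(R < (1 + 2 * noise), R + r_scale * noise, R - r_scale)
    R = np.clip(R, 0.1, None)
    # Slow down model updates on foreground, speed them up on background.
    T = np.where(fg, T + t_incr / (noise + 1e-6), T - t_decr)
    T = np.clip(T, 2.0, 256.0)
    return fg, R, T
```

Iterating this per frame is what removes the need for manually set, frame-wide sensitivity constants.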
Shifting attention in viewer- and object-based reference frames after unilateral brain injury.
List, Alexandra; Landau, Ayelet N; Brooks, Joseph L; Flevaris, Anastasia V; Fortenbaugh, Francesca C; Esterman, Michael; Van Vleet, Thomas M; Albrecht, Alice R; Alvarez, Bryan D; Robertson, Lynn C; Schendel, Krista
2011-06-01
The aims of the present study were to investigate the respective roles that object- and viewer-based reference frames play in reorienting visual attention, and to assess their influence after unilateral brain injury. To do so, we studied 16 right hemisphere injured (RHI) and 13 left hemisphere injured (LHI) patients. We used a cueing design that manipulates the location of cues and targets relative to a display comprised of two rectangles (i.e., objects). Unlike previous studies with patients, we presented all cues at midline rather than in the left or right visual fields. Thus, in the critical conditions in which targets were presented laterally, reorienting of attention was always from a midline cue. Performance was measured for lateralized target detection as a function of viewer-based (contra- and ipsilesional sides) and object-based (requiring reorienting within or between objects) reference frames. As expected, contralesional detection was slower than ipsilesional detection for the patients. More importantly, objects influenced target detection differently in the contralesional and ipsilesional fields. Contralesionally, reorienting to a target within the cued object took longer than reorienting to a target in the same location but in the uncued object. This finding is consistent with object-based neglect. Ipsilesionally, the means were in the opposite direction. Furthermore, no significant difference was found in object-based influences between the patient groups (RHI vs. LHI). These findings are discussed in the context of reference frames used in reorienting attention for target detection. Published by Elsevier Ltd.
Nonlocally sensing the magnetic states of nanoscale antiferromagnets with an atomic spin sensor
Yan, Shichao; Malavolti, Luigi; Burgess, Jacob A. J.; Droghetti, Andrea; Rubio, Angel; Loth, Sebastian
2017-01-01
The ability to sense the magnetic state of individual magnetic nano-objects is a key capability for powerful applications ranging from readout of ultradense magnetic memory to the measurement of spins in complex structures with nanometer precision. Magnetic nano-objects require extremely sensitive sensors and detection methods. We create an atomic spin sensor consisting of three Fe atoms and show that it can detect nanoscale antiferromagnets through minute, surface-mediated magnetic interaction. Coupling, even to an object with no net spin and having vanishing dipolar stray field, modifies the transition matrix element between two spin states of the Fe atom–based spin sensor that changes the sensor’s spin relaxation time. The sensor can detect nanoscale antiferromagnets at up to a 3-nm distance and achieves an energy resolution of 10 μeV, surpassing the thermal limit of conventional scanning probe spectroscopy. This scheme permits simultaneous sensing of multiple antiferromagnets with a single-spin sensor integrated onto the surface. PMID:28560346
Monitoring of changes in areas of conflicts: the example of Darfur
NASA Astrophysics Data System (ADS)
Thunig, H.; Michel, U.
2012-10-01
Rapid change detection is used in cases of natural hazards and disasters, where the analysis quickly provides information on damaged areas. In certain cases, the lack of information after catastrophic events obstructs supporting measures within disaster management. Earthquakes, tsunamis, civil war, volcanic eruptions, droughts and floods have much in common: people are directly affected, and landscapes and buildings are destroyed. In every case, geospatial data is necessary to gain knowledge as a basis for decision support. Where to go first? Which infrastructure is usable? How much area is affected? These are essential questions which need to be answered before appropriate, effective help can be established. This paper focuses on change detection applications in areas where catastrophic events have caused rapid destruction, especially of man-made objects. Standard methods for automated change detection prove not to be sufficient; therefore a new method was developed and tested. The presented method allows fast detection and visualization of change in areas of crisis or catastrophe. While new remote sensing methods are often developed without user-oriented aspects, organizations and authorities are frequently unable to use them for lack of remote sensing expertise. Therefore a semi-automated procedure was developed. Within a transferable framework, the developed algorithm can be applied to a set of remote sensing data across different investigation areas. Several case studies form the basis for the retrieved results. With a coarse division into statistical parts and segmentation into meaningful objects, the framework is able to deal with different types of change. By means of an elaborated Temporal Change Index (TCI), only panchromatic datasets are used to extract areas which are destroyed, areas which were not affected, and areas where rebuilding has already started.
Detection of greenhouse-gas-induced climatic change. Progress report, July 1, 1994--July 31, 1995
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, P.D.; Wigley, T.M.L.
1995-07-21
The objective of this research is to assemble and analyze instrumental climate data and to develop and apply climate models as a basis for detecting greenhouse-gas-induced climatic change and validating General Circulation Models. In addition to changes due to variations in anthropogenic forcing, including greenhouse gas and aerosol concentration changes, the global climate system exhibits a high degree of internally-generated and externally-forced natural variability. To detect the anthropogenic effect, its signal must be isolated from the "noise" of this natural climatic variability. A high quality, spatially extensive data base is required to define the noise and its spatial characteristics. To facilitate this, available land and marine data bases will be updated and expanded. The data will be analyzed to determine the potential effects on climate of greenhouse gas and aerosol concentration changes and other factors. Analyses will be guided by a variety of models, from simple energy balance climate models to coupled atmosphere-ocean General Circulation Models. These analyses are oriented towards obtaining early evidence of anthropogenic climatic change that would lead either to confirmation, rejection or modification of model projections, and towards the statistical validation of General Circulation Model control runs and perturbation experiments.
NASA Astrophysics Data System (ADS)
Sánchez, Clara I.; Hornero, Roberto; Mayo, Agustín; García, María
2009-02-01
Diabetic retinopathy is one of the leading causes of blindness and vision defects in developed countries. Early detection and diagnosis are crucial to avoid visual complications. Microaneurysms are the first ocular signs of the presence of this disease, and their detection is of paramount importance for the development of a computer-aided diagnosis technique that permits a prompt diagnosis. However, the detection of microaneurysms in retinal images is a difficult task due to the wide variability that these images usually present in screening programs. We propose a statistical approach based on mixture model-based clustering and logistic regression which is robust to changes in the appearance of retinal fundus images. The method is evaluated on the public database of the Retinopathy Online Challenge in order to obtain an objective performance measure and to allow a comparative study with other proposed algorithms.
ERIC Educational Resources Information Center
Kikuchi, Yukiko; Senju, Atsushi; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
2009-01-01
Two experiments investigated attention of children with autism spectrum disorder (ASD) to faces and objects. In both experiments, children (7- to 15-year-olds) detected the difference between 2 visual scenes. Results in Experiment 1 revealed that typically developing children (n = 16) detected the change in faces faster than in objects, whereas…
Microseismic techniques for avoiding induced seismicity during fluid injection
Matzel, Eric; White, Joshua; Templeton, Dennise; ...
2014-01-01
The goal of this research is to develop a fundamentally better approach to geological site characterization and early hazard detection. We combine innovative techniques for analyzing microseismic data with a physics-based inversion model to forecast microseismic cloud evolution. The key challenge is that faults at risk of slipping are often too small to detect during the site characterization phase. Our objective is to devise fast-running methodologies that will allow field operators to respond quickly to changing subsurface conditions.
Gosch, D; Ratzmer, A; Berauer, P; Kahn, T
2007-09-01
The objective of this study was to examine the extent to which the image quality on mobile C-arms can be improved by an innovative exposure rate control system (grid control). In addition, the possible dose reduction in the pulsed fluoroscopy mode using 25 pulses/sec produced by automatic adjustment of the pulse rate through motion detection was to be determined. As opposed to conventional exposure rate control systems, which use a measuring circle in the center of the field of view, grid control is based on a fine mesh of square cells which are overlaid on the entire fluoroscopic image. The system uses only those cells for exposure control that are covered by the object to be visualized. This is intended to ensure optimally exposed images, regardless of the size, shape and position of the object to be visualized. The system also automatically detects any motion of the object. If a pulse rate of 25 pulses/sec is selected and no changes in the image are observed, the pulse rate used for pulsed fluoroscopy is gradually reduced. This may decrease the radiation exposure. The influence of grid control on image quality was examined using an anthropomorphic phantom. The dose reduction achieved with the help of object detection was determined by evaluating the examination data of 146 patients from 5 different countries. The image of the static phantom made with grid control was always optimally exposed, regardless of the position of the object to be visualized. The average dose reduction when using 25 pulses/sec resulting from object detection and automatic down-pulsing was 21 %, and the maximum dose reduction was 60 %. Grid control facilitates C-arm operation, since optimum image exposure can be obtained independently of object positioning. Object detection may lead to a reduction in radiation exposure for the patient and operating staff.
Fast and Robust Segmentation and Classification for Change Detection in Urban Point Clouds
NASA Astrophysics Data System (ADS)
Roynard, X.; Deschaud, J.-E.; Goulette, F.
2016-06-01
Change detection is an important issue in city monitoring, used to analyse street furniture, road works, car parking, etc. For example, parking surveys are needed but are currently a laborious task involving sending operators into the streets to identify changes in car locations. In this paper, we propose a method that performs fast and robust segmentation and classification of urban point clouds, which can be used for change detection. We apply this method to detect cars, as a particular object class, in order to perform parking surveys automatically. A recently proposed method already addresses the need for fast segmentation and classification of urban point clouds using elevation images. The appeal of working on images is that processing is much faster, proven and robust. However, there may be a loss of information in complex 3D cases: for example, when objects are one above the other, typically a car under a tree or a pedestrian under a balcony. In this paper we propose a method that retains the three-dimensional information while preserving fast computation times and improving segmentation and classification accuracy. It is based on fast region growing using an octree for the segmentation, and on specific descriptors with Random Forests for the classification. Experiments have been performed on large urban point clouds acquired by Mobile Laser Scanning. They show that the method is as fast as the state of the art, and that it gives more robust results in complex 3D cases.
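The region-growing segmentation step can be sketched as below. For brevity, the octree neighbour lookup is replaced with a voxel-grid hash of assumed cell size; the growing criterion (Euclidean distance under a radius) is a common simplification, not necessarily the paper's exact rule.

```python
import numpy as np
from collections import deque

def region_grow(points, radius=0.5):
    """Greedy region growing over a 3-D point cloud.

    Uses a voxel-grid neighbour lookup (a simple stand-in for an octree).
    Returns one integer segment label per point."""
    keys = np.floor(points / radius).astype(int)
    grid = {}
    for i, k in enumerate(map(tuple, keys)):
        grid.setdefault(k, []).append(i)

    labels = np.full(len(points), -1)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        queue = deque([seed])
        labels[seed] = current
        while queue:                      # breadth-first growth of one segment
            i = queue.popleft()
            kx, ky, kz = keys[i]
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for dz in (-1, 0, 1):
                        for j in grid.get((kx + dx, ky + dy, kz + dz), []):
                            if labels[j] == -1 and \
                               np.linalg.norm(points[j] - points[i]) <= radius:
                                labels[j] = current
                                queue.append(j)
        current += 1
    return labels
```

Each resulting segment would then be described by shape features and fed to a Random Forest classifier in the pipeline the abstract outlines.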
Capnography as a tool to detect metabolic changes in patients cared for in the emergency setting
Cereceda-Sánchez, Francisco José; Molina-Mula, Jesús
2017-01-01
ABSTRACT Objective: to evaluate the usefulness of capnography for the detection of metabolic changes in spontaneous breathing patients, in the emergency and intensive care settings. Methods: in-depth and structured bibliographical search in the databases EBSCOhost, Virtual Health Library, PubMed, Cochrane Library, among others, identifying studies that assessed the relationship between capnography values and the variables involved in blood acid-base balance. Results: 19 studies were found, two were reviews and 17 were observational studies. In nine studies, capnography values were correlated with carbon dioxide (CO2), eight with bicarbonate (HCO3), three with lactate, and four with blood pH. Conclusions: most studies have found a good correlation between capnography values and blood biomarkers, suggesting the usefulness of this parameter to detect patients at risk of severe metabolic change, in a fast, economical and accurate way. PMID:28513767
Updating National Topographic Data Base Using Change Detection Methods
NASA Astrophysics Data System (ADS)
Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.
2016-06-01
The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadastral Agencies (NMCAs), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; users therefore expect the existing database to portray the current situation. Global mapping projects based on community volunteers, such as OSM, update their database every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major areas of interest while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process involved comparing images from different periods. Success rates in identifying objects were low, and results were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies and advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with an NIR band and very high resolution satellites, have made a cost-effective automated process feasible. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step towards updating the NTDB at the Survey of Israel.
Hardware Evaluation Of Heavy Truck Side And Rear Object Detection Systems
DOT National Transportation Integrated Search
1995-01-01
This paper focuses on two types of electronics-based object detection systems for heavy truck applications: those sensing the presence of objects to the rear of the vehicle (referred to as Rear Object Detection Systems, or RODS) and those sensing the...
Object detection from images obtained through underwater turbulence medium
NASA Astrophysics Data System (ADS)
Furhad, Md. Hasan; Tahtali, Murat; Lambert, Andrew
2017-09-01
Underwater imaging experiences severe distortions due to random fluctuations of temperature and salinity in the water, which produce underwater turbulence and diffraction-limited blur. Light reflected from objects is perturbed and its contrast attenuated, making the recognition of objects of interest difficult. Detecting underwater objects of interest thus becomes a challenging task, as they are easily confused with the background, the foreground and other image properties. In this paper, a saliency-based approach is proposed to detect objects acquired through an underwater turbulent medium. Saliency has drawn attention across a wide range of computer vision applications, such as image retrieval, artificial intelligence, neuro-imaging and object detection. The image is first processed through a deblurring filter. Next, a saliency technique is used on the image for object detection. In this step, a saliency map that highlights the target regions is generated, and a graph-based model is then proposed to extract these target regions for object detection.
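One classic way to generate such a saliency map is the spectral-residual method (Hou & Zhang), used here purely as a generic stand-in for the saliency step the abstract describes; the 3x3 smoothing kernel is the usual choice in that method, not a detail from this paper.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for a 2-D float image.

    The 'residual' between the log-amplitude spectrum and its local
    average highlights statistically unusual (salient) image content."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Smooth the log-amplitude with a 3x3 mean filter ("expected" spectrum).
    kernel = np.ones((3, 3)) / 9.0
    pad = np.pad(log_amp, 1, mode='edge')
    smooth = sum(pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]] * kernel[i, j]
                 for i in range(3) for j in range(3))
    residual = log_amp - smooth
    # Recombine residual amplitude with the original phase.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()
```

In the paper's pipeline this map would be computed after deblurring, and the highlighted regions handed to the graph-based extraction step.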
P.S. Homann; B.T. Bormann; J.R. Boyle; R.L. Darbyshire; R. Bigley
2008-01-01
Detecting changes in forest soil C and N is vital to the study of global budgets and long-term ecosystem productivity. Identifying differences among land-use practices may guide future management. Our objective was to determine the relation of minimum detectable changes (MDCs) and minimum detectable differences between treatments (MDDs) to soil C and N variability at...
Low-complexity object detection with deep convolutional neural network for embedded systems
NASA Astrophysics Data System (ADS)
Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong
2017-09-01
We investigate low-complexity convolutional neural networks (CNNs) for object detection in embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of the computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO. Hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable to state-of-the-art CNN-based object detection methods while reducing the model size by 3x and memory bandwidth by 3-4x compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4x memory reduction while keeping the accuracy nearly as good as the floating-point model. Moreover, the fixed-point model achieves 20x faster inference than the floating-point model. The proposed method is thus promising for embedded implementations.
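The ~4x memory reduction from an 8-bit fixed-point model follows directly from storing int8 instead of float32 weights. The sketch below uses simple symmetric linear quantization to illustrate the principle; the paper's exact TF quantization scheme is not reproduced here.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric linear quantization of float32 weights to int8.

    Returns the quantized weights and the scale needed to map back."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights (error bounded by scale / 2)."""
    return q.astype(np.float32) * scale
```

int8 storage is exactly a quarter the size of float32, and the per-weight rounding error is at most half the quantization step, which is why accuracy stays close to the floating-point model for well-scaled weights.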
Vater, Christian; Kredel, Ralf; Hossner, Ernst-Joachim
2017-05-01
In the current study, dual-task performance is examined with multiple-object tracking as a primary task and target-change detection as a secondary task. The to-be-detected target changes in conditions of either change type (form vs. motion; Experiment 1) or change salience (stop vs. slowdown; Experiment 2), with changes occurring at either near (5°-10°) or far (15°-20°) eccentricities (Experiments 1 and 2). The aim of the study was to test whether changes can be detected solely with peripheral vision. By controlling for saccades and computing gaze distances, we could show that participants used peripheral vision to monitor the targets and, additionally, to perceive changes at both near and far eccentricities. Noticeably, gaze behavior was not affected by the actual target change. Detection rates as well as response times generally varied as a function of change condition and eccentricity, with faster detections for motion changes and near changes. However, in contrast to the effects found for motion changes, sharp declines in detection rates and increased response times were observed for form changes as a function of eccentricity. This result can be ascribed to properties of the visual system, namely the limited spatial acuity in the periphery and the comparatively high motion sensitivity of peripheral vision. These findings show that peripheral vision is functional for simultaneous target monitoring and target-change detection, as saccadic information suppression can be avoided and covert attention can be optimally distributed to all targets. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Detecting Heap-Spraying Code Injection Attacks in Malicious Web Pages Using Runtime Execution
NASA Astrophysics Data System (ADS)
Choi, Younghan; Kim, Hyoungchun; Lee, Donghoon
The growing use of web services is increasing web browser attacks exponentially. Most attacks use a technique called heap spraying because of its high success rate. Heap spraying executes a malicious code without indicating the exact address of the code by copying it into many heap objects. For this reason, the attack has a high chance of succeeding whenever the vulnerability is exploited. Attackers have recently favored this technique because JavaScript makes it easy to allocate heap memory. This paper proposes a novel technique that detects heap spraying attacks by executing a heap object in a real environment, irrespective of the version and patch status of the web browser. This runtime execution is used to detect various forms of heap spraying attacks, such as encoding and polymorphism. Heap objects are executed after being filtered on the basis of patterns of heap spraying attacks, in order to reduce the overhead of the runtime execution. These patterns are based on an analysis of how a web browser accesses benign web sites. After being loaded into memory, the heap objects are executed forcibly by changing the instruction register to their address. Thus, we can execute the malicious code without having to consider the version and patch status of the browser. An object is considered to contain a malicious code if the execution reaches a call instruction and the instruction then accesses the API of system libraries, such as kernel32.dll and ws_32.dll. To change registers and monitor execution flow, we used a debugger engine. A prototype, named HERAD (HEap spRAying Detector), is implemented and evaluated. In experiments, HERAD detects various forms of exploit code that emulation cannot detect, and some heap spraying attacks that NOZZLE cannot detect. Although it has an execution overhead, HERAD produces a low number of false alarms.
The processing time of several minutes is negligible because our research focuses on detecting heap spraying. This research can be applied to existing systems that collect malicious codes, such as Honeypot.
Real-time detection of natural objects using AM-coded spectral matching imager
NASA Astrophysics Data System (ADS)
Kimachi, Akira
2004-12-01
This paper describes the application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with specific activity states. The AM-SMI produces the correlation between the spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
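The per-pixel spectral matching that the AM-SMI performs electronically can be illustrated as a normalized correlation between an observed spectral function and a reference spectrum. The sketch below is only a software analogue of that idea, with hypothetical 12-channel reflectance values, not the authors' hardware implementation:

```python
# Illustrative sketch: per-pixel spectral matching as a normalized
# correlation between an observed spectral function and a reference,
# analogous to what the AM-SMI computes on the sensor.
def spectral_match(observed, reference):
    """Normalized correlation in [-1, 1] between two spectral functions."""
    n = len(observed)
    mean_o = sum(observed) / n
    mean_r = sum(reference) / n
    num = sum((o - mean_o) * (r - mean_r) for o, r in zip(observed, reference))
    den_o = sum((o - mean_o) ** 2 for o in observed) ** 0.5
    den_r = sum((r - mean_r) ** 2 for r in reference) ** 0.5
    if den_o == 0 or den_r == 0:
        return 0.0  # a spectrally flat signal carries no shape to match
    return num / (den_o * den_r)

# A 12-channel reference spectrum (hypothetical skin-like reflectance values)
reference = [0.30, 0.32, 0.35, 0.40, 0.45, 0.52, 0.60, 0.66, 0.70, 0.72, 0.73, 0.74]
scaled = [2.0 * r for r in reference]   # same spectral shape, brighter surface
flat = [0.5] * 12                       # spectrally flat surface
assert abs(spectral_match(reference, scaled) - 1.0) < 1e-9
assert spectral_match(reference, flat) == 0.0
```

Because the correlation is normalized, a brighter surface with the same spectral shape still scores 1.0, which matches the paper's emphasis on detecting a particular reflectance *function* rather than overall intensity.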
Smartphone-based colorimetric analysis for detection of saliva alcohol concentration.
Jung, Youngkee; Kim, Jinhee; Awofeso, Olumide; Kim, Huisung; Regnier, Fred; Bae, Euiwon
2015-11-01
A simple device and associated analytical methods are reported that provide objective and accurate determination of saliva alcohol concentration using smartphone-based colorimetric imaging. The device utilizes any smartphone with a miniature attachment that positions the sample and provides constant illumination for sample imaging. Analyses of histograms based on channel imaging of red-green-blue (RGB) and hue-saturation-value (HSV) color space provide unambiguous determination of blood alcohol concentration from color changes on sample pads. The smartphone-based colorimetric analysis was developed and tested with blind samples, whose results matched the training sets. This technology can be adapted to any smartphone and used to conduct color-change assays.
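The RGB-to-HSV readout idea can be sketched in a few lines. The calibration points below are made up for illustration; the actual hue-to-concentration curve would come from the training sets described in the abstract:

```python
import colorsys

# Sketch of the colorimetric readout: the mean RGB color of the sample pad
# is converted to HSV, and the hue is mapped to an alcohol concentration
# through a (hypothetical) two-point linear calibration.
def rgb_to_hue(r, g, b):
    """Mean pad color (0-255 RGB) -> hue in [0, 1)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h

def concentration_from_hue(hue, hue0=0.10, hue1=0.40, c0=0.0, c1=0.08):
    """Linear interpolation between two hypothetical calibration points."""
    t = (hue - hue0) / (hue1 - hue0)
    return c0 + t * (c1 - c0)

hue = rgb_to_hue(200, 180, 60)        # yellowish pad color
conc = concentration_from_hue(hue)    # estimated concentration (illustrative)
assert 0.0 <= hue < 1.0
```

Hue is often preferred over raw RGB channels for such assays because it is less sensitive to illumination intensity, which the constant-illumination attachment further controls.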
Improvements in Space Surveillance Processing for Wide Field of View Optical Sensors
NASA Astrophysics Data System (ADS)
Sydney, P.; Wetterer, C.
2014-09-01
For more than a decade, an autonomous satellite tracking system at the Air Force Maui Optical and Supercomputing (AMOS) observatory has been generating routine astrometric measurements of Earth-orbiting Resident Space Objects (RSOs) using small commercial telescopes and sensors. Recent work has focused on developing an improved processing system, enhancing measurement performance and response while supporting other sensor systems and missions. This paper will outline improved techniques in scheduling, detection, astrometric and photometric measurements, and catalog maintenance. The processing system now integrates with Special Perturbation (SP) based astrodynamics algorithms, allowing covariance-based scheduling and more precise orbital estimates and object identification. A merit-based scheduling algorithm provides a global optimization framework to support diverse collection tasks and missions. The detection algorithms support a range of target tracking and camera acquisition rates. New comprehensive star catalogs allow for more precise astrometric and photometric calibrations including differential photometry for monitoring environmental changes. This paper will also examine measurement performance with varying tracking rates and acquisition parameters.
NASA Astrophysics Data System (ADS)
Zhong, Yanfei; Han, Xiaobing; Zhang, Liangpei
2018-04-01
Multi-class geospatial object detection from high spatial resolution (HSR) remote sensing imagery is attracting increasing attention in a wide range of object-related civil and engineering applications. However, the distribution of objects in HSR remote sensing imagery is location-variable and complicated, and accurately detecting these objects is a critical problem. Owing to the powerful feature extraction and representation capability of deep learning, integrated frameworks that combine deep learning based region proposal generation with object detection have greatly promoted the performance of multi-class geospatial object detection for HSR remote sensing imagery. However, because of the translation invariance introduced by the convolution operations in the convolutional neural network (CNN), the localization accuracy of the predicted bounding boxes in the detection stage is easily degraded, even though the classification stage is seldom affected. This dilemma between translation invariance in the classification stage and translation variance in the object detection stage has not been addressed for HSR remote sensing imagery, and it causes position-accuracy problems for multi-class geospatial object detection with region proposal generation and object detection. To further improve the performance of this integrated framework for HSR remote sensing imagery, a position-sensitive balancing (PSB) framework is proposed in this paper for multi-class geospatial object detection from HSR remote sensing imagery. The proposed PSB framework takes full advantage of the fully convolutional network (FCN), on the basis of a residual network, to resolve the dilemma between translation invariance in the classification stage and translation variance in the object detection stage.
In addition, a pre-training mechanism is utilized to accelerate the training procedure and increase the robustness of the proposed algorithm. The proposed algorithm is validated with a publicly available 10-class object detection dataset.
Systematic evaluation of deep learning based detection frameworks for aerial imagery
NASA Astrophysics Data System (ADS)
Sommer, Lars; Steinmann, Lucas; Schumann, Arne; Beyerer, Jürgen
2018-04-01
Object detection in aerial imagery is crucial for many applications in the civil and military domain. In recent years, deep learning based object detection frameworks have significantly outperformed conventional approaches based on hand-crafted features on several datasets. However, these detection frameworks are generally designed and optimized for common benchmark datasets, which differ considerably from aerial imagery, especially in object sizes. As already demonstrated for Faster R-CNN, several adaptations are necessary to account for these differences. In this work, we adapt several state-of-the-art detection frameworks, including Faster R-CNN, R-FCN, and the Single Shot MultiBox Detector (SSD), to aerial imagery. We discuss in detail the adaptations that mainly improve the detection accuracy of all frameworks. As the output of deeper convolutional layers comprises more semantic information, these layers are generally used in detection frameworks as feature maps to locate and classify objects. However, the resolution of these feature maps is insufficient for handling small object instances, which results in inaccurate localization or incorrect classification of small objects. Furthermore, state-of-the-art detection frameworks perform bounding box regression to predict the exact object location, using so-called anchor or default boxes as references. We demonstrate how an appropriate choice of anchor box sizes can considerably improve detection performance. Furthermore, we evaluate the impact of the performed adaptations on two publicly available datasets to account for various ground sampling distances and differing backgrounds. The presented adaptations can be used as a guideline for further datasets or detection frameworks.
Selective visual attention in object detection processes
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Goyal, Anurag; Greindl, Christian
2003-03-01
Object detection is an enabling technology that plays a key role in many application areas, such as content based media retrieval. Attentive cognitive vision systems are proposed here in which the focus of attention is directed towards the most relevant target. The most promising information is interpreted in a sequential process that dynamically makes use of knowledge and enables spatial reasoning on the local object information. The presented work proposes an innovative application of attention mechanisms for object detection which is most general in its understanding of information and action selection. The attentive detection system uses a cascade of increasingly complex classifiers for the stepwise identification of regions of interest (ROIs) and recursively refined object hypotheses. While the coarsest classifiers are used to determine first approximations of a region of interest in the input image, more complex classifiers are applied to the refined ROIs to give more confident estimates. Objects are modeled by local appearance based representations and in terms of posterior distributions of the object samples in eigenspace. The discrimination function to discern between objects is modeled by a radial basis function (RBF) network that has been compared with alternative networks and proven consistent and superior to other artificial neural networks for appearance based object recognition. The experiments were conducted for the automatic detection of brand objects in Formula One broadcasts within the European Commission's cognitive vision project DETECT.
Shadow Detection Based on Regions of Light Sources for Object Extraction in Nighttime Video
Lee, Gil-beom; Lee, Myeong-jin; Lee, Woo-Kyung; Park, Joo-heon; Kim, Tae-Hwan
2017-01-01
Intelligent video surveillance systems detect pre-configured surveillance events through background modeling, foreground and object extraction, object tracking, and event detection. Shadow regions inside video frames sometimes appear as foreground objects, interfere with ensuing processes, and finally degrade the event detection performance of the systems. Conventional studies have mostly used intensity, color, texture, and geometric information to perform shadow detection in daytime video, but these methods lack the capability of removing shadows in nighttime video. In this paper, a novel shadow detection algorithm for nighttime video is proposed; this algorithm partitions each foreground object based on the object’s vertical histogram and screens out shadow objects by validating their orientations heading toward regions of light sources. From the experimental results, it can be seen that the proposed algorithm shows more than 93.8% shadow removal and 89.9% object extraction rates for nighttime video sequences, and the algorithm outperforms conventional shadow removal algorithms designed for daytime videos. PMID:28327515
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector composed of 6 linear CCDs was developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. In this paper, simulated experiments have been carried out to verify the validity of the proposed technology. They show that high-speed detection and tracking of multiple objects can be achieved by using the proposed hexagonal detector together with the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan
2017-03-01
...Atlanta, GA 30332 USA. Contact Author Email: jonghwan.ko@gatech.edu. Abstract: This paper presents a low-power wireless image sensor node with a noise-robust moving object detection and a region-of-interest based rate controller [Fig. 1]. The...
System and method for automated object detection in an image
Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.
2015-10-06
A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach, so it does not detect moving objects individually. The algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests were carried out with several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds occur.
Object Detection Based on Template Matching through Use of Best-So-Far ABC
2014-01-01
Best-so-far ABC is a modified version of the artificial bee colony (ABC) algorithm used for optimization tasks. This algorithm is one of the swarm intelligence (SI) algorithms proposed in recent literature, in which the results demonstrated that the best-so-far ABC can produce higher quality solutions with faster convergence than either the ordinary ABC or the current state-of-the-art ABC-based algorithm. In this work, we aim to apply the best-so-far ABC-based approach for object detection based on template matching by using the difference between the RGB level histograms corresponding to the target object and the template object as the objective function. Results confirm that the proposed method was successful in both detecting objects and optimizing the time used to reach the solution. PMID:24812556
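The objective function described above can be sketched directly: score a candidate window by the difference between its RGB level histograms and the template's, which the (best-so-far) ABC search would then minimize over window positions. This is an illustrative rendering of the objective only, not the paper's code, and the toy pixel data are made up:

```python
# Sketch of the template-matching objective: difference between the RGB
# level histograms of a candidate window and of the template object.
def rgb_histograms(pixels, bins=8):
    """pixels: list of (r, g, b) tuples with 0-255 levels -> 3 binned histograms."""
    hists = [[0] * bins for _ in range(3)]
    for px in pixels:
        for c in range(3):
            hists[c][px[c] * bins // 256] += 1
    return hists

def histogram_difference(pixels_a, pixels_b, bins=8):
    """Sum of absolute bin differences over the R, G, and B histograms."""
    ha, hb = rgb_histograms(pixels_a, bins), rgb_histograms(pixels_b, bins)
    return sum(abs(x - y) for ca, cb in zip(ha, hb) for x, y in zip(ca, cb))

template = [(250, 10, 10)] * 4   # a small all-red template (toy data)
match    = [(240, 20, 5)] * 4    # similar red window
miss     = [(10, 10, 250)] * 4   # blue window
assert histogram_difference(template, match) < histogram_difference(template, miss)
```

A histogram objective like this is cheap to evaluate and invariant to the pixel arrangement inside the window, which is what makes it a convenient fitness function for a swarm optimizer scanning many candidate positions.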
[Research on early fire detection with CO-CO2 FTIR-spectroscopy].
Du, Jian-hua; Zhang, Ren-cheng; Huang, Xiang-ying; Gong, Xue; Zhang, Xiao-hua
2007-05-01
A new fire detection method based on FTIR spectroscopy is put forward after analyzing various detection methods, in which CO and CO2 are chosen as the early fire detection objects, and an early fire experiment system has been set up. The concentration characteristics of CO and CO2 were obtained through early fire experiments including real alarm sources and nuisance alarm sources. Real alarm sources produce abundant CO and CO2 that change regularly, whereas nuisance alarm sources produce almost no CO. It is therefore feasible to reduce false alarms and increase the sensitivity of early fire detectors by analyzing the concentration characteristics of CO and CO2.
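The decision rule implied by these observations can be sketched as follows. The thresholds and sample data are hypothetical, chosen only to illustrate the "abundant, regularly rising CO and CO2 vs. almost no CO" distinction from the experiments:

```python
# Minimal sketch of the CO/CO2 alarm rule (hypothetical thresholds):
# a real fire shows elevated, steadily rising CO and CO2 concentrations,
# while nuisance sources show almost no CO.
def is_real_fire(co_ppm, co2_ppm, co_min=10.0, co2_min=800.0):
    """Flag fire only when both gases are elevated and non-decreasing."""
    co_rising = all(b >= a for a, b in zip(co_ppm, co_ppm[1:]))
    co2_rising = all(b >= a for a, b in zip(co2_ppm, co2_ppm[1:]))
    return (co_ppm[-1] > co_min and co2_ppm[-1] > co2_min
            and co_rising and co2_rising)

assert is_real_fire([2, 8, 15, 30], [500, 700, 900, 1200])     # smoldering fire
assert not is_real_fire([0, 0, 1, 0], [600, 900, 1200, 1500])  # nuisance: no CO
```

Requiring agreement between both gas trends is what suppresses nuisance alarms: CO2 alone rises from many benign sources (people, cooking), but sustained CO accompanies combustion.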
Vegetation Monitoring of Mashhad Using AN Object-Oriented POST Classification Comparison Method
NASA Astrophysics Data System (ADS)
Khalili Moghadam, N.; Delavar, M. R.; Forati, A.
2017-09-01
By and large, today's mega cities are confronting considerable urban development, in which many new buildings are being constructed in the fringe areas of these cities. This remarkable urban development will probably end in vegetation reduction, even though each mega city requires adequate areas of vegetation, which are considered crucial for these cities from a wide variety of perspectives such as air pollution reduction, soil erosion prevention, and ecosystem and environmental protection. One of the optimum methods for monitoring this vital component of each city is multi-temporal satellite image acquisition combined with change detection techniques. In this research, the vegetation and urban changes of Mashhad, Iran, were monitored using an object-oriented (marker-based watershed algorithm) post classification comparison (PCC) method. A bi-temporal multi-spectral Landsat satellite image of the study area was used to detect the changes in urban and vegetation areas and to find a relation between these changes. The results of this research demonstrate that during 1987-2017, the Mashhad urban area increased by about 22525 hectares while the vegetation area decreased by approximately 4903 hectares. These statistics substantiate the close relationship between urban development and vegetation reduction. Moreover, overall accuracies of 85.5% and 91.2% were achieved for the first and the second image classification, respectively. In addition, the overall accuracy and kappa coefficient of the change detection were assessed as 84.1% and 70.3%, respectively.
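The two accuracy measures quoted above are both derived from a confusion matrix. A minimal sketch, using a toy two-class matrix rather than the Mashhad results:

```python
# Overall accuracy and Cohen's kappa from a confusion matrix
# (rows = reference classes, columns = mapped classes). Toy numbers.
def overall_accuracy(cm):
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def kappa(cm):
    """Agreement corrected for the chance agreement expected from the marginals."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = overall_accuracy(cm)                      # observed agreement
    pe = sum(sum(cm[i]) * sum(cm[j][i] for j in range(n))
             for i in range(n)) / (total * total)  # chance agreement
    return (po - pe) / (1 - pe)

cm = [[45, 5],    # e.g. vegetation: correctly / wrongly mapped
      [10, 40]]   # e.g. urban
assert overall_accuracy(cm) == 0.85
assert abs(kappa(cm) - 0.70) < 0.005
```

Kappa is routinely reported alongside overall accuracy in change detection studies because it discounts the agreement that the class proportions alone would produce by chance.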
Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model
NASA Astrophysics Data System (ADS)
Wu, Z.; Chen, X.; Gao, Y.; Li, Y.
2018-04-01
Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, due to the complex neighboring environments, which can cause recognition algorithms to mistake irrelevant ground objects for target objects. The Deep Convolutional Neural Network (DCNN) is the hotspot in object detection for its powerful ability of feature extraction, and it has achieved state-of-the-art results in computer vision. The common pipeline of object detection based on a DCNN consists of region proposal, CNN feature extraction, region classification, and post processing. The YOLO model frames object detection as a regression problem, using a single CNN that predicts bounding boxes and class probabilities in an end-to-end way, making prediction faster. In this paper, a YOLO based model is used for object detection in high resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset obtained from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process and has good accuracy.
3D change detection at street level using mobile laser scanning point clouds and terrestrial images
NASA Astrophysics Data System (ADS)
Qin, Rongjun; Gruen, Armin
2014-04-01
Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions, and unreliable 3D geometry caused by the limited performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provides accurate 3D geometry for change detection but is very expensive to acquire periodically. This paper proposes a new method for change detection at street level using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired at an early epoch serves as the reference, and terrestrial images or photogrammetric images captured by an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between epochs. The method automatically marks the possible changes in each view, which provides a cost-efficient approach to frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with the data cleaned and classified by semi-automatic means. In the second step, terrestrial or mobile mapping images are taken at a later epoch and registered to the point cloud, and the point clouds are then projected onto each image by a weighted window based z-buffering method for view dependent 2D triangulation.
In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between point clouds and stereo images. Finally, an over-segmentation based graph cut optimization is carried out, taking into account the color, depth and class information to compute the changed area in the image space. The proposed method is invariant to light changes, robust to small co-registration errors between images and point clouds, and can be applied straightforwardly to 3D polyhedral models. This method can be used for 3D street data updating, city infrastructure management and damage monitoring in complex urban scenes.
Neural basis for dynamic updating of object representation in visual working memory.
Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun
2010-02-15
In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and the dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation during dynamic updating of feature-bound objects, we identified a network during memory maintenance comprising the inferior precentral sulcus, the superior parietal lobule, and the middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating the representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and those sensitive to feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.
Histopathological image analysis of chemical-induced hepatocellular hypertrophy in mice.
Asaoka, Yoshiji; Togashi, Yuko; Mutsuga, Mayu; Imura, Naoko; Miyoshi, Tomoya; Miyamoto, Yohei
2016-04-01
Chemical-induced hepatocellular hypertrophy is frequently observed in rodents, and is mostly caused by the induction of phase I and phase II drug metabolic enzymes and peroxisomal lipid metabolic enzymes. Liver weight is a sensitive and commonly used marker for detecting hepatocellular hypertrophy, but is also increased by a number of other factors. Histopathological observations subjectively detect changes such as hepatocellular hypertrophy based on the size of a hepatocyte. Therefore, quantitative microscopic observations are required to evaluate histopathological alterations objectively. In the present study, we developed a novel quantitative method for an image analysis of hepatocellular hypertrophy using liver sections stained with hematoxylin and eosin, and demonstrated its usefulness for evaluating hepatocellular hypertrophy induced by phenobarbital (a phase I and phase II enzyme inducer) and clofibrate (a peroxisomal enzyme inducer) in mice. The algorithm of this imaging analysis was designed to recognize an individual hepatocyte through a combination of pixel-based and object-based analyses. Hepatocellular nuclei and the surrounding non-hepatocellular cells were recognized by the pixel-based analysis, while the areas of the recognized hepatocellular nuclei were then expanded until they ran against their expanding neighboring hepatocytes and surrounding non-hepatocellular cells by the object-based analysis. The expanded area of each hepatocellular nucleus was regarded as the size of an individual hepatocyte. The results of this imaging analysis showed that changes in the sizes of hepatocytes corresponded with histopathological observations in phenobarbital and clofibrate-treated mice, and revealed a correlation between hepatocyte size and liver weight. In conclusion, our novel image analysis method is very useful for quantitative evaluations of chemical-induced hepatocellular hypertrophy. Copyright © 2015 Elsevier GmbH. All rights reserved.
The perception of object versus objectless motion.
Hock, Howard S; Nichols, David F
2013-05-01
Wertheimer's (Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 61:161-265, 1912) classical distinction between beta (object) and phi (objectless) motion is elaborated here in a series of experiments concerning competition between two qualitatively different motion percepts, induced by sequential changes in luminance for two-dimensional geometric objects composed of rectangular surfaces. One of these percepts is of spreading-luminance motion that continuously sweeps across the entire object; it exhibits shape invariance and is perceived most strongly at fast speeds. Significantly for the characterization of phi as objectless motion, the spreading luminance does not involve surface boundaries or any other feature; the percept is driven solely by spatiotemporal changes in luminance. Alternatively, and at relatively slow speeds, a discrete series of edge motions can be perceived in the direction opposite to the spreading-luminance motion. Akin to beta motion, the edges appear to move through intermediate positions within the object's changing surfaces. Significantly for the characterization of beta as object motion, edge motion exhibits shape dependence and is based on the detection of oppositely signed changes in contrast (i.e., counterchange) for features essential to the determination of an object's shape, the boundaries separating its surfaces. These results are consistent with area MT neurons that differ with respect to speed preference (Newsome et al., Journal of Neurophysiology, 55:1340-1351, 1986) and shape dependence (Zeki, Journal of Physiology, 236:549-573, 1974).
NASA Technical Reports Server (NTRS)
Boyle, Devin K.
2017-01-01
The Vehicle Integrated Propulsion Research (VIPR) Phase III project was executed at Edwards Air Force Base, California, by the National Aeronautics and Space Administration and several industry, academic, and government partners in the summer of 2015. One of the research objectives was to use external radial acoustic microphone arrays to detect changes in the noise characteristics produced by the research engine during volcanic ash ingestion and seeded fault insertion scenarios involving bleed air valves. Preliminary results indicate the successful acoustic detection of suspected degradation as a result of cumulative exposure to volcanic ash. This detection is shown through progressive changes, particularly in the high-frequency content, as a function of exposure to greater cumulative quantities of ash. Additionally, detection of the simulated failure of the 14th stage stability bleed valve and, to a lesser extent, the station 2.5 stability bleed valve, to their fully-open fail-safe positions was achieved by means of spectral comparisons between nominal (normal valve operation) and seeded fault scenarios.
Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.
Lee, Donghwa; Myung, Hyun
2014-07-11
In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. In such environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environment then cause groups of false loop closings when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, which represent robot poses, are grouped according to grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
The Analysis of Object-Based Change Detection in Mining Area: a Case Study with Pingshuo Coal Mine
NASA Astrophysics Data System (ADS)
Zhang, M.; Zhou, W.; Li, Y.
2017-09-01
Accurate information on mining land use and land cover change is crucial for monitoring and environmental change studies. In this paper, a RapidEye remote sensing image (Map 2012) and a SPOT7 remote sensing image (Map 2015) of the Pingshuo mining area are selected to monitor changes using a combination of object-based classification and the change vector analysis method. We also used R for mining land classification in high-resolution remote sensing imagery, and found open source software both feasible and flexible for the task. The results show that (1) the classification of reclaimed mining land has high precision: the overall accuracy and kappa coefficient of the classification of the change region map were 86.67 % and 89.44 %. Object-based classification and change vector analysis, which can considerably improve monitoring accuracy, can evidently be used to monitor mining land, especially reclaimed mining land; (2) the vegetation area fell from 46 % to 40 % of the total area between 2012 and 2015, and most of it was transformed into arable land. The sum of arable land and vegetation area increased from 51 % to 70 %; meanwhile, built-up land increased to a certain degree, and part of the water area was transformed into arable land, but neither of these two changes is substantial. The results illustrate the transformation of the reclaimed mining area; at the same time, some land is still being converted to mining land, which shows that the mine is still operating and that mining land use and land cover are a dynamic process.
On resilience studies of system detection and recovery techniques against stealthy insider attacks
NASA Astrophysics Data System (ADS)
Wei, Sixiao; Zhang, Hanlin; Chen, Genshe; Shen, Dan; Yu, Wei; Pham, Khanh D.; Blasch, Erik P.; Cruz, Jose B.
2016-05-01
With the explosive growth of network technologies, insider attacks have become a major concern to business operations that largely rely on computer networks. To better detect insider attacks that marginally manipulate network traffic over time, and to recover the system from attacks, in this paper we implement a temporal-based detection scheme using the sequential hypothesis testing technique. Two hypothetical states are considered: the null hypothesis that the collected information comes from benign historical traffic, and the alternative hypothesis that the network is under attack. The objective of such a detection scheme is to recognize the change within the shortest time by comparing the two defined hypotheses. In addition, once an attack is detected, a server migration-based system recovery scheme can be triggered to restore the system to its state prior to the attack. To understand mitigation of insider attacks, a multi-functional web display of the detection analysis was developed for real-time analytics. Experiments using real-world traffic traces evaluate the effectiveness of the Detection System and Recovery (DeSyAR) scheme. The evaluation data validate that the detection scheme based on sequential hypothesis testing and the server migration-based system recovery scheme perform well in effectively detecting insider attacks and recovering the system under attack.
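The sequential hypothesis test at the heart of such a scheme can be sketched in a few lines. The following is a minimal illustration of Wald's sequential probability ratio test (SPRT) for deciding between a benign-traffic hypothesis and an attack hypothesis from a stream of traffic statistics; the function name, the Gaussian observation model, and the error rates are illustrative assumptions, not the paper's implementation.

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Sequential probability ratio test between two Gaussian hypotheses.

    H0: samples ~ N(mu0, sigma^2)  (benign historical traffic)
    H1: samples ~ N(mu1, sigma^2)  (network under attack)
    alpha/beta are the target false-alarm and miss probabilities.
    Returns ("H0" or "H1", number of samples consumed), or ("undecided", n).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)
```

The appeal of the sequential form for this application is that it stops as soon as the evidence is conclusive, which matches the paper's goal of recognizing the change within the shortest time.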
Detection of Tree Crowns Based on Reclassification Using Aerial Images and LIDAR Data
NASA Astrophysics Data System (ADS)
Talebi, S.; Zarea, A.; Sadeghian, S.; Arefi, H.
2013-09-01
Tree detection using aerial sensors has attracted many researchers over recent decades in fields including remote sensing and photogrammetry. This paper is intended to detect trees in complex city areas using aerial imagery and laser scanning data. Our methodology is a hierarchical unsupervised method consisting of several primitive operations and can be divided into three sections, of which the first uses aerial imagery and the second and third use laser scanner data. In the first section, a vegetation cover mask is created for both sunny and shadowed areas. In the second section, the Rate of Slope Change (RSC) is used to eliminate grasses. In the third section, a Digital Terrain Model (DTM) is obtained from the LiDAR data; using the DTM and the Digital Surface Model (DSM), we derive a normalized Digital Surface Model (nDSM), and objects lower than a specific height are eliminated. The three sections yield three result layers, which are multiplied to obtain the final result layer, and this layer is then smoothed by morphological operations. The result layer was submitted to ISPRS WG III/4 for evaluation. The evaluation shows that our method ranks well compared to the other participants' methods when assessed in terms of five indices: area-based completeness, area-based correctness, object-based completeness, object-based correctness, and boundary RMS. Being unsupervised and automatic, this method is improvable and could be integrated with other methods to obtain the best results.
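The nDSM step in the third section is a simple raster operation: subtract the terrain model from the surface model and discard anything below a height cutoff. A minimal sketch using plain nested lists (the 3 m cutoff and the function name are illustrative assumptions, not values from the paper):

```python
def normalized_dsm(dsm, dtm, min_height=3.0):
    """Compute nDSM = DSM - DTM and suppress low objects.

    dsm, dtm: 2D lists of elevations (metres) on the same grid.
    Cells whose above-ground height is below min_height are set to 0.0,
    leaving only tall objects such as tree crowns and buildings.
    """
    ndsm = []
    for dsm_row, dtm_row in zip(dsm, dtm):
        row = []
        for surface, terrain in zip(dsm_row, dtm_row):
            height = surface - terrain
            row.append(height if height >= min_height else 0.0)
        ndsm.append(row)
    return ndsm
```

In a real pipeline the same operation would run on rasters (e.g. NumPy arrays) and the surviving cells would be intersected with the vegetation mask and the RSC layer.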
Human visual system-based smoking event detection
NASA Astrophysics Data System (ADS)
Odetallah, Amjad D.; Agaian, Sos S.
2012-06-01
Human action (e.g. smoking, eating, and phoning) analysis is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could have a great impact on raising the level of safety of urban areas, public parks, airplanes, hospitals, schools, and other environments. The detection task is challenging since there is no prior knowledge about the object's shape, texture, and color; in addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small amounts of smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. Moreover, the developed method is capable of detecting small smoking events of uncertain actions with various cigarette sizes, colors, and shapes.
Simple Radiowave-Based Method For Measuring Peripheral Blood Flow Project
NASA Technical Reports Server (NTRS)
Oliva-Buisson, Yvette J.
2014-01-01
The project objective is to design small radio-frequency-based flow probes for the measurement of blood flow velocity in peripheral arteries such as the femoral artery and middle cerebral artery. The result will be the technological capability to measure peripheral blood flow rates and flow changes during various environmental stressors, such as microgravity, without contact with the individual being monitored. This technology may also lead to an easier method of detecting venous gas emboli during extravehicular activities.
Vergauwe, Evie; Cowan, Nelson
2015-01-01
We compared two contrasting hypotheses of how multi-featured objects are stored in visual working memory (vWM): as integrated objects or as independent features. A new procedure was devised to examine vWM representations of several concurrently-held objects and their features and our main measure was reaction time (RT), allowing an examination of the real-time search through features and/or objects in an array in vWM. Response speeds to probes with color, shape or both were studied as a function of the number of memorized colored shapes. Four testing groups were created by varying the instructions and the way in which probes with both color and shape were presented. The instructions explicitly either encouraged or discouraged the use of binding information and the task-relevance of binding information was further suggested by presenting probes with both color and shapes as either integrated objects or independent features. Our results show that the unit used for retrieval from vWM depends on the testing situation. Search was fully object-based only when all factors support that basis of search, in which case retrieving two features took no longer than retrieving a single feature. Otherwise, retrieving two features took longer than retrieving a single feature. Additional analyses of change detection latency suggested that, even though different testing situations can result in a stronger emphasis on either the feature dimension or the object dimension, neither one disappears from the representation and both concurrently affect change detection performance. PMID:25705873
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many moving object detection methods have been proposed, moving object extraction remains the core task in video surveillance. However, in complex real-world scenes, false detections, missed detections, and deficiencies resulting from cavities inside the object body still occur. To solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame-difference method with Gaussian mixture background subtraction is proposed in this paper. To make the detection more complete and accurate, image repair and morphological processing techniques, which provide spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame-difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
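The core combination described here, flagging a pixel only when the frame difference and the background model agree, can be sketched with a simple exponential running-average background standing in for the Gaussian mixture model. The function name, thresholds, and update rule are illustrative assumptions, not the paper's implementation:

```python
def detect_moving(frames, alpha=0.5, diff_thresh=10, bg_thresh=10):
    """Combine frame differencing with background subtraction.

    A pixel is flagged as moving only when BOTH cues agree:
      * |frame_t - frame_{t-1}| > diff_thresh   (frame difference)
      * |frame_t - background|  > bg_thresh     (background subtraction)
    The background is an exponential running average, updated only at
    pixels judged static so moving objects are not absorbed into it.
    frames: list of 2D grids of gray values.
    Returns one binary mask per frame after the first.
    """
    bg = [row[:] for row in frames[0]]          # initialise background
    prev = frames[0]
    masks = []
    for frame in frames[1:]:
        mask = []
        for i, row in enumerate(frame):
            mask_row = []
            for j, pix in enumerate(row):
                moving = (abs(pix - prev[i][j]) > diff_thresh and
                          abs(pix - bg[i][j]) > bg_thresh)
                mask_row.append(1 if moving else 0)
                if not moving:                  # only absorb static pixels
                    bg[i][j] = alpha * bg[i][j] + (1 - alpha) * pix
            mask.append(mask_row)
        masks.append(mask)
        prev = frame
    return masks
```

The paper's further steps, image repair and morphological processing, would then fill cavities inside the binary masks produced here.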
Detection of Greenhouse-Gas-Induced Climatic Change
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, P.D.; Wigley, T.M.L.
1998-05-26
The objective of this report is to assemble and analyze instrumental climate data and to develop and apply climate models as a basis for (1) detecting greenhouse-gas-induced climatic change and (2) validating General Circulation Models.
On improving IED object detection by exploiting scene geometry using stereo processing
NASA Astrophysics Data System (ADS)
van de Wouw, Dennis W. J. M.; Dubbelman, Gijs; de With, Peter H. N.
2015-03-01
Detecting changes in the environment with respect to an earlier data acquisition is important for several applications, such as finding Improvised Explosive Devices (IEDs). We explore and evaluate the benefit of depth sensing in the context of automatic change detection, where an existing monocular system is extended with a second camera in a fixed stereo setup. We then propose an alternative frame registration that exploits scene geometry, in particular the ground plane. Furthermore, change characterization is applied to localized depth maps to distinguish between 3D physical changes and shadows, which solves one of the main challenges of a monocular system. The proposed system is evaluated on real-world acquisitions containing geo-tagged test objects of 18 × 18 × 9 cm up to a distance of 60 meters. The proposed extensions lead to a significant reduction of the false-alarm rate by a factor of 3, while simultaneously improving the detection score by 5%.
Automated object detection and tracking with a flash LiDAR system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2016-10-01
The detection of objects or persons is a common task in the fields of environment surveillance, object observation, and danger defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter, real-time detection is hampered by the scanning character, and therefore the data distortion, of most LiDAR systems. This paper presents a solution for real-time data acquisition with a flash LiDAR sensor, with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view, and steering of the sensor's pan-tilt head. As a result, attention always stays focused on the object, independent of its behavior; even for highly volatile and rapid changes in the direction of motion, the object is kept in the field of view. The experimental setup used in this paper employs an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part with any other object detection algorithm, and thus to track nearly any object, for example a car, a boat, or a UAV at various distances.
Vehicle Localization by LIDAR Point Correlation Improved by Change Detection
NASA Astrophysics Data System (ADS)
Schlichting, A.; Brenner, C.
2016-06-01
LiDAR sensors are proven sensors for accurate vehicle localization. Instead of detecting and matching features in the LiDAR data, we want to use the entire information provided by the scanners. As dynamic objects, like cars, pedestrians, or even construction sites, could lead to wrong localization results, we use a change detection algorithm to detect these objects in the reference data. If an object occurs in a certain number of measurements at the same position, we mark it, and every point belonging to it, as static. In the next step, we merge the data of the single measurement epochs into one reference dataset, using only static points. We also use a classification algorithm to detect trees. For the online localization of the vehicle, we use simulated data of a vertically aligned automotive LiDAR sensor. As we only want to use static objects in this case as well, we use a random forest classifier to detect dynamic scan points online. Since the automotive data is derived from the LiDAR Mobile Mapping System, we are able to use the labelled objects from the reference data generation step to create the training data and further to detect dynamic objects online. Localization can then be done by a point-to-image correlation method using only static objects. We achieved a localization standard deviation of about 5 cm (position) and 0.06° (heading), and were able to successfully localize the vehicle in about 93 % of the cases along a trajectory of 13 km in Hannover, Germany.
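The marking of static objects by repeated observation can be illustrated with a simple occupancy-count scheme over gridded scan epochs; the grid representation, function name, and vote threshold below are illustrative assumptions, not the authors' implementation:

```python
def static_cells(epochs, min_count=3):
    """Label grid cells as static if occupied in enough measurement epochs.

    epochs: list of 2D occupancy grids (0/1) over the same area, one per
    measurement epoch. A cell occupied in at least min_count epochs is
    considered static; cells seen less often are treated as potentially
    dynamic and excluded from the merged reference dataset.
    Returns a 2D grid with 1 for static cells, 0 otherwise.
    """
    rows, cols = len(epochs[0]), len(epochs[0][0])
    counts = [[0] * cols for _ in range(rows)]
    for grid in epochs:                     # accumulate occupancy votes
        for i in range(rows):
            for j in range(cols):
                counts[i][j] += grid[i][j]
    return [[1 if counts[i][j] >= min_count else 0 for j in range(cols)]
            for i in range(rows)]
```

A parked car present in only one epoch fails the vote and is dropped, while a building facade observed in every epoch survives into the reference map.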
Denomme, Ryan C; Lu, Zhao; Martel, Sylvain
2007-01-01
The proposed Magnetotactic Bacteria (MTB) based bio-carrier has the potential to greatly improve pathogenic bacteria detection time, specificity, and sensitivity. Microbeads are attached to the MTB and are modified with a coating of an antibody or phage that is specific to the target pathogenic bacteria. Using magnetic fields, the modified MTB are swept through a solution and the target bacteria present become attached to the microbeads (due to the coating). Then, the MTB are brought to the detection region and the number of pathogenic bacteria is determined. The high swimming speed and controllability of the MTB make this method ideal for the fast detection of small concentrations of specific bacteria. This paper focuses on an impedimetric detection system that will be used to identify if a target bacterium is attached to the microbead. The proposed detection system measures changes in electrical impedance as objects (MTB, microbeads, and pathogenic bacteria) pass through a set of microelectrodes embedded in a microfluidic device. FEM simulation is used to acquire the optimized parameters for the design of such a system. Specifically, factors such as electrode/detection channel geometry, object size and position, which have direct effects on the detection sensitivity for a single bacterium or microparticle, are investigated. Polymer microbeads and the MTB system with an E. coli bacterium are considered to investigate their impedance variations. Furthermore, preliminary experimental data using a microfabricated microfluidic device connected to an impedance analyzer are presented.
Grasp Preparation Improves Change Detection for Congruent Objects
ERIC Educational Resources Information Center
Symes, Ed; Tucker, Mike; Ellis, Rob; Vainio, Lari; Ottoboni, Giovanni
2008-01-01
A series of experiments provided converging support for the hypothesis that action preparation biases selective attention to action-congruent object features. When visual transients are masked in so-called "change-blindness scenes," viewers are blind to substantial changes between 2 otherwise identical pictures that flick back and forth. The…
Scene incongruity and attention.
Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John
2017-02-01
Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using four different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention; rather, they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.
Figure-ground segregation by motion contrast and by luminance contrast.
Regan, D; Beverley, K I
1984-05-01
Some naturally camouflaged objects are invisible unless they move; their boundaries are then defined by motion contrast between object and background. We compared the visual detection of such camouflaged objects with the detection of objects whose boundaries were defined by luminance contrast. The summation field area is 0.16 deg², and the summation time constant is 750 msec for parafoveally viewed objects whose boundaries are defined by motion contrast; these values are, respectively, about 5 and 12 times larger than the corresponding values for objects defined by luminance contrast. The log detection threshold is proportional to the eccentricity for a camouflaged object of constant area. The effect of eccentricity on threshold is less for large objects than for small objects. The log summation field diameter for detecting camouflaged objects is roughly proportional to the eccentricity, increasing to about 20 deg at 32-deg eccentricity. In contrast to the 100:1 increase of summation area for detecting camouflaged objects, the temporal summation time constant changes by only 40% between eccentricities of 0 and 16 deg.
Accessing long-term memory representations during visual change detection.
Beck, Melissa R; van Lamsweerde, Amanda E
2011-04-01
In visual change detection tasks, providing a cue to the change location concurrent with the test image (post-cue) can improve performance, suggesting that, without a cue, not all encoded representations are automatically accessed. Our studies examined the possibility that post-cues can encourage the retrieval of representations stored in long-term memory (LTM). Participants detected changes in images composed of familiar objects. Performance was better when the cue directed attention to the post-change object. Supporting the role of LTM in the cue effect, the effect was similar regardless of whether the cue was presented during the inter-stimulus interval, concurrent with the onset of the test image, or after the onset of the test image. Furthermore, the post-cue effect and LTM performance were similarly influenced by encoding time. These findings demonstrate that monitoring the visual world for changes does not automatically engage LTM retrieval.
Automated baseline change detection -- Phases 1 and 2. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, E.
1997-10-31
The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD) based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract, DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS), and integrated with its electronics and software. This vehicle was designed especially to navigate in DOE Waste Storage Facilities. Initial system testing was performed at Fernald in June 1996. After some further development and more extensive integration, the prototype integrated system was installed and tested at the Radioactive Waste Management Facility (RWMC) at INEEL from April 1997 through the present (November 1997). The integrated system, composed of the ABCD imaging software and the IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI interface and on hard copy. Further work is needed to make the brassboard system more operationally robust.
A Calibrated H-alpha Index to Monitor Emission Line Objects
NASA Astrophysics Data System (ADS)
Hintz, Eric G.; Joner, M. D.
2013-06-01
Over an 8-year period we have developed a calibrated H-alpha index, similar to the more traditional H-beta index, based on spectrophotometric observations (Joner & Hintz, 2013) from the DAO 1.2-m Telescope. While developing the calibration for this filter set, we also obtained spectra of a number of emission-line systems such as high-mass X-ray binaries (HMXB), Be stars, and young stellar objects. From this work we find that the main-sequence stars fall along a very tight relation in the H-alpha/H-beta plane and that the emission-line objects are easily detected. We will present the overall locations of these emission-line objects, as well as the changes they experienced over the course of the project.
Object Recognition using Feature- and Color-Based Methods
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu; Stubberud, Allen
2008-01-01
An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method combines two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen feature-based method is known as adaptive principal-component analysis (APCA); the chosen color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of determining a region of interest (containing an object that one seeks to recognize) within an image. Another is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.
Multi-Complementary Model for Long-Term Tracking
Zhang, Deng; Zhang, Junchang; Xia, Chenyang
2018-01-01
In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170
NASA Astrophysics Data System (ADS)
Matikainen, L.; Karila, K.; Hyyppä, J.; Puttonen, E.; Litkey, P.; Ahokas, E.
2017-10-01
This article summarises our first results and experiences on the use of multispectral airborne laser scanner (ALS) data. Optech Titan multispectral ALS data over a large suburban area in Finland were acquired on three different dates in 2015-2016. We investigated the feasibility of the data from the first date for land cover classification and road mapping. Object-based analyses with segmentation and random forests classification were used. The potential of the data for change detection of buildings and roads was also demonstrated. The overall accuracy of land cover classification results with six classes was 96 % compared with validation points. The data also showed high potential for road detection, road surface classification and change detection. The multispectral intensity information appeared to be very important for automated classifications. Compared to passive aerial images, the intensity images have interesting advantages, such as the lack of shadows. Currently, we focus on analyses and applications with the multitemporal multispectral data. Important questions include, for example, the potential and challenges of the multitemporal data for change detection.
Karl, Jason W.; Gillan, Jeffrey K.; Barger, Nichole N.; Herrick, Jeffrey E.; Duniway, Michael C.
2014-01-01
The use of very high resolution (VHR; ground sampling distances < ∼5 cm) aerial imagery to estimate site vegetation cover and to detect changes from management has been well documented. However, as the purpose of monitoring is to document change over time, the ability to detect changes from imagery at the same or better level of accuracy and precision as those measured in situ must be assessed for image-based techniques to become reliable tools for ecosystem monitoring. Our objective with this study was to quantify the relationship between field-measured and image-interpreted changes in vegetation and ground cover measured one year apart in a Piñon and Juniper (P–J) woodland in southern Utah, USA. The study area was subject to a variety of fuel removal treatments between 2009 and 2010. We measured changes in plant community composition and ground cover along transects in a control area and three different treatments prior to and following P–J removal. We compared these measurements to vegetation composition and change based on photo-interpretation of ∼4 cm ground sampling distance imagery along similar transects. Estimates of cover were similar between field-based and image-interpreted methods in 2009 and 2010 for woody vegetation, no vegetation, herbaceous vegetation, and litter (including woody litter). Image-interpretation slightly overestimated cover for woody vegetation and no-vegetation classes (average difference between methods of 1.34% and 5.85%) and tended to underestimate cover for herbaceous vegetation and litter (average difference of −5.18% and 0.27%), but the differences were significant only for litter cover in 2009. 
Level of agreement between the field-measurements and image-interpretation was good for woody vegetation and no-vegetation classes (r between 0.47 and 0.89), but generally poorer for herbaceous vegetation and litter (r between 0.18 and 0.81) likely due to differences in image quality by year and the difficulty in discriminating fine vegetation and litter in imagery. Our results show that image interpretation to detect vegetation changes has utility for monitoring fuels reduction treatments in terms of woody vegetation and no-vegetation classes. The benefits of this technique are that it provides objective and repeatable measurements of site conditions that could be implemented relatively inexpensively and easily without the need for highly specialized software or technical expertise. Perhaps the biggest limitations of image interpretation to monitoring fuels treatments are challenges in estimating litter and herbaceous vegetation cover and the sensitivity of herbaceous cover estimates to image quality and shadowing.
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA), which represents a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade, with the aim of developing more intelligent interpretation and analysis methods. The prediction performance and stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images that incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and these regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for super-pixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, change possibilities are calculated for the super-pixels, and the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral and Gabor features of each super-pixel are extracted. Finally, super-pixel-based CD is implemented by applying RF to these samples. Experimental results on Ziyuan-3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, and confirm the feasibility and effectiveness of the proposed approach.
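The initial difference image in such pipelines comes from change vector analysis: for each pixel, the magnitude of the difference between its spectral vectors at the two dates. A minimal sketch of plain CVA follows (the robust RCVA variant additionally searches a pixel neighbourhood to compensate for misregistration, which is omitted here; the function name and data layout are illustrative assumptions):

```python
import math

def change_vector_magnitude(img_t1, img_t2):
    """Per-pixel change vector analysis (CVA) between two co-registered
    multi-band images.

    img_t1, img_t2: 2D grids whose cells are per-pixel band tuples,
    e.g. (red, green, blue, nir) reflectances at each date.
    Returns a 2D grid of change magnitudes; thresholding it yields a
    binary difference image for subsequent saliency-guided analysis.
    """
    return [[math.sqrt(sum((b2 - b1) ** 2 for b1, b2 in zip(p1, p2)))
             for p1, p2 in zip(row1, row2)]
            for row1, row2 in zip(img_t1, img_t2)]
```

A pixel whose spectrum is unchanged yields magnitude 0, so the subsequent binarization cleanly separates candidate change regions from stable background.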
Monitoring of Building Construction by 4D Change Detection Using Multi-temporal SAR Images
NASA Astrophysics Data System (ADS)
Yang, C. H.; Pang, Y.; Soergel, U.
2017-05-01
Monitoring urban changes is important for city management, urban planning, updating of cadastral maps, etc. In contrast to conventional field surveys, which are usually expensive and slow, remote sensing techniques are fast and cost-effective alternatives. Spaceborne synthetic aperture radar (SAR) sensors provide radar images captured rapidly over vast areas at fine spatiotemporal resolution. In addition, the active microwave sensors are capable of day-and-night vision and independent of weather conditions. These advantages make multi-temporal SAR images suitable for scene monitoring. Persistent scatterer interferometry (PSI) detects and analyses PS points, which are characterized by strong, stable, and coherent radar signals throughout a SAR image sequence and can be regarded as substructures of buildings in built-up cities. Attributes of PS points, for example, deformation velocities, are derived and used for further analysis. Based on PSI, a 4D change detection technique has been developed to detect disappearance and emergence of PS points (3D) at specific times (1D). In this paper, we apply this 4D technique to the centre of Berlin, Germany, to investigate its feasibility and application for construction monitoring. The aims of the three case studies are to monitor construction progress, business districts, and single buildings, respectively. The disappearing and emerging substructures of the buildings are successfully recognized along with their occurrence times. The changed substructures are then clustered into single construction segments based on DBSCAN clustering and α-shape outlining for object-based analysis. Compared with the ground truth, these spatiotemporal results have proven able to provide more detailed information for construction monitoring.
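The DBSCAN step used above to group changed PS points into construction segments can be sketched with a minimal density-based clustering routine. This is a generic sketch of the standard algorithm, not the authors' code; `eps` and `min_pts` are the usual neighborhood radius and core-point threshold.

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one cluster label per point (-1 = noise).

    A point is a core point if at least min_pts points (itself included)
    lie within distance eps; clusters grow by expanding core points.
    """
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps * eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # absorb former noise as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nj = neighbors(j)
            if len(nj) >= min_pts:   # only core points expand the cluster
                queue.extend(nj)
    return labels
```

Applied to 2-D PS point coordinates, each non-noise label would correspond to one candidate construction segment, whose outline could then be traced (e.g. by an α-shape).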
Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion
NASA Astrophysics Data System (ADS)
Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.
2017-03-01
Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
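The marginal-likelihood comparison described above has a simple closed form in an idealized setting. The sketch below is an assumption-laden toy version, not the paper's full procedure: per-epoch flux estimates carry independent Gaussian noise of known sigma, the object-present hypothesis puts a zero-mean Gaussian prior (width tau) on a constant flux, and direction matching is ignored. Marginalizing the flux analytically gives the log Bayes factor.

```python
import math

def log_bayes_factor(fluxes, sigma, tau):
    """Log Bayes factor for 'one constant-flux object' vs 'no object'.

    fluxes: per-epoch flux estimates; under the no-object hypothesis
    they are independent N(0, sigma^2) noise. Under the object-present
    hypothesis the common flux has a N(0, tau^2) prior, which can be
    integrated out in closed form (the measurements become jointly
    Gaussian with covariance sigma^2*I + tau^2*ones).
    """
    n = len(fluxes)
    s = sum(fluxes)
    return (-0.5 * math.log(1.0 + n * tau ** 2 / sigma ** 2)
            + 0.5 * tau ** 2 * s ** 2 / (sigma ** 2 * (sigma ** 2 + n * tau ** 2)))
```

Consistent sub-threshold detections accumulate evidence across epochs (the log Bayes factor grows with the number of epochs), while pure noise drifts negative, which is the core of the catalog-fusion strategy.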
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of disk objects of different sizes and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
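The geometric idea behind the projection-based pose estimate can be illustrated with the simplest case. Under the spherical-head model, a horizontal distance on the face (here, the distance between the eyes) projects onto the viewing plane scaled by the cosine of the yaw angle; the sketch below is that one-equation toy version, not the paper's full eyes-mouth triangle derivation, and the function name is hypothetical.

```python
import math

def estimate_yaw(eye_dist_observed, eye_dist_frontal):
    """Estimate head yaw (radians) from the projected horizontal
    distance between the eyes, assuming a spherical head rotating
    about the vertical axis: observed = frontal * cos(yaw).
    The ratio is clamped to [-1, 1] to guard against measurement noise."""
    ratio = max(-1.0, min(1.0, eye_dist_observed / eye_dist_frontal))
    return math.acos(ratio)
```

A frontal face (observed distance equals the frontal distance) gives zero yaw; a halved projected distance corresponds to a 60-degree turn.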
Feature extraction for change analysis in SAR time series
NASA Astrophysics Data System (ADS)
Boldt, Markus; Thiele, Antje; Schulz, Karsten; Hinz, Stefan
2015-10-01
In remote sensing, change detection represents a broad field of research. If time series data are available, change detection can be used for monitoring applications. These applications require regular image acquisitions at an identical time of day over a defined period. Among remote sensing sensors, radar is especially well suited for applications requiring regularity, since it is independent of most weather and atmospheric influences; furthermore, being independent of daylight, the time of day of acquisition plays no role. Since 2007, the German SAR (Synthetic Aperture Radar) satellite TerraSAR-X (TSX) has permitted the acquisition of high-resolution radar images suitable for the analysis of dense built-up areas. In a former study, we presented a change analysis of the Stuttgart (Germany) airport. The aim of this study is the categorization of the changes detected in the time series. This categorization is motivated by the fact that it is insufficient merely to state where and when a specific area has changed; at least as important is what caused the change. The focus is set on the analysis of so-called high activity areas (HAA), representing areas that change at least four times over the investigated period. As a first step in categorizing these HAAs, the matching HAA changes (blobs) have to be identified. Afterwards, operating at this object-based blob level, several features are extracted, comprising shape-based, radiometric, statistical, and morphological values and one context feature based on a segmentation of the HAAs. This segmentation builds on morphological differential attribute profiles (DAPs). Seven context classes are established: urban, infrastructure, rural stable, rural unstable, natural, water, and unclassified. A specific HA blob is assigned to one of these classes by analyzing the CovAmCoh time series signature of the surrounding segments.
Surrounding GIS information is also included to verify the CovAmCoh-based context assignment. In this paper, the focus is set on the features extracted for a later change categorization procedure.
Li, Michael H; Mestre, Tiago A; Fox, Susan H; Taati, Babak
2018-05-05
Technological solutions for quantifying Parkinson's disease (PD) symptoms may provide an objective means to track response to treatment, including side effects such as levodopa-induced dyskinesia. Vision-based systems are advantageous as they do not require physical contact with the body and have minimal instrumentation compared to wearables. We have developed a vision-based system to quantify a change in dyskinesia as reported by patients using 2D videos of clinical assessments during acute levodopa infusions. Nine participants with PD completed a total of 16 levodopa infusions, where they were asked to report important changes in dyskinesia (i.e. onset and remission). Participants were simultaneously rated using the UDysRS Part III (from video recordings analyzed post-hoc). Body joint positions and movements were tracked using a state-of-the-art deep learning pose estimation algorithm applied to the videos. 416 features (e.g. kinematics, frequency distribution) were extracted to characterize movements. The sensitivity and specificity of each feature to patient-reported changes in dyskinesia severity was computed and compared with physician-rated results. Features achieved similar or superior performance to the UDysRS for detecting the onset and remission of dyskinesia. The best AUC for detecting onset of dyskinesia was 0.822 and for remission of dyskinesia was 0.958, compared to 0.826 and 0.802 for the UDysRS. Video-based features may provide an objective means of quantifying the severity of levodopa-induced dyskinesia, and have responsiveness as good as or better than the clinically rated UDysRS. The results demonstrate encouraging evidence for future integration of video-based technology into clinical research and eventually clinical practice. Copyright © 2018 Elsevier Ltd. All rights reserved.
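The AUC figures reported above summarize how well a single feature separates dyskinetic from non-dyskinetic segments. As a reference, the area under the ROC curve equals the Mann-Whitney probability that a randomly chosen positive case outscores a randomly chosen negative case; the sketch below computes it directly from two score lists (a generic definition, not the paper's evaluation code).

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive outscores a random
    negative, with ties counting one half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Perfect separation gives 1.0, chance-level scores give 0.5, and a systematically inverted feature gives 0.0; the O(n*m) pairwise form is fine at the sample sizes involved here.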
NASA Astrophysics Data System (ADS)
Wang, Zhihua; Yang, Xiaomei; Lu, Chen; Yang, Fengshuo
2018-07-01
Automatic updating of land use/cover change (LUCC) databases using high spatial resolution images (HSRI) is important for environmental monitoring and policy making, especially for coastal areas, which connect land and coast and tend to change frequently. Many object-based change detection methods have been proposed, especially those combining historical LUCC data with HSRI. However, the scale parameter(s) for segmenting the temporal image series, which directly determine the average object size, are hard to choose without expert intervention. The samples transferred from historical LUCC data likewise need expert intervention to avoid insufficient or wrong samples. With respect to choosing the scale parameter(s), a Scale Self-Adapting Segmentation (SSAS) approach, based on exponential sampling of the scale parameter and location of the local maximum of a weighted local variance, is proposed to address scale selection when segmenting images constrained by LUCC for detecting changes. With respect to sample transfer, Knowledge Transfer (KT), in which a classifier trained on historical images with LUCC is applied to the classification of updated images, is also proposed. Comparison experiments were conducted in a coastal area of Zhujiang, China, using SPOT 5 images acquired in 2005 and 2010. The results reveal that (1) SSAS can segment images effectively without expert intervention, and (2) KT can reach the maximum sample-transfer accuracy without expert intervention. The strategy SSAS + KT is a good choice if the historical image matches the LUCC data and the historical and updated images come from the same source.
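The scale-selection rule in SSAS (exponential sampling of the scale parameter, then locating a local maximum of a weighted local variance) can be sketched generically. Everything here is an assumption-level illustration: the quality criterion is passed in as a callable, and the sampling bounds, step count, and fallback to the global maximum are choices of this sketch, not of the paper.

```python
def select_scale(weighted_local_variance, s_min=10.0, s_max=1000.0, steps=20):
    """Pick a segmentation scale by sampling scales exponentially
    between s_min and s_max and returning the first local maximum of
    the supplied weighted-local-variance function; if the curve is
    monotone, fall back to the global maximum over the samples."""
    scales = [s_min * (s_max / s_min) ** (i / (steps - 1)) for i in range(steps)]
    values = [weighted_local_variance(s) for s in scales]
    for i in range(1, steps - 1):
        if values[i - 1] < values[i] >= values[i + 1]:
            return scales[i]
    return scales[max(range(steps), key=values.__getitem__)]
```

Exponential sampling spends the sample budget evenly across orders of magnitude, which suits a parameter whose useful range spans two decades.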
3D-information fusion from very high resolution satellite sensors
NASA Astrophysics Data System (ADS)
Krauss, T.; d'Angelo, P.; Kuschk, G.; Tian, J.; Partovi, T.
2015-04-01
In this paper we show the pre-processing and the potential for environmental applications of very high resolution (VHR) satellite stereo imagery, such as that from WorldView-2 or Pléiades, with ground sampling distances (GSD) of half a metre to a metre. To process such data, first a dense digital surface model (DSM) has to be generated. From this, a digital terrain model (DTM) representing the ground and a so-called normalized digital elevation model (nDEM) representing off-ground objects are derived. Combining these elevation-based data with a spectral classification allows detection and extraction of objects from the satellite scenes. Besides object extraction, the DSM and DTM can directly be used for simulation and monitoring of environmental issues. Examples are the simulation of floods, building-volume and population estimation, simulation of road noise, wave propagation for cellphones, wind and light for estimating renewable energy sources, 3D change detection, earthquake preparedness and crisis relief, urban development and the sprawl of informal settlements, and much more. Outside urban areas, too, volume information literally brings a new dimension to Earth observation tasks, such as volume estimation of forests and illegal logging, the volume of (illegal) open-pit mining activities, estimation of flood or tsunami risks, dike planning, etc. In this paper we present the pre-processing from the original level-1 satellite data to digital surface models (DSMs), corresponding VHR ortho images, and derived digital terrain models (DTMs). From these components we show how monitoring and decision-fusion-based 3D change detection can be realized using different acquisitions. The results are analyzed and assessed to derive quality parameters for the presented method. Finally, the usability of 3D information fusion from VHR satellite imagery is discussed and evaluated.
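The DSM-to-DTM-to-nDEM chain described above can be sketched in one dimension. A common way to approximate the ground surface is a grayscale morphological opening of the DSM (erosion then dilation with a flat structuring element wider than the largest building); the nDEM is then simply DSM minus DTM. This is a textbook-style sketch under that assumption, not the authors' processing chain.

```python
def normalized_dem(dsm, window):
    """Derive a DTM by grayscale morphological opening of a 1-D DSM
    profile (erosion then dilation with a flat structuring element of
    half-width `window`), and return the nDEM = DSM - DTM, which
    contains the off-ground objects. The structuring element must be
    wider than the widest building for the opening to remove it."""
    n = len(dsm)

    def erode(a):
        return [min(a[max(0, i - window):i + window + 1]) for i in range(n)]

    def dilate(a):
        return [max(a[max(0, i - window):i + window + 1]) for i in range(n)]

    dtm = dilate(erode(dsm))
    return [h - g for h, g in zip(dsm, dtm)]
```

On a flat terrain profile with one building, the opening flattens the building away, so the nDEM is the building height over its footprint and zero elsewhere; real 2-D pipelines use the same idea with a 2-D structuring element and slope-adaptive refinements.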
Finite Element Based Structural Damage Detection Using Artificial Boundary Conditions
2007-09-01
C. (2005). Elementary Linear Algebra. New York: John Wiley and Sons. Avitable, Peter (2001, January) Experimental Modal Analysis, A Simple Non...variables under consideration. Frequency sensitivities are the basis for a linear approximation to compute the change in the natural frequencies of a...THEORY The general problem statement for a non-linear constrained optimization problem is: to minimize f(x) (the objective function) subject to
NASA Astrophysics Data System (ADS)
Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.
2016-05-01
Identification of impact craters is a primary requirement to study past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help to understand the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.
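A shape-and-morphometry ruleset of the kind described above can be illustrated with two of the most common criteria: near-circular outline and measurable depth. The thresholds, feature choice, and function name below are hypothetical, a sketch of the ruleset idea rather than the paper's actual rules.

```python
import math

def is_crater_candidate(area, perimeter, depth,
                        min_circularity=0.8, min_depth=2.0):
    """Classify a segmented DTM object as an impact-crater candidate
    using simple shape and morphometric criteria: an outline close to
    circular (circularity = 4*pi*A / P^2, equal to 1 for a perfect
    circle) and a rim-to-floor depth above a minimum (in metres)."""
    circularity = 4.0 * math.pi * area / (perimeter ** 2)
    return circularity >= min_circularity and depth >= min_depth
```

A circular object passes, while an elongated object of the same area fails on circularity and a shallow depression fails on depth; in an OBIA system such rules would be evaluated per segmented object.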
NASA Astrophysics Data System (ADS)
Tang, Xiangyang; Yang, Yi; Tang, Shaojie
2013-03-01
Under the framework of a model observer with signal and background exactly known (SKE/BKE), we investigate the detectability of differential phase contrast CT compared with that of conventional attenuation-based CT. Using the channelized Hotelling observer and the radially symmetric difference-of-Gaussians channel template, we investigate the detectability index and its variation over the dimension of object and detector cells. The preliminary data show that differential phase contrast CT outperforms conventional attenuation-based CT significantly in the detectability index when both the object to be detected and the detector cell used for data acquisition are relatively small. However, the dominance of differential phase contrast CT in the detectability index diminishes with increasing dimension of either object or detector cell, and virtually disappears when the dimension of object or detector cell approaches a threshold, respectively. It is hoped that the preliminary data reported in this paper may provide insightful understanding of the characteristics of differential phase contrast CT in the detectability index and its comparison with conventional attenuation-based CT.
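The detectability index of a channelized Hotelling observer reduces to a very simple form under one extra assumption: if the channel outputs are uncorrelated, d'^2 is the sum over channels of the squared mean signal difference divided by the channel variance. The sketch below implements only that simplified diagonal case (the general case needs the full channel covariance matrix), with hypothetical inputs.

```python
import math

def detectability_index(delta_means, variances):
    """Detectability index d' of a channelized Hotelling observer,
    under the simplifying assumption of uncorrelated channel outputs:
    d'^2 = sum_c (Delta v_c)^2 / var_c, where Delta v_c is the mean
    signal-present minus signal-absent output of channel c and var_c
    its output variance."""
    return math.sqrt(sum(d * d / v for d, v in zip(delta_means, variances)))
```

With equal channel variances this reduces to the Euclidean norm of the mean-difference vector, and d' scales linearly with signal amplitude, which is the linearity with quantum SNR theory that such studies check against.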
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: a vehicle detection and recognition sub-system and a traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of a single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for a vehicle and relative distance from traffic signs. Experimental results demonstrated robust and accurate real-time object detection and recognition over thousands of image frames.
An Underwater Target Detection System for Electro-Optical Imagery Data
2010-06-01
detection and segmentation of underwater mine-like objects in the EO images captured with a CCD-based image sensor. The main focus of this research is to...develop a robust detection algorithm that can be used to detect low contrast and partial underwater objects from the EO imagery with low false alarm rate...underwater target detection I. INTRODUCTION Automatic detection and recognition of underwater objects from EO imagery poses a serious challenge due to poor
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2016-09-01
One of the urgent security problems is the detection of objects hidden inside the human body. Obviously, for safety reasons one cannot use X-rays widely and often for such object detection. Three years ago, we demonstrated the principal possibility of seeing a temperature trace on the human body skin, induced by eating food or drinking water, using a passive THz camera. However, this camera is very expensive, so for practice it would be very convenient to use an IR camera for this purpose. In contrast to a passive THz camera, the IR camera does not allow one to see an object under clothing if the image produced by the camera is used directly. Of course, this is a big disadvantage for a security solution based on the IR camera. To overcome this disadvantage we develop a novel approach for computer processing of IR camera images. It allows us to increase the temperature resolution of the IR camera as well as the effective sensitivity of the human eye. As a consequence, it becomes possible to see changes of human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and follow the temperature trace on the skin caused by the changing temperature inside the human body. Some experiments were made with measurements of a body temperature covered by a T-shirt. The results shown are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without X-rays.
Wagner, Tyler; Irwin, Brian J.; Bence, James R.; Hayes, Daniel B.
2016-01-01
Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
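The statistical-power point above can be made concrete with a small Monte Carlo sketch: simulate an annual index with a linear trend plus noise, fit an OLS slope each time, and count how often the trend is declared significant. This is an illustrative simulation under stated assumptions (independent Gaussian noise, a normal approximation to the slope t-test), not the analyses reviewed in the paper.

```python
import random

def trend_power(slope, sigma, n_years, n_sims=2000, z_crit=1.96, seed=1):
    """Monte Carlo estimate of the power to detect a linear temporal
    trend: simulate y_t = slope*t + N(0, sigma^2) for t = 0..n_years-1,
    fit an ordinary-least-squares slope, and count how often
    |slope_hat / SE| exceeds z_crit (normal approximation to the t-test)."""
    rng = random.Random(seed)
    t = list(range(n_years))
    t_mean = sum(t) / n_years
    sxx = sum((x - t_mean) ** 2 for x in t)
    hits = 0
    for _ in range(n_sims):
        y = [slope * x + rng.gauss(0.0, sigma) for x in t]
        y_mean = sum(y) / n_years
        b = sum((x - t_mean) * (v - y_mean) for x, v in zip(t, y)) / sxx
        resid = [v - y_mean - b * (x - t_mean) for x, v in zip(t, y)]
        s2 = sum(r * r for r in resid) / (n_years - 2)
        se = (s2 / sxx) ** 0.5
        if se > 0 and abs(b) / se > z_crit:
            hits += 1
    return hits / n_sims
```

Running this for trends that are small relative to the noise shows the low power the authors describe: a decade of annual surveys reliably detects only trends that are large relative to interannual variability.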
Multisensor Fusion for Change Detection
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.
2005-12-01
Combining sensors that record different properties of a 3-D scene leads to complementary and redundant information. If fused properly, a more robust and complete scene description becomes available. Moreover, fusion facilitates automatic procedures for object reconstruction and modeling. For example, aerial imaging sensors, hyperspectral scanning systems, and airborne laser scanning systems generate complementary data. We describe how data from these sensors can be fused for such diverse applications as mapping surface erosion and landslides, reconstructing urban scenes, monitoring urban land use and urban sprawl, and deriving velocities and surface changes of glaciers and ice sheets. An absolute prerequisite for successful fusion is a rigorous co-registration of the sensors involved. We establish a common 3-D reference frame by using sensor-invariant features. Such features are caused by the same object space phenomena and are extracted in multiple steps from the individual sensors. After extracting, segmenting, and grouping the features into more abstract entities, we discuss how to automatically establish correspondences. This is followed by a brief description of rigorous mathematical models suitable for dealing with linear and area features. In contrast to traditional, point-based registration methods, lineal and areal features lend themselves to a more robust and more accurate registration. More importantly, the chances of automating the registration process increase significantly. The result of the co-registration of the sensors is a unique transformation between the individual sensors and the object space. This makes spatial reasoning about extracted information more versatile; reasoning can be performed in sensor space or in 3-D space, where domain knowledge about features and objects constrains reasoning processes, reduces the search space, and helps to make the problem well-posed.
We demonstrate the feasibility of the proposed multisensor fusion approach with detecting surface elevation changes on the Byrd Glacier, Antarctica, with aerial imagery from 1980s and ICESat laser altimetry data from 2003-05. Change detection from such disparate data sets is an intricate fusion problem, beginning with sensor alignment, and on to reasoning with spatial information as to where changes occurred and to what extent.
DeepID-Net: Deformable Deep Convolutional Neural Networks for Object Detection.
Ouyang, Wanli; Zeng, Xingyu; Wang, Xiaogang; Qiu, Shi; Luo, Ping; Tian, Yonglong; Li, Hongsheng; Yang, Shuo; Wang, Zhe; Li, Hongyang; Loy, Chen Change; Wang, Kun; Yan, Junjie; Tang, Xiaoou
2016-07-07
In this paper, we propose deformable deep convolutional neural networks for generic object detection. This new deep learning object detection framework has innovations in multiple aspects. In the proposed new deep architecture, a new deformation constrained pooling (def-pooling) layer models the deformation of object parts with geometric constraint and penalty. A new pre-training strategy is proposed to learn feature representations more suitable for the object detection task and with good generalization capability. By varying the net structures and training strategies, and by adding and removing key components in the detection pipeline, a set of models with large diversity is obtained, which significantly improves the effectiveness of model averaging. The proposed approach improves the mean average precision obtained by RCNN [16], which was the state-of-the-art, from 31% to 50.3% on the ILSVRC2014 detection test set. It also outperforms the winner of ILSVRC2014, GoogLeNet, by 6.1%. Detailed component-wise analysis is also provided through extensive experimental evaluation, which provides a global view for people to understand the deep learning object detection pipeline.
The Role of Attention in the Maintenance of Feature Bindings in Visual Short-term Memory
ERIC Educational Resources Information Center
Johnson, Jeffrey S.; Hollingworth, Andrew; Luck, Steven J.
2008-01-01
This study examined the role of attention in maintaining feature bindings in visual short-term memory. In a change-detection paradigm, participants attempted to detect changes in the colors and orientations of multiple objects; the changes consisted of new feature values in a feature-memory condition and changes in how existing feature values were…
Neural-net-based image matching
NASA Astrophysics Data System (ADS)
Jerebko, Anna K.; Barabanov, Nikita E.; Luciv, Vadim R.; Allinson, Nigel M.
2000-04-01
The paper describes a neural-based method for matching spatially distorted image sets. The matching of partially overlapping images is important in many applications: integrating information from images formed in different spectral ranges, detecting changes in a scene, and identifying objects of differing orientations and sizes. Our approach consists of extracting contour features from both images, describing the contour curves as sets of line segments, comparing these sets, determining the corresponding curves and their common reference points, and calculating the image-to-image coordinate transformation parameters on the basis of the most successful variant of the derived curve relationships. The main steps are performed by custom neural networks. The algorithms described in this paper have been successfully tested on a large set of images of the same terrain taken in different spectral ranges, at different seasons, and rotated by various angles. In general, this experimental verification indicates that the proposed method for image fusion allows the robust detection of similar objects in noisy, distorted scenes where traditional approaches often fail.
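The final step above, computing the image-to-image coordinate transformation from corresponding reference points, has a compact closed form when the transformation is a 2-D similarity (rotation, uniform scale, translation). The least-squares sketch below uses complex-number arithmetic for that case; it illustrates the transformation-fitting step only, not the neural matching stages.

```python
def fit_similarity(src, dst):
    """Least-squares 2-D similarity transform (rotation + uniform
    scale + translation) mapping points src onto dst. Representing
    (x, y) as x + iy, the model is z_dst = a*z_src + b, where the
    complex a encodes scale*rotation and b the translation."""
    zs = [complex(x, y) for x, y in src]
    zd = [complex(x, y) for x, y in dst]
    n = len(zs)
    ms = sum(zs) / n                      # centroids
    md = sum(zd) / n
    num = sum((d - md) * (s - ms).conjugate() for s, d in zip(zs, zd))
    den = sum(abs(s - ms) ** 2 for s in zs)
    a = num / den
    b = md - a * ms
    return a, b  # apply as: z_dst = a * z_src + b
```

Given matched curve reference points from the two images, `abs(a)` is the relative scale, the argument of `a` the rotation angle, and `b` the translation; residuals against this fit can also flag wrong correspondences.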
Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery
NASA Astrophysics Data System (ADS)
Qin, Rongjun
2014-10-01
Due to the fast development of the urban environment, the need for efficient maintenance and updating of 3D building models is ever increasing. Change detection is an essential step to spot the changed area for data (map/3D model) updating and urban monitoring. Traditional methods based on 2D images are no longer suitable for change detection at building scale, owing to the increased spectral variability of building roofs and the larger perspective distortion of very high resolution (VHR) imagery. Change detection in 3D is increasingly being investigated using airborne laser scanning data or matched Digital Surface Models (DSM), but few studies have addressed change detection on 3D city models with VHR images, which is more informative but also more complicated. This is due to the fact that the 3D models are abstracted geometric representations of the urban reality, while the VHR images record everything. In this paper, a novel method is proposed to detect changes directly on LOD (Level of Detail) 2 building models with VHR spaceborne stereo images from a different date, with particular focus on addressing the special characteristics of the 3D models. In the first step, the 3D building models are projected onto a raster grid, encoded with building object, terrain object, and planar faces. The DSM is extracted from the stereo imagery by hierarchical semi-global matching (SGM). In the second step, a multi-channel change indicator is extracted between the 3D models and stereo images, considering the inherent geometric consistency (IGC), height difference, and texture similarity for each planar face. Each channel of the indicator is then clustered with the Self-organizing Map (SOM), with "change", "non-change" and "uncertain change" status labeled through a voting strategy. The "uncertain changes" are then determined with a Markov Random Field (MRF) analysis considering the geometric relationship between faces.
In the third step, buildings are extracted by combining the multispectral images and the DSM with morphological operators, and the new buildings are determined by excluding the verified unchanged buildings from the second step. Both a synthetic experiment with Worldview-2 stereo imagery and a real experiment with IKONOS stereo imagery are carried out to demonstrate the effectiveness of the proposed method. It is shown that the proposed method can be applied as an effective way to monitor building changes, as well as to update 3D models from one epoch to another.
A Study of Dim Object Detection for the Space Surveillance Telescope
2013-03-21
ENG-13-M-32 Abstract: Current methods of dim object detection for space surveillance make use of a Gaussian log-likelihood-ratio-test-based... quantitatively comparing the efficacy of two methods for dim object detection, termed in this paper the point detector and the correlator, both of which rely... applications. It is used in national defense for detecting satellites. It is used to detect space debris, which threatens both civilian and
Evaluation of speaker de-identification based on voice gender and age conversion
NASA Astrophysics Data System (ADS)
Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich
2018-03-01
Two basic tasks are covered in this paper. The first consists of the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after voice de-identification has been applied. The experiments performed confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. The original speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.
Road detection and buried object detection in elevated EO/IR imagery
NASA Astrophysics Data System (ADS)
Kennedy, Levi; Kolba, Mark P.; Walters, Joshua R.
2012-06-01
To assist the warfighter in visually identifying potentially dangerous roadside objects, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has developed an elevated video sensor system testbed for data collection. This system provides color and mid-wave infrared (MWIR) imagery. Signal Innovations Group (SIG) has developed an automated processing capability that detects the road within the sensor field of view and identifies potentially threatening buried objects within the detected road. The road detection algorithm leverages system metadata to project the collected imagery onto a flat ground plane, allowing for more accurate detection of the road as well as the direct specification of realistic physical constraints on the shape of the detected road. Once the road has been detected in an image frame, a buried object detection algorithm is applied to search for threatening objects within the detected road space. The buried object detection algorithm leverages textural and pixel intensity-based features to detect potential anomalies and then classifies them as threatening or non-threatening objects. Both the road detection and the buried object detection algorithms have been developed to facilitate their implementation in real time in the NVESD system.
Chládek, J; Brázdil, M; Halámek, J; Plešinger, F; Jurák, P
2013-01-01
We present an off-line analysis procedure for exploring brain activity recorded from intra-cerebral electroencephalographic data (SEEG). The objective is to determine the statistical differences between different types of stimulations in the time-frequency domain. The procedure is based on computing relative signal power change and subsequent statistical analysis. An example of characteristic statistically significant event-related de/synchronization (ERD/ERS) detected across different frequency bands following different oddball stimuli is presented. The method is used for off-line functional classification of different brain areas.
1990-09-01
Step 1: Designation of Site-Specific Managerial Needs and Objectives. Step 2: Identification of Physical and Chemical Parameters... improved as experience dictates. Emphasis is placed on the establishment of concise objectives and hypotheses, the use of multidisciplinary approaches to... resulting monitoring can thus focus on the detection of changes in specific conditions rather than identifying any or all detectable changes. A monitoring
Evaluation of a radar-based proximity warning system for off-highway dump trucks.
Ruff, Todd
2006-01-01
A radar-based proximity warning system was evaluated by researchers at the Spokane Research Laboratory of the National Institute for Occupational Safety and Health to determine if the system would be effective in detecting objects in the blind spots of an off-highway dump truck. An average of five fatalities occur each year in surface mines as a result of an equipment operator not being aware of a smaller vehicle, person or change in terrain near the equipment. Sensor technology that can detect such obstacles and that also is designed for surface mining applications is rare. Researchers worked closely with the radar system manufacturer to test and modify the system on large, off-highway dump trucks at a surface mine over a period of 2 years. The final system was thoroughly evaluated by recording video images from a camera on the rear of the truck and by recording all alarms from the rear-mounted radar. Data show that the system reliably detected small vehicles, berms, people and other equipment. However, alarms from objects that posed no immediate danger were common, supporting the assertion that sensor-based systems for proximity warning should be used in combination with other devices, such as cameras, that would allow the operator to check the source of any alarm.
Faint Object Detection in Multi-Epoch Observations via Catalog Data Fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budavári, Tamás; Szalay, Alexander S.; Loredo, Thomas J.
Astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection and describes an incremental strategy for separating real objects from artifacts in ongoing surveys. The idea is to produce low-threshold single-epoch catalogs and to accumulate information across epochs. This is in contrast to more conventional strategies based on co-added or stacked images. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting that there is no object or one object in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a Gaussian noise setting, comparing results based on single and stacked exposures to results based on a series of single-epoch catalog summaries. Our procedure amounts to generalized cross-matching: it is the product of a factor accounting for the matching of the estimated fluxes of the candidate sources and a factor accounting for the matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalogs can detect sources with similar sensitivity and selectivity compared to stacking. The probabilistic cross-matching framework underlying our approach plays an important role in maintaining detection sensitivity and points toward generalizations that could accommodate variability and complex object structure.
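A toy version of the flux-matching factor described above can be written down for the Gaussian setting the abstract studies. This sketch assumes Gaussian per-epoch flux estimates with known noise `sigma`, a zero-mean Gaussian flux prior of width `tau` under the object-present hypothesis, and it omits the direction-matching factor entirely; the function name and parameters are illustrative, not from the paper.

```python
import math

def log_bayes_factor(fluxes, sigma, tau):
    """Log Bayes factor for 'one constant-flux object' versus 'noise only'
    given per-epoch flux estimates x_i ~ N(f, sigma^2).  Under H1 the flux
    has prior f ~ N(0, tau^2); under H0 the estimates are pure noise
    (f = 0).  Marginalizing f analytically gives:
      ln BF = 0.5*ln(s^2/(s^2 + n*t^2)) + S^2*t^2 / (2*s^2*(s^2 + n*t^2))
    with S = sum(x_i), s^2 = sigma^2, t^2 = tau^2."""
    n = len(fluxes)
    s2, t2 = sigma**2, tau**2
    S = sum(fluxes)
    return 0.5 * math.log(s2 / (s2 + n * t2)) + S**2 * t2 / (2 * s2 * (s2 + n * t2))
```

A consistently bright candidate across epochs drives the log Bayes factor positive (object present), while candidates consistent with noise drive it negative, which is the accumulate-information-across-epochs behavior the abstract describes.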
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling then rejects candidates that conform to the predictable motion of the stars.
Data collected with a 17-inch telescope by AFRL/RH and with a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13,800 km. The AFRL/RH data set, collected in stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more computationally intensive TBD algorithms reported in the literature.
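The core RANSAC step that RANSAC-MT builds on can be sketched for a constant-velocity track in one spatial dimension: fit x = a*t + b to candidate detections (t, x) while rejecting outliers such as residual star clutter. The iteration count and inlier tolerance below are illustrative values, not the paper's.

```python
import random

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    """Minimal RANSAC sketch: repeatedly fit a line through two random
    candidate detections (t, x), count the points within `tol` of that
    line, and keep the model with the largest consensus set."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        (t1, x1), (t2, x2) = rng.sample(points, 2)
        if t1 == t2:
            continue                      # degenerate sample, skip
        a = (x2 - x1) / (t2 - t1)         # velocity
        b = x1 - a * t1                   # intercept
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# Ten detections on a true track plus two clutter points.
pts = [(t, 2.0 * t + 1.0) for t in range(10)] + [(3, 40.0), (7, -5.0)]
model, inliers = ransac_line(pts)
```

In RANSAC-MT the same consensus idea runs in 3-D (two image axes plus time), so a star's predictable sidereal drift also forms a consensus set that can be rejected as clutter.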
Edge detection based on computational ghost imaging with structured illuminations
NASA Astrophysics Data System (ADS)
Yuan, Sheng; Xiang, Dong; Liu, Xuemei; Zhou, Xin; Bing, Pibin
2018-03-01
Edge detection is one of the most important tools for recognizing the features of an object. In this paper, we propose an optical edge detection method based on computational ghost imaging (CGI) with structured illuminations which are generated by an interference system. The structured intensity patterns are designed so that the edge of an object can be imaged directly from the detected data in CGI. This edge detection method can extract the boundaries of both binary and grayscale objects in any direction at one time. We also numerically test the influence of distance deviations in the interference system on edge extraction, i.e., the tolerance of the optical edge detection system to distance deviation. We hope this work may provide a guideline for building an experimental system.
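A minimal numerical sketch of correlation-based CGI reconstruction, for readers unfamiliar with the technique. It assumes random illumination patterns rather than the interference-generated structured patterns the paper designs (their patterns make the edge appear directly in the reconstruction); here we recover the object itself and take a finite difference as a stand-in edge map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D computational ghost imaging: K random illumination patterns,
# a single-pixel "bucket" detector B_k = sum_x I_k(x) T(x), and the
# standard correlation reconstruction G(x) = <B*I(x)> - <B><I(x)>.
N, K = 64, 20000
obj = np.zeros(N)
obj[20:40] = 1.0                              # binary object T(x)
patterns = rng.random((K, N))                 # illumination speckle I_k(x)
bucket = patterns @ obj                       # bucket-detector measurements
G = (bucket[:, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)
edges = np.abs(np.diff(G))                    # crude edge extraction
```

With the paper's structured patterns the differencing step is effectively built into the illumination, so `G` itself would be the edge map.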
Shape and texture fused recognition of flying targets
NASA Astrophysics Data System (ADS)
Kovács, Levente; Utasi, Ákos; Kovács, Andrea; Szirányi, Tamás
2011-06-01
This paper presents visual detection and recognition of flying targets (e.g. planes, missiles) based on automatically extracted shape and object texture information, for application areas like alerting, recognition and tracking. Targets are extracted based on robust background modeling and a novel contour extraction approach, and object recognition is done by comparison to shape- and texture-based query results on a previously gathered real-life object dataset. Application areas include passive defense scenarios, with automatic object detection and tracking using cheap commodity hardware components (CPU, camera and GPS).
A Novel Event-Based Incipient Slip Detection Using Dynamic Active-Pixel Vision Sensor (DAVIS)
Rigi, Amin
2018-01-01
In this paper, a novel approach to detect incipient slip based on the contact area between a transparent silicone medium and different objects using a neuromorphic event-based vision sensor (DAVIS) is proposed. Event-based algorithms are developed to detect incipient slip, slip, stress distribution, and object vibration. Thirty-seven experiments were performed on five objects with different sizes, shapes, materials, and weights to compare the precision and response time of the proposed approach. The proposed approach is validated using a high-speed conventional camera (1000 FPS). The results indicate that the sensor can detect incipient slippage with an average latency of 44.1 ms in an unstructured environment for various objects. It is worth mentioning that the experiments were conducted in an uncontrolled experimental environment, which added high noise levels that significantly affected the results. However, eleven of the experiments had a detection latency below 10 ms, which shows the capability of this method. The results are very promising and show the sensor's high potential for manipulation applications, especially in dynamic environments. PMID:29364190
Quantifying Standing Dead Tree Volume and Structural Loss with Voxelized Terrestrial Lidar Data
NASA Astrophysics Data System (ADS)
Popescu, S. C.; Putman, E.
2017-12-01
Standing dead trees (SDTs) are an important forest component and impact a variety of ecosystem processes, yet the carbon pool dynamics of SDTs are poorly constrained in terrestrial carbon cycling models. The ability to model wood decay and carbon cycling in relation to detectable changes in tree structure and volume over time would greatly improve such models. The overall objective of this study was to provide automated aboveground volume estimates of SDTs and automated procedures to detect, quantify, and characterize structural losses over time with terrestrial lidar data. The specific objectives of this study were: 1) develop an automated SDT volume estimation algorithm providing accurate volume estimates for trees scanned in dense forests; 2) develop an automated change detection methodology to accurately detect and quantify SDT structural loss between subsequent terrestrial lidar observations; and 3) characterize the structural loss rates of pine and oak SDTs in southeastern Texas. A voxel-based volume estimation algorithm, "TreeVolX", was developed and incorporates several methods designed to robustly process point clouds of varying quality levels. The algorithm operates on horizontal voxel slices by segmenting the slice into distinct branch or stem sections then applying an adaptive contour interpolation and interior filling process to create solid reconstructed tree models (RTMs). TreeVolX estimated large and small branch volume with an RMSE of 7.3% and 13.8%, respectively. A voxel-based change detection methodology was developed to accurately detect and quantify structural losses and incorporated several methods to mitigate the challenges presented by shifting tree and branch positions as SDT decay progresses. The volume and structural loss of 29 SDTs, composed of Pinus taeda and Quercus stellata, were successfully estimated using multitemporal terrestrial lidar observations over elapsed times ranging from 71 - 753 days. 
Pine and oak structural loss rates were characterized by estimating the amount of volumetric loss occurring in 20 equal-interval height bins of each SDT. Results showed that large pine snags exhibited more rapid structural loss in comparison to medium-sized oak snags in this study.
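The voxel-based volume and change steps described above can be sketched coarsely. This is a simplification of TreeVolX, assuming we only count occupied (surface) voxels, whereas the actual algorithm interpolates contours and fills the interior slice by slice to build solid reconstructed tree models; the voxel size is an arbitrary example value.

```python
import numpy as np

def voxel_volume(points, voxel_size=0.05):
    """Coarse voxel-based volume estimate: snap lidar points (x, y, z in
    metres) to a voxel grid and count distinct occupied voxels."""
    vox = np.unique(np.floor(np.asarray(points) / voxel_size).astype(int), axis=0)
    return len(vox) * voxel_size**3

def structural_loss(vox_t0, vox_t1):
    """Voxel-based change detection between two observations: voxels
    occupied at t0 but absent at t1 count as structural loss."""
    return len(set(map(tuple, vox_t0)) - set(map(tuple, vox_t1)))
```

The paper's methodology additionally has to align the two epochs and tolerate branch positions shifting as decay progresses, which a bare set difference like this does not handle.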
LABRADOR: a learning autonomous behavior-based robot for adaptive detection and object retrieval
NASA Astrophysics Data System (ADS)
Yamauchi, Brian; Moseley, Mark; Brookshire, Jonathan
2013-01-01
As part of the TARDEC-funded CANINE (Cooperative Autonomous Navigation in a Networked Environment) Program, iRobot developed LABRADOR (Learning Autonomous Behavior-based Robot for Adaptive Detection and Object Retrieval). LABRADOR was based on the rugged, man-portable, iRobot PackBot unmanned ground vehicle (UGV) equipped with an explosive ordnance disposal (EOD) manipulator arm and a custom gripper. For LABRADOR, we developed a vision-based object learning and recognition system that combined a TLD (track-learn-detect) filter based on object shape features with a color-histogram-based object detector. Our vision system was able to learn in real-time to recognize objects presented to the robot. We also implemented a waypoint navigation system based on fused GPS, IMU (inertial measurement unit), and odometry data. We used this navigation capability to implement autonomous behaviors capable of searching a specified area using a variety of robust coverage strategies - including outward spiral, random bounce, random waypoint, and perimeter following behaviors. While the full system was not integrated in time to compete in the CANINE competition event, we developed useful perception, navigation, and behavior capabilities that may be applied to future autonomous robot systems.
NASA Astrophysics Data System (ADS)
Zimmer, P.; McGraw, J. T.; Ackermann, M. R.
There is considerable interest in the capability to discover and monitor small objects (d < 20 cm) in geosynchronous (GEO) and near-GEO orbital regimes using small, ground-based optical telescopes (D < 0.5 m). The threat posed by such objects is clear. Small telescopes have an unrivaled cost advantage and, under ideal lighting and sky conditions, are capable of detecting faint objects. This combination of conditions, however, is relatively rare, making routine and persistent surveillance more challenging. In a truly geostationary orbit, a small object is easy to detect because its apparent rate of motion is nearly zero for a ground-based observer, and signal accumulation occurs as it would for more traditional sidereal-tracked astronomical observations. In this regime, though, small objects are not expected to be in controlled or predictable orbits, thus a range of inclinations and eccentricities is possible. This results in a range of apparent angular rates and directions that must be surveilled, firmly establishing this task as uncued or blind surveillance. Detections in this case are subject to what is commonly called "trailing loss," where the signal from the object does not accumulate in a fixed detection element, resulting in far lower sensitivity than for a similar object optimally tracked. We review some of the limits of detecting these objects under less than ideal observing conditions, subject further to current technological and operational limitations. We demonstrate progress towards this goal using telescopes much smaller than normally considered viable for this task, by applying novel detection and analysis techniques.
A simulation study of detection of weapon of mass destruction based on radar
NASA Astrophysics Data System (ADS)
Sharifahmadian, E.; Choi, Y.; Latifi, S.
2013-05-01
Typical systems used for detection of Weapons of Mass Destruction (WMD) are based on sensing objects using gamma rays or neutrons. Nonetheless, depending on environmental conditions, current methods for detecting fissile materials have a limited effective range. Moreover, gamma-ray radiation can be easily shielded. Here, detecting concealed WMD from a distance is simulated and studied based on radar, especially WideBand (WB) technology. The WB-based method capitalizes on the fact that electromagnetic waves penetrate different materials at different rates. While low-frequency waves can pass through objects more easily, high-frequency waves have a higher rate of absorption by objects, making object recognition easier. Measuring the penetration depth allows one to identify the sensed material. During simulation, the radar waves and the propagation area, including free space and the objects in the scene, are modeled; each material is modeled as a layer with a certain thickness. At the start of the simulation, a modeled radar wave is radiated toward the layers. At the receiver side, each layer can be identified based on the signals received from it. When an electromagnetic wave passes through an object, the wave's power is subject to a certain level of attenuation depending on the object's characteristics. Simulation is performed using radar signals with different frequencies (MHz-GHz range) and powers to identify the different layers.
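The frequency-dependent attenuation idea the abstract relies on can be illustrated with a toy one-way propagation model. Everything here is an assumption for illustration: the exponential loss law with a per-material coefficient is a standard simplification, and the `alpha` values are made up, not measured material constants from the paper.

```python
import math

def received_power(p_tx, layers, freq_ghz, alpha):
    """Toy one-way attenuation model: each (material, thickness) layer
    attenuates the wave as exp(-alpha[material] * f * d), so a given
    stack passes low frequencies better than high ones - the contrast
    the WB method exploits to identify layers."""
    p = p_tx
    for material, thickness_m in layers:
        p *= math.exp(-alpha[material] * freq_ghz * thickness_m)
    return p

# Invented attenuation coefficients (per GHz per metre) for illustration.
alpha = {"drywall": 0.5, "concrete": 2.0, "steel": 20.0}
```

Sweeping `freq_ghz` and comparing received power against candidate layer stacks is the simulation loop the abstract describes.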
Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences
Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong
2016-01-01
Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes of pedestrian posture and scale, moving backgrounds, mutual occlusion, and the varying presence of pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values for the object and the background by extracting a priori knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observation results and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss phenomena caused by object occlusion, and associates detection results with the particle state in a discrimination method for object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and yields better tracking results. PMID:27847514
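One predict/update/resample cycle of a bootstrap particle filter can be sketched in one dimension. In the paper the observation weight combines color and texture likelihoods with adaptively adjusted per-feature weights; here a single Gaussian measurement likelihood stands in, and the noise parameters are illustrative.

```python
import numpy as np

def particle_filter_step(particles, weights, measurement, rng,
                         motion_std=1.0, meas_std=2.0):
    """Bootstrap particle filter for a 1-D pedestrian position:
    predict (diffuse by motion noise), update (reweight by measurement
    likelihood), resample (draw particles proportional to weight)."""
    particles = particles + rng.normal(0.0, motion_std, len(particles))
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: particles initialized far from the true position converge on it.
rng = np.random.default_rng(1)
parts = rng.normal(0.0, 5.0, 500)
wts = np.full(500, 1.0 / 500)
for _ in range(20):
    parts, wts = particle_filter_step(parts, wts, measurement=10.0, rng=rng)
```

The paper's multi-pedestrian extension runs one such filter per track and uses detection/particle association to decide when tracks disappear or emerge.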
The Right Hemisphere Advantage in Visual Change Detection Depends on Temporal Factors
ERIC Educational Resources Information Center
Spotorno, Sara; Faure, Sylvane
2011-01-01
What accounts for the Right Hemisphere (RH) functional superiority in visual change detection? An original task which combines one-shot and divided visual field paradigms allowed us to direct change information initially to the RH or the Left Hemisphere (LH) by deleting, respectively, an object included in the left or right half of a scene…
Automatic detection and classification of obstacles with applications in autonomous mobile robots
NASA Astrophysics Data System (ADS)
Ponomaryov, Volodymyr I.; Rosas-Miranda, Dario I.
2016-04-01
A hardware implementation of automatic detection and classification of objects that can represent an obstacle for an autonomous mobile robot, using stereo vision algorithms, is presented. We propose and evaluate a new method to detect and classify objects for a mobile robot in outdoor conditions. This method is divided into two parts: the first is the object detection step, based on the distance from the objects to the camera and a BLOB analysis; the second is the classification step, based on visual primitives and an SVM classifier. The proposed method runs on a GPU in order to reduce processing time. This is performed with hardware based on multi-core processors and a GPU platform, using an NVIDIA GeForce GT640 graphics card and MATLAB on a PC with Windows 10.
Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.
2008-01-01
Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event in both near real-time and post-event analyses. This paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
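The Kappa coefficient used for the accuracy assessment can be computed from an error matrix as follows. This is the standard Cohen's Kappa formula, not anything specific to the paper.

```python
import numpy as np

def kappa(confusion):
    """Cohen's Kappa from an error (confusion) matrix whose rows are the
    mapped classes and columns the reference classes: agreement observed
    on the diagonal, corrected for agreement expected by chance from the
    marginal totals."""
    m = np.asarray(confusion, dtype=float)
    n = m.sum()
    po = np.trace(m) / n                      # observed agreement
    pe = (m.sum(0) * m.sum(1)).sum() / n**2   # chance agreement
    return (po - pe) / (1 - pe)
```

A value of 1 means perfect agreement with the reference data; 0 means no better than chance, which is why the paper reports Kappa rather than raw percent correct.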
Whisker Contact Detection of Rodents Based on Slow and Fast Mechanical Inputs
Claverie, Laure N.; Boubenec, Yves; Debrégeas, Georges; Prevost, Alexis M.; Wandersman, Elie
2017-01-01
Rodents use their whiskers to locate nearby objects with extreme precision. To perform such tasks, they need to detect whisker/object contacts with high temporal accuracy. This contact detection is conveyed by classes of mechanoreceptors whose neural activity is sensitive to either slow or fast time-varying mechanical stresses acting at the base of the whiskers. We developed a biomimetic approach to separate and characterize slow quasi-static and fast vibrational stress signals acting on a whisker base in realistic exploratory phases, using experiments on both real and artificial whiskers. Both slow and fast mechanical inputs are successfully captured using a mechanical model of the whisker. We present and discuss consequences of the whisking process in purely mechanical terms and hypothesize that free whisking in air sets a mechanical threshold for contact detection. The time resolution and robustness of the contact detection strategies based on either slow or fast stress signals are determined. Contact detection based on the vibrational signal is faster and more robust to exploratory conditions than that based on the slow quasi-static component, although both slow and fast components allow localizing the object. PMID:28119582
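The hypothesized threshold-from-free-whisking idea can be sketched as a simple baseline-plus-margin detector. This is an illustration of the concept only; the `k`-sigma rule and the synthetic signals are assumptions, not the paper's calibration.

```python
import numpy as np

def contact_threshold(free_whisk_signal, k=3.0):
    """Set the contact-detection threshold from the stress signal
    recorded while whisking freely in air: baseline mean plus k standard
    deviations (k is an assumed sensitivity parameter)."""
    return np.mean(free_whisk_signal) + k * np.std(free_whisk_signal)

def detect_contact(signal, thresh):
    """Return the first sample index where the stress signal exceeds the
    free-whisking threshold, or None if no contact is detected."""
    above = np.nonzero(np.asarray(signal) > thresh)[0]
    return int(above[0]) if above.size else None

# Usage: a synthetic free-whisking baseline, then a contact at sample 30.
baseline = np.sin(np.linspace(0.0, 10.0, 200)) * 0.1
thresh = contact_threshold(baseline)
signal = np.concatenate([np.zeros(30), np.ones(70)])
```

The paper's comparison of slow versus fast channels amounts to applying this kind of detector to the quasi-static and vibrational components separately and comparing latency and robustness.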
Ventura, Joseph; Subotnik, Kenneth L; Ered, Arielle; Hellemann, Gerhard S; Nuechterlein, Keith H
2016-04-01
Progress has been made in developing interview-based measures for the assessment of cognitive functioning, such as the Cognitive Assessment Interview (CAI), as co-primary measures that complement objective neurocognitive assessments and daily functioning. However, a few questions remain, including whether the relationships with objective cognitive measures and daily functioning are high enough to justify the CAI as a co-primary measure, and whether patient-only assessments are valid. Participants were first-episode schizophrenia patients (n=60) and demographically similar healthy controls (n=35), and chronic schizophrenia patients (n=38) and demographically similar healthy controls (n=19). Participants were assessed at baseline with an interview-based measure of cognitive functioning (CAI), a test of objective cognitive functioning, functional capacity, and role functioning, and the first-episode patients were assessed again 6 months later (n=28). CAI ratings were correlated with objective cognitive functioning, functional capacity, and functional outcomes in first-episode schizophrenia patients at magnitudes similar to those in chronic patients. Comparisons of first-episode and chronic patients with healthy controls indicated that the CAI sensitively detected deficits in schizophrenia. The relationships of CAI Patient-Only ratings with objective cognitive functioning, functional capacity, and daily functioning were comparable to those of CAI Rater scores that included informant information. These results confirm in an independent sample the relationship of CAI ratings with objectively measured cognition, functional capacity, and role functioning. Comparison of schizophrenia patients with healthy controls further validates the CAI as a co-primary measure of cognitive deficits. Also, CAI change scores were strongly related to objective cognitive change, indicating sensitivity to change.
Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew; Rasmussen, Ian P.
2010-01-01
The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…
Global Contrast Based Salient Region Detection.
Cheng, Ming-Ming; Mitra, Niloy J; Huang, Xiaolei; Torr, Philip H S; Hu, Shi-Min
2015-03-01
Automatic estimation of salient object regions across images, without any prior assumption or knowledge of the contents of the corresponding scenes, enhances many computer vision and computer graphics applications. We introduce a regional contrast based salient object detection algorithm, which simultaneously evaluates global contrast differences and spatially weighted coherence scores. The proposed algorithm is simple, efficient, naturally multi-scale, and produces full-resolution, high-quality saliency maps. These saliency maps are further used to initialize a novel iterative version of GrabCut, namely SaliencyCut, for high-quality unsupervised salient object segmentation. We extensively evaluated our algorithm using traditional salient object detection datasets, as well as a more challenging Internet image dataset. Our experimental results demonstrate that our algorithm consistently outperforms 15 existing salient object detection and segmentation methods, yielding higher precision and better recall rates. We also show that our algorithm can be used to efficiently extract salient object masks from Internet images, enabling effective sketch-based image retrieval (SBIR) via simple shape comparisons. Even for such noisy Internet images, where the salient regions are ambiguous, our saliency-guided image retrieval achieves a superior retrieval rate compared with state-of-the-art SBIR methods, and additionally provides important target object region information.
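The global-contrast idea can be sketched in its simplest histogram form: a pixel is salient when its value is far from the values of most other pixels. This grayscale sketch omits the Lab color space, the region segmentation, and the spatial weighting of the paper's full regional-contrast method.

```python
import numpy as np

def histogram_contrast_saliency(img):
    """Histogram-based global contrast saliency for an 8-bit grayscale
    image: each gray level's saliency is its frequency-weighted average
    distance to every other level, broadcast back to the pixels and
    normalized to [0, 1]."""
    img = np.asarray(img)
    hist = np.bincount(img.ravel(), minlength=256) / img.size
    levels = np.arange(256)
    sal_per_level = np.abs(levels[:, None] - levels[None, :]) @ hist
    sal = sal_per_level[img]
    return sal / sal.max()

# Usage: a small bright patch on a dark background is maximally salient.
img = np.zeros((10, 10), dtype=int)
img[4:6, 4:6] = 255
sal = histogram_contrast_saliency(img)
```

Because saliency is a function of the gray level only, the whole map costs one 256-entry table regardless of image size, which is what makes the histogram variant fast.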
Radar based autonomous sensor module
NASA Astrophysics Data System (ADS)
Styles, Tim
2016-10-01
Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application, and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar-based ASM, which provides an all-weather, low-power and license-exempt solution to the problem of wide-area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub-classes based on size. Detections are reported only if the object is detected in a task coverage area and is classified as an object of interest. The system was demonstrated in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible-band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
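The range-Doppler processing mentioned above reduces to an FFT across pulses. This toy sketch assumes an already range-compressed pulse matrix and skips windowing, CFAR thresholding, and the classification stages the real ASM performs.

```python
import numpy as np

def range_doppler_map(echoes):
    """Range-Doppler map from a (pulses x range_bins) matrix of complex
    per-pulse samples: an FFT along the slow-time (pulse) axis separates
    returns by Doppler shift, so moving objects appear in nonzero
    Doppler bins, well away from stationary clutter at zero Doppler."""
    return np.abs(np.fft.fftshift(np.fft.fft(echoes, axis=0), axes=0))

# Usage: stationary clutter at range bin 2, a mover at range bin 5 whose
# phase rotates 0.25 cycles per pulse (a Doppler shift).
P, R = 32, 16
n = np.arange(P)
echoes = np.zeros((P, R), dtype=complex)
echoes[:, 2] = 1.0
echoes[:, 5] = np.exp(2j * np.pi * 0.25 * n)
rd = range_doppler_map(echoes)
```

After `fftshift`, zero Doppler sits at row `P // 2 = 16`, and the mover's 0.25 cycles/pulse shift lands in row 24, which is why thresholding away from the zero-Doppler row isolates moving objects.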
Object Detection in Natural Backgrounds Predicted by Discrimination Performance and Models
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Watson, A. B.; Rohaly, A. M.; Null, Cynthia H. (Technical Monitor)
1995-01-01
In object detection, an observer looks for an object class member in a set of backgrounds. In discrimination, an observer tries to distinguish two images. Discrimination models predict the probability that an observer detects a difference between two images. We compare object detection and image discrimination with the same stimuli by: (1) making stimulus pairs of the same background with and without the target object and (2) either giving many consecutive trials with the same background (discrimination) or intermixing the stimuli (object detection). Six images of a vehicle in a natural setting were altered to remove the vehicle and mixed with the original image in various proportions. Detection observers rated the images for vehicle presence. Discrimination observers rated the images for any difference from the background image. Estimated detectabilities of the vehicles were found by maximizing the likelihood of a Thurstone category scaling model. The pattern of estimated detectabilities is similar for discrimination and object detection, and is accurately predicted by a Cortex Transform discrimination model. Predictions of a Contrast-Sensitivity-Function filter model and a Root-Mean-Square difference metric based on the digital image values are less accurate. The discrimination detectabilities averaged about twice those of object detection.
NASA Astrophysics Data System (ADS)
Gendron, Marlin Lee
During Mine Warfare (MIW) operations, MIW analysts perform change detection by visually comparing historical sidescan sonar imagery (SSI) collected by a sidescan sonar with recently collected SSI in an attempt to identify objects (which might be explosive mines) placed at sea since the last time the area was surveyed. This dissertation presents a data structure and three algorithms, developed by the author, that are part of an automated change detection and classification (ACDC) system. MIW analysts at the Naval Oceanographic Office currently use ACDC to reduce the amount of time needed to perform change detection. The dissertation introductory chapter gives background information on change detection, ACDC, and describes how SSI is produced from raw sonar data. Chapter 2 presents the author's Geospatial Bitmap (GB) data structure, which is capable of storing information geographically and is utilized by the three algorithms. This chapter shows that a GB data structure used in a polygon-smoothing algorithm ran between 1.3--48.4x faster than a sparse matrix data structure. Chapter 3 describes the GB clustering algorithm, which is the author's repeatable, order-independent method for clustering. Results from tests performed in this chapter show that the time to cluster a set of points is not affected by the distribution or the order of the points. In Chapter 4, the author presents his real-time computer-aided detection (CAD) algorithm that automatically detects mine-like objects on the seafloor in SSI. The author ran his GB-based CAD algorithm on real SSI data, and results of these tests indicate that his real-time CAD algorithm performs comparably to or better than other non-real-time CAD algorithms. The author presents his computer-aided search (CAS) algorithm in Chapter 5. CAS helps MIW analysts locate mine-like features that are geospatially close to previously detected features.
A comparison between the CAS and a great circle distance algorithm shows that the CAS performs geospatial searching 1.75x faster on large data sets. Finally, the concluding chapter of this dissertation gives important details on how the completed ACDC system will function, and discusses the author's future research to develop additional algorithms and data structures for ACDC.
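The Geospatial Bitmap idea, trading geometry computations for constant-time grid lookups, can be sketched roughly as follows (the cell size, coordinates, and 8-neighbour search are illustrative assumptions, not the author's actual data structure):

```python
def make_bitmap(points, cell=1.0):
    """Geospatial bitmap sketch: hash each (x, y) point into a grid
    cell so that 'is anything near here?' becomes a set lookup."""
    return {(int(x // cell), int(y // cell)) for x, y in points}

def near_previous(bitmap, x, y, cell=1.0):
    """True if a previously detected feature lies in the query point's
    cell or any of its 8 neighbours."""
    cx, cy = int(x // cell), int(y // cell)
    return any((cx + dx, cy + dy) in bitmap
               for dx in (-1, 0, 1) for dy in (-1, 0, 1))

# two previously detected mine-like features
prior = make_bitmap([(10.2, 4.7), (55.0, 21.3)])
print(near_previous(prior, 10.9, 4.1))   # → True  (same/adjacent cell)
print(near_previous(prior, 30.0, 30.0))  # → False
```

A set-membership search like this avoids computing great circle distances to every stored feature, which is consistent with the reported speedup on large data sets.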
NASA Astrophysics Data System (ADS)
Cheng, Gong; Han, Junwei; Zhou, Peicheng; Guo, Lei
2014-12-01
The rapid development of remote sensing technology has facilitated the acquisition of remote sensing images with higher and higher spatial resolution, but automatically understanding the image contents remains a big challenge. In this paper, we develop a practical and rotation-invariant framework for multi-class geospatial object detection and geographic image classification based on a collection of part detectors (COPD). The COPD is composed of a set of representative and discriminative part detectors, where each part detector is a linear support vector machine (SVM) classifier used for the detection of objects or recurring spatial patterns within a certain range of orientation. Specifically, when performing multi-class geospatial object detection, we learn a set of seed-based part detectors where each part detector corresponds to a particular viewpoint of an object class, so the collection of them provides a solution for rotation-invariant detection of multi-class objects. When performing geographic image classification, we utilize a large number of pre-trained part detectors to discover distinctive visual parts from images and use them as attributes to represent the images. Comprehensive evaluations on two remote sensing image databases and comparisons with some state-of-the-art approaches demonstrate the effectiveness and superiority of the developed framework.
[Visual representation of natural scenes in flicker changes].
Nakashima, Ryoichi; Yokosawa, Kazuhiko
2010-08-01
Coherence theory in scene perception (Rensink, 2002) assumes the retention of volatile object representations on which attention is not focused. On the other hand, visual memory theory in scene perception (Hollingworth & Henderson, 2002) assumes that robust object representations are retained. In this study, we hypothesized that the difference between these two theories is derived from the difference of the experimental tasks that they are based on. In order to verify this hypothesis, we examined the properties of visual representation by using a change detection and memory task in a flicker paradigm. We measured the representations when participants were instructed to search for a change in a scene, and compared them with the intentional memory representations. The visual representations were retained in visual long-term memory even in the flicker paradigm, and were as robust as the intentional memory representations. However, the results indicate that the representations are unavailable for explicitly localizing a scene change, but are available for answering the recognition test. This suggests that coherence theory and visual memory theory are compatible.
Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location
Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene
2017-01-01
Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005
Dontje, Manon L; Dall, Philippa M; Skelton, Dawn A; Gill, Jason M R; Chastin, Sebastien F M
2018-01-01
Prolonged sedentary behaviour (SB) is associated with poor health. It is unclear which SB measure is most appropriate for interventions and population surveillance to measure and interpret change in behaviour in older adults. The aims of this study were to examine the relative and absolute reliability, Minimal Detectable Change (MDC) and responsiveness to change of subjective and objective methods of measuring SB in older adults, and to give recommendations for use in different study designs. SB of 18 older adults (aged 71 (IQR 7) years) was assessed using a systematic set of six subjective tools, derived from the TAxonomy of Self report Sedentary behaviour Tools (TASST), and one objective tool (activPAL3c), over 14 days. Relative reliability (intraclass correlation coefficients, ICC), absolute reliability (SEM), MDC, and the relative responsiveness (Cohen's d effect size (ES) and Guyatt's Responsiveness coefficient (GR)) were calculated for each of the different tools and ranked for different study designs. ICC ranged from 0.414 to 0.946, SEM from 36.03 to 137.01 min, MDC from 1.66 to 8.42 hours, ES from 0.017 to 0.259 and GR from 0.024 to 0.485. Objective average day per week measurement ranked as most responsive in a clinical practice setting, whereas a one day measurement ranked highest in quasi-experimental, longitudinal and controlled trial study designs. TV viewing-Previous Week Recall (PWR) ranked as the most responsive subjective measure in all study designs. The reliability, Minimal Detectable Change and responsiveness to change of subjective and objective methods of measuring SB are context dependent. Although TV viewing-PWR is the more reliable and responsive subjective method in most situations, it may have limitations as a reliable measure of total SB. Results of this study can be used to guide the choice of tools for detecting change in sedentary behaviour in older adults in the contexts of population surveillance, intervention evaluation and individual care.
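The SEM and MDC quantities reported in this abstract follow from standard reliability formulas (SEM = SD · √(1 − ICC), MDC95 = 1.96 · √2 · SEM). A quick sketch with invented numbers (the SD and ICC below are not taken from the study):

```python
import math

def sem(sd, icc):
    """Standard error of measurement from between-subject SD and ICC."""
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    """Minimal detectable change at 95% confidence: the smallest change
    exceeding measurement noise between two repeated assessments."""
    return 1.96 * math.sqrt(2) * sem_value

# e.g. sedentary time with SD = 120 min/day measured by a tool with ICC = 0.90
s = sem(120, 0.90)
print(round(s, 1), round(mdc95(s) / 60, 2))  # → 37.9 1.75
```

The √2 factor reflects that change is the difference of two noisy measurements, which is why even tools with high ICC can have MDC values of more than an hour.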
A Fully Automated Method to Detect and Segment a Manufactured Object in an Underwater Color Image
NASA Astrophysics Data System (ADS)
Barat, Christian; Phlypo, Ronald
2010-12-01
We propose a fully automated active contours-based method for the detection and the segmentation of a moored manufactured object in an underwater image. Detection of objects in underwater images is difficult due to the variable lighting conditions and shadows on the object. The proposed technique is based on the information contained in the color maps and uses the visual attention method, combined with a statistical approach for the detection and an active contour for the segmentation of the object to overcome the above problems. In the classical active contour method the region descriptor is fixed and the convergence of the method depends on the initialization. With our approach, this dependence is overcome with an initialization using the visual attention results and a criterion to select the best region descriptor. This approach improves the convergence and the processing time while providing the advantages of a fully automated method.
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
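The Rose-model expectation the authors test against, detectability scaling with contrast and with the square root of the quantum signal, can be written down directly. A sketch with invented contrast, fluence, and disk-area values (not the study's phantom parameters):

```python
import math

def rose_snr(contrast, fluence, area):
    """Rose-model signal-to-noise ratio for a uniform disk: SNR scales
    with contrast and with the square root of the photon count over
    the disk (fluence x area)."""
    return contrast * math.sqrt(fluence * area)

# doubling dose (fluence) should raise SNR by sqrt(2),
# the linear-with-quantum-SNR behavior the DI measurements track
base = rose_snr(0.05, 1000, 4.0)
print(round(rose_snr(0.05, 2000, 4.0) / base, 3))  # → 1.414
```

The reported nonlinearity for the smallest disks is exactly where this idealized model breaks down, because system blur removes signal that the Rose model assumes is fully available.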
Christiansen, Peter; Nielsen, Lars N; Steen, Kim A; Jørgensen, Rasmus N; Karstoft, Henrik
2016-11-11
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN. RCNN has a similar performance at a short range (0-30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).
Christiansen, Peter; Nielsen, Lars N.; Steen, Kim A.; Jørgensen, Rasmus N.; Karstoft, Henrik
2016-01-01
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles, such as people and animals, occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field to perform anomaly detection. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors including “Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks” (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN. RCNN has a similar performance at a short range (0–30 m). However, DeepAnomaly has much fewer model parameters and (182 ms/25 ms =) a 7.28-times faster processing time per image. Unlike most CNN-based methods, the high accuracy, the low computation time and the low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit). PMID:27845717
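The anomaly-detection idea here, scoring each image patch by its distance from a background model fitted to ordinary field patches rather than classifying it into known types, can be sketched with a scalar feature. The feature values and the z-score scoring below are invented simplifications of the paper's CNN-feature model:

```python
import statistics

def anomaly_scores(features, background):
    """Per-patch anomaly score: z-score distance of a patch feature
    from a background model fitted on normal (field-only) patches.
    Patches far from the model are obstacles, whatever their class."""
    mu = statistics.mean(background)
    sd = statistics.stdev(background)
    return [abs(f - mu) / sd for f in features]

# background model: green-ish field patches (scalar feature, e.g. mean hue)
field = [0.30, 0.32, 0.29, 0.31, 0.30, 0.33]
# new frame: mostly field, one patch containing a person in dark clothes
frame = [0.31, 0.30, 0.08, 0.32]
scores = anomaly_scores(frame, field)
# the person patch scores an order of magnitude above the field patches
print([round(s, 1) for s in scores])
```

Because the model only describes "normal field", unknown obstacle types raise the score just as known ones do, which is the key contrast with trained detectors such as Faster R-CNN.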
Beck, Cornelia; Ognibeni, Thilo; Neumann, Heiko
2008-01-01
Background Optic flow is an important cue for object detection. Humans are able to perceive objects in a scene using only kinetic boundaries, and can perform the task even when other shape cues are not provided. These kinetic boundaries are characterized by the presence of motion discontinuities in a local neighbourhood. In addition, temporal occlusions appear along the boundaries as the object in front covers the background and the objects that are spatially behind it. Methodology/Principal Findings From a technical point of view, the detection of motion boundaries for segmentation based on optic flow is a difficult task. This is due to the problem that flow detected along such boundaries is generally not reliable. We propose a model derived from mechanisms found in visual areas V1, MT, and MSTl of human and primate cortex that achieves robust detection along motion boundaries. It includes two separate mechanisms for both the detection of motion discontinuities and of occlusion regions based on how neurons respond to spatial and temporal contrast, respectively. The mechanisms are embedded in a biologically inspired architecture that integrates information of different model components of the visual processing due to feedback connections. In particular, mutual interactions between the detection of motion discontinuities and temporal occlusions allow a considerable improvement of the kinetic boundary detection. Conclusions/Significance A new model is proposed that uses optic flow cues to detect motion discontinuities and object occlusion. We suggest that by combining these results for motion discontinuities and object occlusion, object segmentation within the model can be improved. This idea could also be applied in other models for object segmentation. In addition, we discuss how this model is related to neurophysiological findings. The model was successfully tested both with artificial and real sequences including self and object motion. PMID:19043613
Color object detection using spatial-color joint probability functions.
Luo, Jiebo; Crandall, David
2006-06-01
Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.
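A color co-occurrence histogram of the kind underlying this detector can be sketched in a few lines. The tiny quantized "flag" image and the horizontal-offset-only scan are simplifications invented for the example; the paper's feature additionally conditions on edge pixels:

```python
from collections import Counter

def color_cooccurrence(img, d=1):
    """Quantized-color co-occurrence histogram sketch: count ordered
    pairs of colors at horizontal offset d. Matching a model image's
    histogram against image windows is the core of this style of
    compound color object detection."""
    h = Counter()
    for row in img:
        for x in range(len(row) - d):
            h[(row[x], row[x + d])] += 1
    return h

# a tiny "flag": red stripe next to a white stripe, colors already quantized
img = [["R", "R", "W", "W"],
       ["R", "R", "W", "W"]]
hist = color_cooccurrence(img)
print(hist[("R", "W")], hist[("R", "R")])  # → 2 2
```

Because the histogram records which colors sit next to which, it captures the spatial arrangement of a flag or logo while remaining tolerant to rotation, scaling, and folding.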
Experimental application of simulation tools for evaluating UAV video change detection
NASA Astrophysics Data System (ADS)
Saur, Günter; Bartelsen, Jan
2015-10-01
Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on a short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the area of interest, and the relevant changes are, e.g., recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are moving vegetation such as trees, and compression or transmission artifacts. To enable the use of automatic change detection within the interactive workflow of a UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and varying influence parameters (e.g. image quality, sensor and flight parameters), including image metadata and ground truth data, are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection are described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips.
For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
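The change-mask-extraction step can be sketched for two already-registered frames by thresholding the per-pixel absolute difference (the frames and the threshold value below are invented for illustration; the actual processing chain is more involved):

```python
def change_mask(ref, new, thr=30):
    """Change mask between two registered grey-level frames: mark
    pixels whose absolute difference exceeds a noise threshold."""
    return [[1 if abs(a - b) > thr else 0 for a, b in zip(r1, r2)]
            for r1, r2 in zip(ref, new)]

# registered 3x4 frames: a bright object appears in the new frame
ref = [[10, 12, 11, 10],
       [11, 10, 12, 11],
       [10, 11, 10, 12]]
new = [[10, 12, 11, 10],
       [11, 200, 210, 11],
       [10, 205, 198, 12]]
print(change_mask(ref, new))
# → [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0]]
```

Rendering artifacts such as slight heading differences or 3D parallax raise the background difference level, which is why the threshold (and hence the realism of the synthetic imagery) matters for the evaluation.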
Image reconstruction of muon tomographic data using a density-based clustering method
NASA Astrophysics Data System (ADS)
Perry, Kimberly B.
Muons are subatomic particles capable of reaching the Earth's surface before decaying. When these particles collide with an object that has a high atomic number (Z), their path of travel changes substantially. Tracking muon movement through shielded containers can indicate what types of materials lie inside. This thesis proposes using a density-based clustering algorithm called OPTICS to perform image reconstructions using muon tomographic data. The results show that this method is capable of detecting high-Z materials quickly, and can also produce detailed reconstructions with large amounts of data.
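OPTICS itself orders points by reachability distance; as a rough stand-in, the simpler DBSCAN-style density clustering below conveys how dense clusters of muon scattering points would flag high-Z material. The `eps`, `min_pts`, and point values are invented for the example:

```python
import math

def density_cluster(points, eps=1.5, min_pts=3):
    """Simplified density-based clustering (a DBSCAN-style stand-in for
    OPTICS): grow clusters from points with >= min_pts neighbours
    within eps; sparse points stay unlabelled (-1)."""
    labels = [-1] * len(points)

    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cid = 0
    for i in range(len(points)):
        if labels[i] != -1 or len(neighbours(i)) < min_pts:
            continue
        stack, labels[i] = [i], cid
        while stack:                      # flood-fill the dense region
            q = stack.pop()
            for j in neighbours(q):
                if labels[j] == -1:
                    labels[j] = cid
                    if len(neighbours(j)) >= min_pts:
                        stack.append(j)
        cid += 1
    return labels

# dense blob of scattering vertices (high-Z) plus isolated noise points
pts = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10), (20, 0)]
print(density_cluster(pts))  # → [0, 0, 0, 0, -1, -1]
```

Muons scattering off high-Z material produce exactly this pattern: a dense knot of reconstructed scattering points against sparse background, so density clustering localizes the material without reconstructing a full image first.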
Hopkins, Richard S; Cook, Robert L; Striley, Catherine W
2016-01-01
Background Traditional influenza surveillance relies on influenza-like illness (ILI) syndrome that is reported by health care providers. It primarily captures individuals who seek medical care and misses those who do not. Recently, Web-based data sources have been studied for application to public health surveillance, as there is a growing number of people who search, post, and tweet about their illnesses before seeking medical care. Existing research has shown some promise of using data from Google, Twitter, and Wikipedia to complement traditional surveillance for ILI. However, past studies have evaluated these Web-based sources individually or in pairs without comparing all 3 of them, so it would be beneficial to know which Web-based source performs best as a complement to traditional methods. Objective The objective of this study is to comparatively analyze Google, Twitter, and Wikipedia by examining which best corresponds with Centers for Disease Control and Prevention (CDC) ILI data. It was hypothesized that Wikipedia would best correspond with CDC ILI data, as previous research found it to be least influenced by high media coverage in comparison with Google and Twitter. Methods Publicly available, deidentified data were collected from the CDC, Google Flu Trends, HealthTweets, and Wikipedia for the 2012-2015 influenza seasons. Bayesian change point analysis was used to detect seasonal changes, or change points, in each of the data sources. Change points in Google, Twitter, and Wikipedia that occurred during the exact week, 1 preceding week, or 1 week after the CDC’s change points were compared with the CDC data as the gold standard. All analyses were conducted using the R package “bcp” version 4.0.0 in RStudio version 0.99.484 (RStudio Inc). In addition, sensitivity and positive predictive values (PPV) were calculated for Google, Twitter, and Wikipedia.
Results During the 2012-2015 influenza seasons, a high sensitivity of 92% was found for Google, whereas the PPV for Google was 85%. A low sensitivity of 50% was calculated for Twitter; a low PPV of 43% was found for Twitter also. Wikipedia had the lowest sensitivity of 33% and lowest PPV of 40%. Conclusions Of the 3 Web-based sources, Google had the best combination of sensitivity and PPV in detecting Bayesian change points in influenza-related data streams. Findings demonstrated that change points in Google, Twitter, and Wikipedia data occasionally aligned well with change points captured in CDC ILI data, yet these sources did not detect all changes in CDC data and should be further studied and developed. PMID:27765731
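The sensitivity and PPV figures follow from matching detected change points to CDC change points within a one-week window. A sketch with invented toy weeks (not the study's data):

```python
def sensitivity_ppv(detected, truth, tolerance=1):
    """Sensitivity and PPV for change-point detection: a detected week
    matches a true (CDC) change point if it falls within +/- tolerance
    weeks, mirroring the exact/preceding/following-week rule."""
    matched_truth = {t for t in truth
                     if any(abs(d - t) <= tolerance for d in detected)}
    true_pos = [d for d in detected
                if any(abs(d - t) <= tolerance for t in truth)]
    sens = len(matched_truth) / len(truth)   # true points found
    ppv = len(true_pos) / len(detected)      # detections that were real
    return sens, ppv

# toy weeks: CDC change points vs. a Web source's detected change points
cdc = [5, 20, 34, 48]
web = [4, 21, 40]
sens, ppv = sensitivity_ppv(web, cdc)
print(sens, round(ppv, 2))  # → 0.5 0.67
```

Separating the two metrics matters here: a source can fire on most CDC change points (high sensitivity) while also firing spuriously on media-driven spikes (low PPV), which is the failure mode the study attributes to Twitter and Wikipedia.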
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.
1997-01-01
The project team has outlined several technical objectives which will allow the companies to improve on their current capabilities. These include modifications to the imaging system, enabling it to operate more cost effectively and with greater ease of use; automation of the post-processing software to mosaic and orthorectify the image scenes collected; and the addition of radiometric calibration to greatly aid the ability to perform accurate change detection. Business objectives include fine-tuning of the market plan plus specification of future product requirements, expansion of sales activities (including identification of the additional resources required to meet stated revenue objectives), development of a product distribution plan, and implementation of a worldwide sales effort.
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-01-01
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device’s built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction. PMID:28837096
Rao, Jinmeng; Qiao, Yanjun; Ren, Fu; Wang, Junxing; Du, Qingyun
2017-08-24
The purpose of this study was to develop a robust, fast and markerless mobile augmented reality method for registration, geovisualization and interaction in uncontrolled outdoor environments. We propose a lightweight deep-learning-based object detection approach for mobile or embedded devices; the vision-based detection results of this approach are combined with spatial relationships by means of the host device's built-in Global Positioning System receiver, Inertial Measurement Unit and magnetometer. Virtual objects generated based on geospatial information are precisely registered in the real world, and an interaction method based on touch gestures is implemented. The entire method is independent of the network to ensure robustness to poor signal conditions. A prototype system was developed and tested on the Wuhan University campus to evaluate the method and validate its results. The findings demonstrate that our method achieves a high detection accuracy, stable geovisualization results and interaction.
Li, Jia; Xia, Changqun; Chen, Xiaowu
2017-10-12
Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD by using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.
Earthquake Damage Assessment over Port-au-Prince (Haiti) by Fusing Optical and SAR Data
NASA Astrophysics Data System (ADS)
Romaniello, V.; Piscini, A.; Bignami, C.; Anniballe, R.; Pierdicca, N.; Stramondo, S.
2016-08-01
This work proposes methodologies aiming at evaluating the sensitivity of optical and SAR change features obtained from satellite images with respect to the damage grade. The proposed methods are derived from the literature ([1], [2], [3], [4]) and the main novelty concerns the estimation of these change features at object scale. The test case is the Mw 7.0 earthquake that hit Haiti on January 12, 2010. The analysis of change detection indicators is based on ground truth information collected during a post-earthquake survey. We have generated the damage map of Port-au-Prince by considering a set of polygons extracted from the open source OpenStreetMap geo-database. The resulting damage map was calculated in terms of collapse ratio [5]. We selected some features having a good sensitivity to damage at object scale [6]: the Normalised Difference Index, the Kullback-Leibler Divergence, the Mutual Information and the Intensity Correlation Difference. The Naive Bayes and the Support Vector Machine classifiers were used to evaluate the goodness of these features. The classification results demonstrate that the simultaneous use of several change features from EO observations can improve the damage estimation at object scale.
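Two of the change features named in this abstract, the Normalised Difference Index and the Kullback-Leibler divergence, can be sketched per object. The pixel values and histograms below are invented, and the paper's exact feature definitions may differ:

```python
import math

def ndi(pre, post):
    """Normalised Difference Index over one object (e.g. a building
    footprint): mean of (post - pre) / (post + pre) over its pixels."""
    vals = [(b - a) / (b + a) for a, b in zip(pre, post) if a + b > 0]
    return sum(vals) / len(vals)

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence between the pre- and post-event
    intensity histograms of one object."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# toy object: a collapsed building darkens in the post-event image
pre = [100, 120, 110, 130]
post = [40, 50, 45, 60]
print(round(ndi(pre, post), 2))  # → -0.41

# normalised pre/post intensity histograms of the same object
print(round(kl_divergence([0.7, 0.2, 0.1], [0.3, 0.4, 0.3]), 3))  # → 0.345
```

Computing such features per polygon, rather than per pixel, is the object-scale estimation the authors emphasize: it aggregates noisy pixel changes into one indicator per building.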
Probabilistic resident space object detection using archival THEMIS fluxgate magnetometer data
NASA Astrophysics Data System (ADS)
Brew, Julian; Holzinger, Marcus J.
2018-05-01
Ground-based optical and radar measurements have recently been demonstrated as viable methods for detecting small space objects at geosynchronous altitudes. However, in general, these methods are limited to the detection of objects larger than 10 cm. This paper examines the use of magnetometers to detect plausible flyby encounters with charged space objects using a matched filter signal existence binary hypothesis test approach. Relevant dataset processing and reduction of archival fluxgate magnetometer data from the NASA THEMIS mission is discussed in detail. Using the proposed methodology and a false alarm rate of 10%, 285 plausible detections with probability of detection greater than 80% are claimed and several are reviewed in detail.
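A minimal sketch of matched-filter detection on a noisy 1-D series, in the spirit of the signal-existence test described above; the Gaussian flyby template, noise level, and empirical quantile threshold are all assumptions standing in for the paper's noise model:

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_filter(x, template):
    """Correlate the signal with a unit-energy template at every lag."""
    s = template / np.linalg.norm(template)
    return np.array([np.dot(x[i:i + len(s)], s)
                     for i in range(len(x) - len(s) + 1)])

# synthetic flyby signature buried in magnetometer-like noise
template = np.exp(-np.linspace(-3, 3, 50) ** 2)
x = rng.normal(0.0, 0.2, 1000)
x[400:450] += template                 # event injected at sample 400
stat = matched_filter(x, template)
# empirical threshold for a 10% false-alarm rate (an assumption; the paper
# sets its threshold from the binary hypothesis test formulation)
thresh = np.quantile(stat, 0.90)
assert 395 <= int(np.argmax(stat)) <= 405   # statistic peaks at the event
```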
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system for measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space, calculating their motion parameters represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the system's capability to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Introduction to the Special Issue on Visual Working Memory
Wolfe, Jeremy M
2014-01-01
Objects are not represented individually in visual working memory (VWM), but in relation to the contextual information provided by other memorized objects. We studied whether the contextual information provided by the spatial configuration of all memorized objects is viewpoint-dependent. We ran two experiments asking participants to detect changes in locations between memory and probe for one object highlighted in the probe image. We manipulated the changes in viewpoint between memory and probe (Exp. 1: 0°, 30°, 60°; Exp. 2: 0°, 60°), as well as the spatial configuration visible in the probe image (Exp. 1: full configuration, partial configuration; Exp. 2: full configuration, no configuration). Location change detection accuracy was higher with the full spatial configuration than with the partial configuration or with no spatial configuration at viewpoint changes of 0°, thus replicating previous findings on the nonindependent representations of individual objects in VWM. Most importantly, the effect of spatial configurations decreased with increasing viewpoint changes, suggesting a viewpoint-dependent representation of contextual information in VWM. We discuss these findings within the context of this special issue, in particular whether research performed within the slots-versus-resources debate and research on the effects of contextual information might focus on two different storage systems within VWM. PMID:25341647
Object detection with a multistatic array using singular value decomposition
Hallquist, Aaron T.; Chambers, David H.
2014-07-01
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across a surface and that travels down the surface. The detection system converts the return signals from a time domain to a frequency domain, resulting in frequency return signals. The detection system then performs a singular value decomposition for each frequency to identify singular values for each frequency. The detection system then detects the presence of a subsurface object based on a comparison of the identified singular values to expected singular values when no subsurface object is present.
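The per-frequency SVD test can be sketched as follows, assuming the multistatic returns are arranged as a tx-by-rx transfer matrix per frequency bin; the rank-one scatterer model and the 3x comparison factor are illustrative assumptions, not the patent's detection rule:

```python
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_rx, nt = 4, 4, 128

def leading_singular_values(time_data):
    """time_data: (n_tx, n_rx, nt) multistatic returns. FFT along time,
    then SVD of the tx-by-rx transfer matrix at each frequency bin."""
    freq = np.fft.rfft(time_data, axis=-1)     # time -> frequency domain
    return np.array([np.linalg.svd(freq[:, :, k], compute_uv=False)[0]
                     for k in range(freq.shape[-1])])

background = rng.normal(0.0, 1.0, (n_tx, n_rx, nt))
baseline = leading_singular_values(background)   # "no object" expectation

# a point scatterer contributes (approximately) a rank-one term a b^T
a = np.ones(n_tx)
b = np.ones(n_rx)
tone = np.cos(2 * np.pi * 8 * np.arange(nt) / nt)   # energy in bin 8
target = background + 5.0 * np.einsum('i,j,t->ijt', a, b, tone)
detected = leading_singular_values(target)
# flag a subsurface object where singular values exceed the baseline
assert detected[8] > 3 * baseline[8]
```

The rank-one structure of a single scatterer is what makes the leading singular value a sensitive detection statistic against full-rank noise.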
2018-01-01
Although the use of the surgical robot is rapidly expanding for various medical treatments, there still exist safety issues and concerns about robot-assisted surgeries due to limited vision through a laparoscope, which may cause compromised situation awareness and surgical errors requiring rapid emergency conversion to open surgery. To assist the surgeon's situation awareness and preventive emergency response, this study proposes situation information guidance through a vision-based common algorithm architecture for automatic detection and tracking of intraoperative hemorrhage and surgical instruments. The proposed common architecture comprises localization of the object of interest using texture features and morphological information, and tracking of the object based on a Kalman filter for robustness with reduced error. The average recall and precision of the instrument detection in four prostate surgery videos were 96% and 86%, and the accuracy of the hemorrhage detection in two prostate surgery videos was 98%. Results demonstrate the robustness of the automatic intraoperative object detection and tracking, which can be used to enhance the surgeon's preventive state recognition during robot-assisted surgery. PMID:29854366
Visual short-term memory capacity for simple and complex objects.
Luria, Roy; Sessa, Paola; Gotler, Alex; Jolicoeur, Pierre; Dell'Acqua, Roberto
2010-03-01
Does the capacity of visual short-term memory (VSTM) depend on the complexity of the objects represented in memory? Although some previous findings indicated lower capacity for more complex stimuli, other results suggest that complexity effects arise during retrieval (due to errors in the comparison process with what is in memory) and are not related to storage limitations of VSTM per se. We used ERPs to track neuronal activity specifically related to retention in VSTM by measuring the sustained posterior contralateral negativity during a change detection task (which required detecting whether an item had changed between a memory and a test array). The sustained posterior contralateral negativity, during the retention interval, was larger for complex objects than for simple objects, suggesting that neurons mediating VSTM needed to work harder to maintain more complex objects. This, in turn, is consistent with the view that VSTM capacity depends on complexity.
Hardman, Kyle; Cowan, Nelson
2014-01-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli which possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results, but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PMID:25089739
Bindings in working memory: The role of object-based attention.
Gao, Zaifeng; Wu, Fan; Qiu, Fangfang; He, Kaifeng; Yang, Yue; Shen, Mowei
2017-02-01
Over the past decade, it has been debated whether retaining bindings in working memory (WM) requires more attention than retaining constituent features, focusing on domain-general attention and space-based attention. Recently, we proposed that retaining bindings in WM needs more object-based attention than retaining constituent features (Shen, Huang, & Gao, 2015, Journal of Experimental Psychology: Human Perception and Performance, doi: 10.1037/xhp0000018). However, only unitized visual bindings were examined; to establish the role of object-based attention in retaining bindings in WM, more empirical evidence is required. We tested four new bindings that had been suggested to require no more attention than their constituent features in the WM maintenance phase: the two constituent features of the binding were stored in different WM modules (cross-module binding, Experiment 1), drawn from auditory and visual modalities (cross-modal binding, Experiment 2), or temporally (cross-time binding, Experiment 3) or spatially (cross-space binding, Experiments 4-6) separated. In the critical condition, we added a secondary object feature-report task during the delay interval of the change-detection task, such that the secondary task competed for object-based attention with the to-be-memorized stimuli. If more object-based attention is required for retaining bindings than for retaining constituent features, the secondary task should impair binding performance to a larger degree relative to performance on constituent features. Indeed, Experiments 1-6 consistently revealed a significantly larger impairment for bindings than for the constituent features, suggesting that object-based attention plays a pivotal role in retaining bindings in WM.
Remote sensing based on hyperspectral data analysis
NASA Astrophysics Data System (ADS)
Sharifahmadian, Ershad
In remote sensing, accurate identification of distant objects, especially concealed objects, is difficult. In this study, to improve object detection from a distance, hyperspectral imaging and wideband technology are employed, with the emphasis on wideband radar. As wideband data include a broad range of frequencies, they can reveal information about both the surface of an object and its content. Two main contributions are made in this study: 1) Developing the concept of return loss for target detection: Unlike typical radar detection methods, which use the radar cross section to detect an object, it is possible to enhance the detection and identification of concealed targets using wideband radar based on the electromagnetic characteristics of materials: conductivity, permeability, permittivity, and return loss. During the identification process, collected wideband data are evaluated against information from a previously built wideband signature library. Several classes (e.g., metal, wood) and subclasses (e.g., metals with high conductivity) have been defined based on their electromagnetic characteristics, and materials in a scene are classified into these classes. As an example, materials with high electrical conductivity can be conveniently detected: increasing relative conductivity leads to a reduction in return loss, so metals with high conductivity (e.g., copper) show stronger radar reflections than metals with low conductivity (e.g., stainless steel), making it possible to discriminate copper from stainless steel. 2) Target recognition techniques: To detect and identify targets, several techniques have been proposed, in particular the Multi-Spectral Wideband Radar Image (MSWRI), which is able to localize and identify concealed targets. The MSWRI is based on the theory of the robust Capon beamformer. During the identification process, information from the wideband signature library is utilized.
The wideband (WB) signature library includes such parameters as conductivity, permeability, permittivity, and return loss at different frequencies for the possible materials related to a target. In the MSWRI approach, identification is performed by calculating the return losses (RLs) at different selected frequencies. Based on the similarity between the calculated RLs and the RLs from the WB signature library, targets are detected and identified. Based on the simulation and experimental results, it is concluded that the MSWRI technique is a promising approach for standoff target detection.
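Nearest-signature matching against such a library could look like the following sketch; the material classes, frequencies, and return-loss values are entirely hypothetical, and the RMS distance is one plausible similarity measure:

```python
import math

# Hypothetical wideband signature library: return loss (dB) at selected
# frequencies (GHz) for a few material classes.
SIGNATURES = {
    "copper":          {1: -28.0, 3: -30.5, 5: -32.0},
    "stainless steel": {1: -14.0, 3: -15.5, 5: -16.5},
    "wood":            {1: -3.0,  3: -3.8,  5: -4.5},
}

def classify(measured):
    """Nearest-signature classification by RMS distance over frequencies."""
    def rms(sig):
        return math.sqrt(sum((measured[f] - sig[f]) ** 2 for f in measured)
                         / len(measured))
    return min(SIGNATURES, key=lambda m: rms(SIGNATURES[m]))

print(classify({1: -27.1, 3: -31.0, 5: -31.2}))  # -> copper
```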
Delineation of marsh types and marsh-type change in coastal Louisiana for 2007 and 2013
Hartley, Stephen B.; Couvillion, Brady R.; Enwright, Nicholas M.
2017-05-30
The Bureau of Ocean Energy Management researchers often require detailed information regarding emergent marsh vegetation types (such as fresh, intermediate, brackish, and saline) for modeling habitat capacities and mitigation. In response, the U.S. Geological Survey in cooperation with the Bureau of Ocean Energy Management produced a detailed change classification of emergent marsh vegetation types in coastal Louisiana from 2007 and 2013. This study incorporates two existing vegetation surveys and independent variables such as Landsat Thematic Mapper multispectral satellite imagery, high-resolution airborne imagery from 2007 and 2013, bare-earth digital elevation models based on airborne light detection and ranging, alternative contemporary land-cover classifications, and other spatially explicit variables. An image classification based on image objects was created from 2007 and 2013 National Agriculture Imagery Program color-infrared aerial photography. The final products consisted of two 10-meter raster datasets. Each image object from the 2007 and 2013 spatial datasets was assigned a vegetation classification by using a simple majority filter. In addition to those spatial datasets, we also conducted a change analysis between the datasets to produce a 10-meter change raster product. This analysis identified how much change has taken place and where change has occurred. The spatial data products show dynamic areas where marsh loss is occurring or where marsh type is changing. This information can be used to assist and advance conservation efforts for priority natural resources.
Hively, Lee M.
2014-09-16
Data collected from devices and from the human condition may be used to forewarn of critical events, such as machine or structural failure, or medical events such as stroke detected from brain/heart wave data. By monitoring the data and determining what values are indicative of a failure forewarning, one can provide adequate notice of the impending failure in order to take preventive measures. This disclosure teaches a computer-based method to convert dynamical numeric data representing physical objects (unstructured data) into discrete-phase-space states, and hence into a graph (structured data), for extraction of condition change.
NASA Astrophysics Data System (ADS)
Fujimoto, K.; Yanagisawa, T.; Uetsuhara, M.
Automated detection and tracking of faint objects in optical, or bearing-only, sensor imagery is a topic of immense interest in space surveillance. Robust methods in this realm will lead to better space situational awareness (SSA) while reducing the cost of sensors and optics. They are especially relevant in the search for high area-to-mass ratio (HAMR) objects, as their apparent brightness can change significantly over time. A track-before-detect (TBD) approach has been shown to be suitable for faint, low signal-to-noise ratio (SNR) images of resident space objects (RSOs). TBD does not rely upon the extraction of feature points within the image based on some thresholding criteria, but rather directly takes as input the intensity information from the image file. Not only is all of the available information from the image used, TBD avoids the computational intractability of the conventional feature-based line detection (i.e., "string of pearls") approach to track detection for low SNR data. Implementation of TBD rooted in finite set statistics (FISST) theory has been proposed recently by Vo et al. Compared to other TBD methods applied so far to SSA, such as the stacking method or multi-pass multi-period denoising, the FISST approach is statistically rigorous and has been shown to be more computationally efficient, thus paving the path toward on-line processing. In this paper, we intend to apply a multi-Bernoulli filter to actual CCD imagery of RSOs. The multi-Bernoulli filter can explicitly account for the birth and death of multiple targets in a measurement arc. TBD is achieved via a sequential Monte Carlo implementation. Preliminary results with simulated single-target data indicate that a Bernoulli filter can successfully track and detect objects with measurement SNR as low as 2.4.
Although the advent of fast-cadence scientific CMOS sensors has made the automation of faint object detection a realistic goal, it is nonetheless a difficult goal, as measurement arcs in space surveillance are often both short and sparse. FISST methodologies have been applied to the general problem of SSA by many authors, but they generally focus on tracking scenarios with long arcs or assume that line detection is tractable. We will instead focus this work on estimating sensor-level kinematics of RSOs for low-SNR, too-short arc observations. Once said estimate is made available, track association and simultaneous initial orbit determination may be achieved via any number of proposed solutions to the too-short arc problem, such as those incorporating the admissible region. We show that the benefit of combining FISST-based TBD with too-short arc association goes both ways; i.e., the former provides consistent statistics regarding bearing-only measurements, whereas the latter makes better use of the precise dynamical models nominally applicable to RSOs in orbit determination.
Small-size pedestrian detection in large scene based on fast R-CNN
NASA Astrophysics Data System (ADS)
Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu
2018-04-01
Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have had limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN, employing the DPM detector to generate proposals for accuracy and training a Fast R-CNN style network to jointly optimize small-size pedestrian detection, with skip connections concatenating features from different layers to address the coarseness of the feature maps. Accuracy is thereby improved for small-size pedestrian detection in real large scenes.
Thermal bioaerosol cloud tracking with Bayesian classification
NASA Astrophysics Data System (ADS)
Smith, Christian W.; Dupuis, Julia R.; Schundler, Elizabeth C.; Marinelli, William J.
2017-05-01
The development of a wide area, bioaerosol early warning capability employing existing uncooled thermal imaging systems used for persistent perimeter surveillance is discussed. The capability exploits thermal imagers with other available data streams including meteorological data and employs a recursive Bayesian classifier to detect, track, and classify observed thermal objects with attributes consistent with a bioaerosol plume. Target detection is achieved based on similarity to a phenomenological model which predicts the scene-dependent thermal signature of bioaerosol plumes. Change detection in thermal sensor data is combined with local meteorological data to locate targets with the appropriate thermal characteristics. Target motion is tracked utilizing a Kalman filter and nearly constant velocity motion model for cloud state estimation. Track management is performed using a logic-based upkeep system, and data association is accomplished using a combinatorial optimization technique. Bioaerosol threat classification is determined using a recursive Bayesian classifier to quantify the threat probability of each tracked object. The classifier can accept additional inputs from visible imagers, acoustic sensors, and point biological sensors to improve classification confidence. This capability was successfully demonstrated for bioaerosol simulant releases during field testing at Dugway Proving Grounds. Standoff detection at a range of 700m was achieved for as little as 500g of anthrax simulant. Developmental test results will be reviewed for a range of simulant releases, and future development and transition plans for the bioaerosol early warning platform will be discussed.
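The nearly-constant-velocity Kalman filter used above for cloud state estimation can be sketched as follows; the 2-D position-plus-velocity state layout and the noise covariances are assumptions, not the paper's tuned values:

```python
import numpy as np

def cv_kalman(dt=1.0, q=1e-2, r=0.25):
    """Nearly-constant-velocity Kalman filter for 2-D centroid tracking.
    State: [x, y, vx, vy]; measurements: [x, y]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)        # constant-velocity motion
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)        # position-only measurement
    Q = q * np.eye(4)
    R = r * np.eye(2)
    x = np.zeros(4)
    P = np.eye(4)
    def step(z):
        nonlocal x, P
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)   # update
        P = (np.eye(4) - K @ H) @ P
        return x.copy()
    return step

track = cv_kalman()
for t in range(20):
    est = track([0.5 * t + 0.05, 0.2 * t - 0.03])  # noisy centroid measurements
# the velocity estimate approaches the true drift (0.5, 0.2) per frame
```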
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman Filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman Filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariances through on-line stochastic modelling to compensate for dynamics changes. The data association correctly assigned detections to tracks using the global nearest neighbour (GNN) algorithm while considering local validation. During tracking, a temporal-context-based track management was proposed to decide whether to initiate, maintain, or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved, with higher accuracy and robustness.
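Global nearest neighbour association picks the jointly cheapest one-to-one pairing of detections and tracks. A brute-force sketch for small, equal-sized sets (the gate value and square cost matrix are assumptions; production systems use the Hungarian algorithm or similar for larger problems):

```python
from itertools import permutations

def gnn_assign(cost, gate=10.0):
    """Global nearest neighbour: choose the detection-to-track assignment
    with minimum total cost, by exhaustive search over permutations."""
    n = len(cost)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best, best_cost = perm, c
    # gating: discard pairs whose individual cost is implausibly large
    return [(i, j) for i, j in enumerate(best) if cost[i][j] <= gate]

cost = [[1.0, 7.0, 9.0],
        [6.0, 2.0, 8.0],
        [5.0, 4.0, 0.5]]
print(gnn_assign(cost))  # [(0, 0), (1, 1), (2, 2)]
```

Minimizing the total cost (rather than greedily pairing each detection) is what prevents one ambiguous detection from stealing another track's match.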
Method for Direct Measurement of Cosmic Acceleration by 21-cm Absorption Systems
NASA Astrophysics Data System (ADS)
Yu, Hao-Ran; Zhang, Tong-Jie; Pen, Ue-Li
2014-07-01
So far there is only indirect evidence that the Universe is undergoing an accelerated expansion. The evidence for cosmic acceleration is based on the observation of different objects at different distances and requires invoking the Copernican cosmological principle and Einstein's equations of motion. We examine the direct observability using recession velocity drifts (Sandage-Loeb effect) of 21-cm hydrogen absorption systems in upcoming radio surveys. This measures the change in velocity of the same objects separated by a time interval and is a model-independent measure of acceleration. We forecast that for a CHIME-like survey with a decade time span, we can detect the acceleration of a ΛCDM universe with 5σ confidence. This acceleration test requires modest data analysis and storage changes from the normal processing and cannot be recovered retroactively.
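The recession-velocity drift such a survey would measure follows from the redshift evolution of a comoving source. Over an observer time span $\Delta t_{\mathrm{obs}}$, the standard Sandage-Loeb relations give:

```latex
\Delta z = \bigl[(1+z)H_0 - H(z)\bigr]\,\Delta t_{\mathrm{obs}},
\qquad
\Delta v = \frac{c\,\Delta z}{1+z}
         = c\,H_0\,\Delta t_{\mathrm{obs}}\left[1 - \frac{H(z)}{(1+z)H_0}\right].
```

For flat $\Lambda$CDM, $H(z) = H_0\sqrt{\Omega_m(1+z)^3 + \Omega_\Lambda}$, and $\Delta v$ is positive at low redshift; that sign is the model-independent acceleration signal a CHIME-like survey would accumulate over a decade of 21-cm absorption-line monitoring.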
Orbital Evolution and Physical Characteristics of Object "Peggy" at the Edge of Saturn's A Ring
NASA Astrophysics Data System (ADS)
Murray, C.; Cooper, N. J.; Noyelles, B.; Renner, S.; Araujo, N.
2016-12-01
Images taken with the Cassini ISS instrument on 2013 April 15 showed the presence of a bright, extended object at the edge of Saturn's A ring. The gravitational signature of the object often appears as a discontinuity in the azimuthal profile of the ring edge, and a subsequent analysis revealed that the object (nicknamed "Peggy") was detectable in ISS images as far back as 2012. The morphology of the signature is a function of the orbital phase, suggesting that the object has a relative eccentricity or periapse with respect to the surrounding ring material. Tracking the signature allows a determination of the object's semi-major axis, and following the initial detection this has varied by as much as 5 km. At no stage has the object been as bright as it was at the time of its discovery, suggesting that a collisional event had recently occurred. Here we report on the latest Cassini ISS observations of "Peggy" and their interpretation. These will include the calculated changes in its semi-major axis since 2013, constraints on its mass based on numerical integrations of its perturbing effect on adjacent ring particles, and what has been learned from a recent 8 h sequence of high resolution ISS images specially designed to track "Peggy" for more than half an orbital period.
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important, as the content of videos varies a lot, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change, and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While we believe that feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two modes of PBOD, deterministic and probabilistic, have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches with Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other, simpler detection methods due to its heavy processing requirements. PMID:23202226
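The probabilistic matching step, modelling histogram bins as independent Poisson counts and choosing the maximum-likelihood candidate patch, can be sketched as follows; the bin counts and candidate rates are toy values, not the paper's RGB/HSV histograms:

```python
import math

def poisson_log_likelihood(counts, rates):
    """Log-likelihood of observed histogram bin counts under independent
    Poisson bins with the candidate patch's rates."""
    ll = 0.0
    for k, lam in zip(counts, rates):
        lam = max(lam, 1e-9)                       # guard empty bins
        ll += k * math.log(lam) - lam - math.lgamma(k + 1)
    return ll

def best_match(observed, candidates):
    """Maximum-likelihood patch selection among candidate histograms."""
    return max(candidates, key=lambda c: poisson_log_likelihood(observed, c))

obs = [12, 30, 8]
cands = [[11, 29, 9], [2, 5, 40]]
assert best_match(obs, cands) == [11, 29, 9]       # the similar histogram wins
```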
Class imbalance in unsupervised change detection - A diagnostic analysis from urban remote sensing
NASA Astrophysics Data System (ADS)
Leichtle, Tobias; Geiß, Christian; Lakes, Tobia; Taubenböck, Hannes
2017-08-01
Automatic monitoring of changes on the Earth's surface is an intrinsic capability and simultaneously a persistent methodological challenge in remote sensing, especially regarding imagery with very-high spatial resolution (VHR) and complex urban environments. In order to enable a high level of automatization, the change detection problem is solved in an unsupervised way to alleviate efforts associated with collection of properly encoded prior knowledge. In this context, this paper systematically investigates the nature and effects of class distribution and class imbalance in an unsupervised binary change detection application based on VHR imagery over urban areas. For this purpose, a diagnostic framework for sensitivity analysis of a large range of possible degrees of class imbalance is presented, which is of particular importance with respect to unsupervised approaches where the content of images and thus the occurrence and the distribution of classes are generally unknown a priori. Furthermore, this framework can serve as a general technique to evaluate model transferability in any two-class classification problem. The applied change detection approach is based on object-based difference features calculated from VHR imagery and subsequent unsupervised two-class clustering using k-means, genetic k-means and self-organizing map (SOM) clustering. The results from two test sites with different structural characteristics of the built environment demonstrated that classification performance is generally worse in imbalanced class distribution settings while best results were reached in balanced or close to balanced situations. Regarding suitable accuracy measures for evaluating model performance in imbalanced settings, this study revealed that the Kappa statistics show significant response to class distribution while the true skill statistic was widely insensitive to imbalanced classes. 
In general, the genetic k-means clustering algorithm achieved the most robust results with respect to class imbalance while the SOM clustering exhibited a distinct optimization towards a balanced distribution of classes.
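The contrast between the Kappa statistic and the true skill statistic under class imbalance is easy to verify from a binary confusion matrix; the counts below are illustrative:

```python
def kappa_and_tss(tp, fn, fp, tn):
    """Cohen's kappa and the true skill statistic (TSS = sensitivity +
    specificity - 1) from a binary confusion matrix."""
    n = tp + fn + fp + tn
    po = (tp + tn) / n                                  # observed agreement
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2  # chance
    kappa = (po - pe) / (1 - pe)
    tss = tp / (tp + fn) + tn / (tn + fp) - 1
    return kappa, tss

# same per-class accuracies (90% / 80%), balanced vs heavily imbalanced
k_bal, tss_bal = kappa_and_tss(90, 10, 20, 80)
k_imb, tss_imb = kappa_and_tss(9, 1, 200, 800)
assert abs(tss_bal - tss_imb) < 1e-9   # TSS is insensitive to prevalence
assert k_imb < k_bal                   # kappa responds to class distribution
```

Holding sensitivity and specificity fixed while skewing the class distribution leaves TSS unchanged but depresses kappa, which is exactly the behaviour the diagnostic analysis above reports.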
Rapid Disaster Analysis based on Remote Sensing: A Case Study about the Tohoku Tsunami Disaster 2011
NASA Astrophysics Data System (ADS)
Yang, C. H.; Soergel, U.; Lanaras, Ch.; Baltsavias, E.; Cho, K.; Remondino, F.; Wakabayashi, H.
2014-09-01
In this study, we present first results of RAPIDMAP, a project funded by the European Union in a framework aiming to foster the cooperation of European countries with Japan in R&D. The main objective of RAPIDMAP is to construct a Decision Support System (DSS) based on remote sensing data and WebGIS technologies, where users can easily access real-time information assisting with disaster analysis. In this paper, we present a case study of the Tohoku Tsunami Disaster 2011. We address two approaches, namely change detection based on SAR data and co-registration of optical and SAR satellite images. With respect to SAR data, our efforts are subdivided into three parts: (1) initial coarse change detection for the entire area, (2) flood area detection, and (3) linear-feature change detection. The investigations are based on pre- and post-event TerraSAR-X images. In (1), the pre- and post-event TerraSAR-X images are accurately co-registered and radiometrically calibrated. The data are fused in a false-color image that provides a quick and rough overview of potential changes, which is useful for initial decision making and for identifying areas worth analysing further in more depth. However, a number of inevitable false alarms appear within the scene, caused by speckle, temporal decorrelation, co-registration inaccuracy, and so on. In (2), the post-event TerraSAR-X data are used to extract the flood area using thresholding and morphological approaches. The validated result indicates that combining SAR data with suitable morphological approaches is a quick and effective way to detect the flood area. In addition to the SAR data, a false-color image composed of optical images is also used to detect the flood area for further exploration in this part. In (3), Curvelet filtering is applied to the difference image of the pre- and post-event TerraSAR-X images, not only to suppress false alarms from irregular features but also to enhance the change signals of linear features (e.g.
buildings) in settlements. Afterwards, thresholding is exploited to extract the linear-feature changes. In rapid mapping of disasters, various sensors are often employed, including optical and SAR, since they provide complementary information. Such data need to be analyzed in an integrated fashion, and the results from each dataset should be integrated in a GIS with a common coordinate reference system. Thus, if no orthoimages can be generated, the images should be co-registered by matching common features. We present results of co-registration between optical (FORMOSAT-2) and TerraSAR-X images based on different matching methods, as well as techniques for detecting and eliminating matching errors.
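A minimal sketch of difference-image thresholding with simple cleanup, in the spirit of part (2); the log-ratio difference, global threshold, and 3x3 majority filter are stand-in assumptions for the paper's calibrated SAR processing and morphological operations:

```python
import numpy as np

def change_mask(pre, post, k=3.0):
    """Threshold the log-ratio difference of co-registered SAR images;
    a 3x3 majority filter stands in for morphological cleaning."""
    d = np.log1p(post) - np.log1p(pre)          # log ratio tempers speckle
    mask = np.abs(d) > d.mean() + k * d.std()   # global threshold (assumption)
    # majority filter: keep a pixel only if most of its 3x3 neighbourhood agrees
    padded = np.pad(mask, 1)
    votes = sum(padded[i:i + mask.shape[0], j:j + mask.shape[1]]
                for i in range(3) for j in range(3))
    return votes >= 5

rng = np.random.default_rng(3)
pre = rng.gamma(2.0, 1.0, (64, 64))             # speckle-like background
post = pre.copy()
post[20:30, 20:30] += 10.0                      # flooded patch changes intensity
mask = change_mask(pre, post)
assert mask[24, 24] and not mask[5, 5]          # patch flagged, background clean
```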
Progressive disease in glioblastoma: Benefits and limitations of semi-automated volumetry
Alber, Georgina; Bette, Stefanie; Kaesmacher, Johannes; Boeckh-Behrens, Tobias; Gempt, Jens; Ringel, Florian; Specht, Hanno M.; Meyer, Bernhard; Zimmer, Claus
2017-01-01
Purpose: Unambiguous evaluation of glioblastoma (GB) progression is crucial, both for clinical trials and for day-to-day routine management of GB patients. 3D volumetry in the follow-up of GB provides quantitative data on tumor extent and growth, and therefore has the potential to facilitate objective disease assessment. The present study investigated the utility of absolute changes in volume (delta) and of regional, segmentation-based subtractions for detecting disease progression in longitudinal MRI follow-ups. Methods: 165 high-resolution 3-Tesla MRIs of 30 GB patients (23 male, mean age 60.2 y) were retrospectively included in this single-center study. Contrast enhancement (CV) and tumor-related signal alterations in FLAIR images (FV) were semi-automatically segmented. Delta volumes (dCV, dFV) and regional subtractions (sCV, sFV) were calculated. Disease progression was classified for every follow-up according to histopathologic results, decisions of the local multidisciplinary CNS tumor board, and a consensus rating of the neuro-radiologic report. Results: A generalized logistic mixed model for disease progression (yes/no) with dCV, dFV, sCV and sFV as input variables revealed that only dCV was significantly associated with prediction of disease progression (P = .005). Delta volume had better accuracy than regional, segmentation-based subtractions (79% versus 72%) and, by trend, a higher area under the curve in ROC analyses (.83 versus .75). Conclusion: Absolute volume changes of the contrast-enhancing tumor part were the most accurate volumetric determinant for detecting progressive disease in the assessment of GB, and outweighed FLAIR changes as well as regional, segmentation-based image subtractions. This parameter might be useful in upcoming objective response criteria for glioblastoma. PMID:28245291
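The delta-volume predictor and its ROC comparison can be illustrated with a minimal sketch. The volumes and progression labels below are hypothetical, and the rank-based (Mann-Whitney) AUC is a standard stand-in for the study's ROC analysis, not its actual statistical model.

```python
# Sketch of the volumetric comparison: absolute change in segmented volume
# between consecutive follow-up scans (dCV in the abstract's notation),
# scored against progression labels with a rank-based AUC.

def delta_volumes(volumes):
    """Change in segmented volume between consecutive scans."""
    return [b - a for a, b in zip(volumes, volumes[1:])]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For instance, a follow-up series of contrast-enhancing volumes 10.0, 12.5, 11.0, 19.0 mL yields deltas 2.5, -1.5, 8.0, and the AUC measures how well large deltas separate progressing from stable follow-ups.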
Non-Verbal Communicative Signals Modulate Attention to Object Properties
Marno, Hanna; Davelaar, Eddy J.; Csibra, Gergely
2015-01-01
We investigated whether the social context in which an object is experienced influences the encoding of its various properties. We hypothesized that when an object is observed in a communicative context, its intrinsic features (such as its shape) would be preferentially encoded at the expense of its extrinsic properties (such as its location). In three experiments, participants were presented with brief movies, in which an actor either performed a non-communicative action towards one of five different meaningless objects, or communicatively pointed at one of them. A subsequent static image, in which either the location or the identity of an object changed, tested participants' attention to these two kinds of information. Throughout the three experiments we found that communicative cues tended to facilitate identity change detection and to impede location change detection, while in the non-communicative contexts we did not find such a bidirectional effect of cueing. The results also revealed that the effect of the communicative context was due to the presence of ostensive-communicative signals before the object-directed action, and not to the pointing gesture per se. We propose that such an attentional bias forms an inherent part of human communication, and functions to facilitate social learning by communication. PMID:24294871
Nonverbal communicative signals modulate attention to object properties.
Marno, Hanna; Davelaar, Eddy J; Csibra, Gergely
2014-04-01
We investigated whether the social context in which an object is experienced influences the encoding of its various properties. We hypothesized that when an object is observed in a communicative context, its intrinsic features (such as its shape) would be preferentially encoded at the expense of its extrinsic properties (such as its location). In 3 experiments, participants were presented with brief movies, in which an actor either performed a noncommunicative action toward 1 of 5 different meaningless objects, or communicatively pointed at 1 of them. A subsequent static image, in which either the location or the identity of an object changed, tested participants' attention to these 2 kinds of information. Throughout the 3 experiments we found that communicative cues tended to facilitate identity change detection and to impede location change detection, whereas in the noncommunicative contexts we did not find such a bidirectional effect of cueing. The results also revealed that the effect of the communicative context was a result of the presence of ostensive-communicative signals before the object-directed action, and not of the pointing gesture per se. We propose that such an attentional bias forms an inherent part of human communication, and functions to facilitate social learning by communication.
Utilization of ALOS PALSAR-2 Data for Mangrove Detection Using OBIA Method Approach
NASA Astrophysics Data System (ADS)
Anggraini, N.; Julzarika, A.
2017-12-01
Mangroves have an important role in climate change mitigation because they have high carbon stock potential. The ability of mangroves to absorb carbon is very high, and the mangrove carbon stock is estimated to reach 1023 Mg C. The current problem is that the area of mangrove forest is decreasing due to land conversion. One technology that can be used to detect changes in mangrove forest area is ALOS PALSAR-2 satellite imagery. The purpose of this research is to detect mangrove forest area from ALOS PALSAR-2 data by using the object-based image analysis (OBIA) method. The study location is Taman Nasional Sembilang in Banyuasin Regency, South Sumatra. The data used are dual-polarization (HH and HV) ALOS PALSAR-2 imagery acquired in 2015. The calculation of mangrove forest area in Sembilang National Park achieved ~82% accuracy. The results of this study can be used for various applications and mapping activities.
Spectral pattern classification in lidar data for rock identification in outcrops.
Campos Inocencio, Leonardo; Veronez, Mauricio Roberto; Wohnrath Tognoli, Francisco Manoel; de Souza, Marcelo Kehl; da Silva, Reginaldo Macedônio; Gonzaga, Luiz; Blum Silveira, César Leonardo
2014-01-01
The present study aimed to develop and implement a method for detection and classification of spectral signatures in point clouds obtained from a terrestrial laser scanner, in order to identify the presence of different rocks in outcrops and to generate a digital outcrop model. To achieve this objective, software based on cluster analysis, named K-Clouds, was created. This software was developed through a partnership between UNISINOS and the company V3D. The tool begins with an analysis and interpretation of a histogram from a point cloud of the outcrop; the user then indicates a number of classes, which is used to process the intensity return values. This classified information can then be interpreted by geologists, to provide a better understanding and identification of the rocks present in the outcrop. Beyond the detection of different rocks, this work was able to detect small changes in the physical-chemical characteristics of the rocks, such as those caused by weathering or compositional changes.
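The histogram-and-classes workflow described for K-Clouds can be approximated by a plain 1-D k-means over laser return intensities, with the number of classes supplied by the user. This is a sketch in the spirit of the described tool, not the actual software; the intensity values and initial centers in the example are invented.

```python
# Sketch of user-guided intensity clustering: a 1-D k-means that groups
# laser return intensities into a user-chosen number of classes, which a
# geologist could then map onto rock types.

def kmeans_1d(values, centers, iters=20):
    """Cluster scalar intensities around the given initial `centers`;
    returns the final centers and one class label per value."""
    for _ in range(iters):
        # Assign each value to its nearest center.
        labels = [min(range(len(centers)), key=lambda k: abs(v - centers[k]))
                  for v in values]
        # Move each center to the mean of its members.
        for k in range(len(centers)):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return centers, labels
```

With two requested classes, a bimodal set of intensities separates into two stable centers, mimicking the histogram-driven class assignment described above.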
Two visual systems in monitoring of dynamic traffic: effects of visual disruption.
Zheng, Xianjun Sam; McConkie, George W
2010-05-01
Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, the ventral and the dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experimental results show that without visual disruption, all changes were detected very well; yet these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also larger for parked vehicles than for moving ones. The findings support different roles for the two visual systems in monitoring dynamic traffic: the "where" (dorsal) system tracks vehicle spatiotemporal information at a perceptual level, encoding information in a coarse and transient manner, whereas the "what" (ventral) system monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together, contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed.
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Brown, A.; Brown, J.
2010-09-01
We develop and evaluate the performance of advanced algorithms which provide significantly improved capabilities for automated detection and tracking of ballistic and flying dim objects in the presence of highly structured, intense clutter. Applications include ballistic missile early warning, midcourse tracking, trajectory prediction, and resident space object detection and tracking. The set of algorithms includes, in particular, adaptive spatiotemporal clutter estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into geostationary, highly elliptical, or low earth orbit scanning or staring sensor suites, and are based on data-driven processing that adapts to real-world clutter backgrounds, including celestial, earth limb, or terrestrial clutter. In many scenarios of interest, e.g., for highly elliptical and, especially, low earth orbits, the resulting clutter is highly nonstationary, posing a significant challenge for clutter suppression to or below sensor noise levels, which is essential for dim object detection and tracking. We demonstrate the success of the developed algorithms using semi-synthetic and real data. In particular, our algorithms are shown to be capable of detecting and tracking point objects with signal-to-clutter ratios down to 1/1000 and signal-to-noise ratios down to 1/4.
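The adaptive temporal side of clutter estimation-suppression can be illustrated with a toy recursive background estimator. This is a minimal sketch, not the paper's algorithm: a per-pixel exponentially weighted background is subtracted from each frame, so slowly varying structured clutter cancels while a transient dim object survives in the residual. The smoothing factor is an illustrative assumption.

```python
# Toy sketch of adaptive temporal clutter suppression: maintain a running
# (exponentially weighted) per-pixel background estimate and subtract it
# from each incoming frame, leaving a residual in which moving or
# newly appearing objects stand out.

def suppress_clutter(frames, alpha=0.5):
    """Return residual frames after recursive background subtraction.
    `frames` is a list of 2D grids; `alpha` is the update rate."""
    bg = [row[:] for row in frames[0]]  # initialise background from frame 0
    residuals = []
    for frame in frames:
        res = [[f - b for f, b in zip(fr, br)] for fr, br in zip(frame, bg)]
        residuals.append(res)
        # Exponential update toward the current frame.
        bg = [[(1 - alpha) * b + alpha * f for f, b in zip(fr, br)]
              for fr, br in zip(frame, bg)]
    return residuals
```

Static clutter is driven to zero in the residual; a pixel that brightens suddenly retains nearly its full amplitude, which is the precondition for the track-before-detect stage.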
The Kepler Mission: A Search for Terrestrial Planets - Development Status
NASA Technical Reports Server (NTRS)
Koch, David; Borucki, W.; Mayer, D.; Caldwell, D.; Jenkens, J.; Dunham, E.; Geary, J.; Bachtell, E.; Deininger, W.; Philbrick, R.
2003-01-01
We have embarked on a mission to detect terrestrial planets. The space mission has been optimized to search for earth-size planets (0.5 to 10 earth masses) in the habitable zone (HZ) of solar-like stars. Given this design, the mission will necessarily be capable of detecting not only Earth analogs, but a wide range of planetary types and characteristics, ranging from Mercury-size objects with orbital periods of days to gas giants in decade-long orbits that have undeniable signatures even with only one transit detected. The mission is designed to survey the full range of spectral-type dwarf stars. The approach is to detect the periodic signal of transiting planets. Three or more transits of a star exceeding a combined threshold of eight sigma, with a statistically consistent period, brightness change and duration, provide a rigorous method of detection. From the relative brightness change the planet size can be calculated. From the period the orbital size can be calculated and its location relative to the HZ determined. Presented here are the mission goals; the top-level system design requirements derived from these goals that drive the flight system design; a number of the trades that have led to the mission concept; and the expected photometric performance dependence on stellar brightness and spectral type based on the system 'noise tree' analysis. Updated estimates are presented of the numbers of detectable planets versus size, orbit, stellar spectral type and distance, based on a planet frequency hypothesis. The current project schedule and organization are given.
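The detection rule described above can be sketched numerically: independent transit SNRs combine in quadrature, detection requires at least three transits whose combined significance exceeds the eight-sigma threshold, and the planet-to-star radius ratio follows from the transit depth. The specific SNR values in the example are illustrative.

```python
import math

# Sketch of the combined transit-detection statistic: per-transit SNRs
# add in quadrature, and a detection needs >= 3 transits with a combined
# significance above the 8-sigma threshold.

def combined_snr(snrs):
    """Combined significance of several independent transits."""
    return math.sqrt(sum(s * s for s in snrs))

def is_detection(snrs, threshold=8.0, min_transits=3):
    """Apply the three-transit, eight-sigma detection rule."""
    return len(snrs) >= min_transits and combined_snr(snrs) >= threshold

def planet_radius_ratio(depth):
    """Rp/Rstar from the fractional brightness change (transit depth),
    since the depth is the ratio of the disk areas."""
    return math.sqrt(depth)
```

Three transits at 5 sigma each combine to about 8.7 sigma and qualify, while two such transits do not, and a depth of 1e-4 implies a planet about 1% of the stellar radius.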
Monocular Vision-Based Underwater Object Detection
Zhang, Zhen; Dai, Fengzhao; Bu, Yang; Wang, Huibin
2017-01-01
In this paper, we propose an underwater object detection method using monocular vision sensors. In addition to commonly used visual features such as color and intensity, we investigate the potential of underwater object detection using light transmission information. The global contrast of various features is used to initially identify the region of interest (ROI), which is then filtered by the image segmentation method, producing the final underwater object detection results. We test the performance of our method with diverse underwater datasets. Samples of the datasets are acquired by a monocular camera with different qualities (such as resolution and focal length) and setups (viewing distance, viewing angle, and optical environment). It is demonstrated that our ROI detection method is necessary and can largely remove the background noise and significantly increase the accuracy of our underwater object detection method. PMID:28771194
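The global-contrast ROI step can be sketched with a single-feature toy example: score each pixel by its distance from the image-wide mean and keep the high-contrast pixels as the initial region of interest. Using one intensity feature and a fixed fraction of the peak contrast is an illustrative simplification of the paper's multi-feature scheme.

```python
# Sketch of global-contrast ROI selection: a pixel is salient when its
# feature value deviates strongly from the global mean; the most
# deviating pixels form the initial region of interest.

def global_contrast_roi(img, frac=0.5):
    """Mask of pixels whose |value - global mean| exceeds `frac` times
    the maximum observed deviation."""
    vals = [v for row in img for v in row]
    mean = sum(vals) / len(vals)
    peak = max(abs(v - mean) for v in vals)
    return [[1 if abs(v - mean) > frac * peak else 0 for v in row]
            for row in img]
```

A single bright object on a uniform background survives the mask while the background is suppressed, which is the noise-removal role the ROI stage plays before segmentation.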
Object-Oriented Image Clustering Method Using UAS Photogrammetric Imagery
NASA Astrophysics Data System (ADS)
Lin, Y.; Larson, A.; Schultz-Fellenz, E. S.; Sussman, A. J.; Swanson, E.; Coppersmith, R.
2016-12-01
Unmanned Aerial Systems (UAS) have been used widely as an imaging modality to obtain remotely sensed multi-band surface imagery, and are growing in popularity due to their efficiency, ease of use, and affordability. Los Alamos National Laboratory (LANL) has employed UAS for geologic site characterization and change detection studies at a variety of field sites. The deployed UAS was equipped with a standard visible-band camera to collect imagery datasets. Based on the imagery collected, we use deep sparse algorithmic processing to detect and discriminate subtle topographic features created or impacted by subsurface activities. In this work, we develop an object-oriented remote sensing imagery clustering method for land cover classification. To improve the clustering and segmentation accuracy, instead of using conventional pixel-based clustering methods, we integrate the spatial information from neighboring regions to create super-pixels, avoiding salt-and-pepper noise and subsequent over-segmentation. To further improve the robustness of our clustering method, we also incorporate a custom digital elevation model (DEM) dataset, generated using a structure-from-motion (SfM) algorithm, together with the red, green, and blue (RGB) band data for clustering. In particular, we first employ agglomerative clustering to create an initial segmentation map, in which every object is treated as a single (new) pixel. Based on the new pixels obtained, we generate new features to implement another level of clustering. We apply our clustering method to the RGB+DEM datasets collected at the field site. Through binary clustering and multi-object clustering tests, we verify that our method can accurately separate vegetation from non-vegetation regions, and is also able to differentiate object features on the surface.
Human Location Detection System Using Micro-Electromechanical Sensor for Intelligent Fan
NASA Astrophysics Data System (ADS)
Parnin, S.; Rahman, M. M.
2017-03-01
This paper presents the development of a sensory system for detecting both the presence and the location of humans in room spaces using a MEMS thermal sensor. The system is able to detect the surface temperature of occupants by non-contact sensing at distances of up to 6 meters. It can be integrated into any swing-type electrical appliance, such as a standing fan or similar device. Differentiating humans from other moving or static objects by heat alone is nearly impossible, since humans, animals and electrical appliances all produce heat; the uncontrollable properties of heat, which can change and transfer, add to the detection problem. Integrating a low-cost MEMS-based thermal sensor solves the first part of the human-sensing problem through its ability to detect stationary humans. Further discrimination and analysis must therefore be applied to the measured temperature data to distinguish humans from other objects. In this project, the fan is designed and programmed so that it can adapt to different events, from the human-sensing stage to its dynamic and mechanical moving parts. Initial testing of the Omron D6T microelectromechanical thermal sensor is currently underway across several experimental stages. Experimental results show that stationary and moving humans are behaviorally differentiable, and that the system successfully locates the human position by detecting the maximum temperature in each sensor reading.
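The maximum-temperature localization step can be sketched as follows. The 4x4 grid size and the 2.0-degree margin over the ambient mean are illustrative assumptions for a low-resolution thermal frame, not the project's actual configuration.

```python
# Sketch of locating an occupant from a low-resolution thermal frame:
# pick the hottest cell, but only accept it as a human candidate when it
# stands out from the mean background by a margin, so a uniformly warm
# empty room does not trigger a detection.

def locate_human(frame, margin=2.0):
    """Return (row, col) of the hottest cell if it exceeds the mean
    background temperature by at least `margin` degrees, else None."""
    cells = [t for row in frame for t in row]
    mean = sum(cells) / len(cells)
    hottest = max(cells)
    if hottest - mean < margin:
        return None  # nothing stands out from ambient
    for r, row in enumerate(frame):
        for c, t in enumerate(row):
            if t == hottest:
                return (r, c)
```

The returned cell index is what a swing-type fan would translate into a pan angle toward the occupant.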
Perceptual Wholes Can Reduce the Conscious Accessibility of Their Parts
ERIC Educational Resources Information Center
Poljac, Ervin; de-Wit, Lee; Wagemans, Johan
2012-01-01
Humans can rapidly extract object and category information from an image despite surprising limitations in detecting changes to the individual parts of that image. In this article we provide evidence that the construction of a perceptual whole, or Gestalt, reduces awareness of changes to the parts of this object. This result suggests that the…
NASA Astrophysics Data System (ADS)
Kwon, Seong Kyung; Hyun, Eugin; Lee, Jin-Hee; Lee, Jonghun; Son, Sang Hyuk
2017-11-01
Object detection is a critical technology for the safety of pedestrians and drivers in autonomous vehicles. Above all, occluded pedestrian detection remains a challenging topic. We propose a new scheme for occluded pedestrian detection by means of lidar-radar sensor fusion. In the proposed method, the lidar and radar regions of interest (RoIs) are selected based on the respective sensor measurements. Occluded depth is a new means to determine whether an occluded target exists or not. The occluded depth is the region projected out by extending the longitudinal distance while maintaining the angle formed by the two outermost end points of the lidar RoI. The occlusion RoI is the overlapped region made by superimposing the radar RoI and the occluded depth. An object within the occlusion RoI is detected from the radar measurement information, and the occluded object is classified as a pedestrian based on the human Doppler distribution. Additionally, various experiments are performed on detecting a partially occluded pedestrian in both outdoor and indoor environments. According to the experimental results, the proposed sensor fusion scheme has much better detection performance than the case without it.
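The occluded-depth construction lends itself to a small geometric sketch: the two outermost lidar RoI points define an angular sector, and a radar return at longer range inside that sector lies behind the occluder. The sensor-centred coordinates in the example are invented, and the range test against the farther RoI endpoint is a simplification of the region described in the abstract.

```python
import math

# Geometric sketch of the "occluded depth": a radar target is a candidate
# occluded object when its bearing falls between the bearings of the two
# outermost lidar RoI endpoints and its range exceeds the occluder's.

def in_occluded_depth(roi_left, roi_right, target):
    """True if `target` (x, y) lies beyond the lidar RoI and within the
    angle spanned by its two outermost endpoints."""
    a_left = math.atan2(roi_left[1], roi_left[0])
    a_right = math.atan2(roi_right[1], roi_right[0])
    lo, hi = min(a_left, a_right), max(a_left, a_right)
    a_t = math.atan2(target[1], target[0])
    r_t = math.hypot(target[0], target[1])
    r_roi = max(math.hypot(roi_left[0], roi_left[1]),
                math.hypot(roi_right[0], roi_right[1]))
    return lo <= a_t <= hi and r_t > r_roi
```

A target straight behind the occluder passes the test, while a target off to the side or in front of the occluder does not; the fused scheme then checks the radar Doppler signature of anything inside this region.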
(Semi-)Automated landform mapping of the alpine valley Gradental (Austria) based on LiDAR data
NASA Astrophysics Data System (ADS)
Strasser, T.; Eisank, C.
2012-04-01
Alpine valleys are typically characterised as complex, hierarchically structured systems with rapid landform changes. Detection of landform changes can be supported by automated geomorphological mapping. In particular, analysis over short time scales requires a method for standardised, unbiased geomorphological map reproduction, which automated mapping techniques deliver. In general, digital geomorphological mapping is a challenging task, since knowledge about landforms, with respect to their natural boundaries as well as their hierarchical and scaling relationships, has to be integrated in an objective way. A combination of very-high spatial resolution (VHSR) data such as LiDAR and new methods like object based image analysis (OBIA) allows for a more standardised production of geomorphological maps. In OBIA the processing units are spatially configured objects that are created by multi-scale segmentation. Therefore, not only spectral information can be used for assigning the objects to geomorphological classes, but also spatial and topological properties can be exploited. In this study we focus on the detection of landforms, especially bedrock and sediment deposits (alluvium, debris cones, talus, moraines, rock glaciers), as well as glaciers. The study site Gradental [N 46°58'29.1"/ E 12°48'53.8"] is located in the Schobergruppe (Austria, Carinthia) and is characterised by heterogeneous geological conditions and high process activity. The area is difficult to access and dominated by steep slopes, thus hindering fast and detailed geomorphological field mapping. Landforms are identified using aerial and terrestrial LiDAR data (1 m spatial resolution). These DEMs are analysed by an object-based hierarchical approach structured in three main steps. The first step is to define the occurring landforms by basic land surface parameters (LSPs), topology and hierarchy relations. Based on those definitions a semantic model is created.
Secondly, a multi-scale segmentation is performed on a three-band LSP layer that integrates slope, aspect and plan curvature, which express the driving forces of geomorphological processes. In the third step, the generated multi-level object structures are classified in order to produce the geomorphological map. The classification rules are derived from the semantic model. Due to landform-type-specific scale dependencies of the LSPs, the LSP values used in the classification are calculated in a multi-scale manner by constantly enlarging the size of the moving window. In addition, object form properties (density, compactness, rectangular fit) are utilised as additional information for landform characterisation. Validation of the classification is performed by intersecting a visually interpreted reference map with the classification output map and calculating accuracy matrices. Validation shows an overall accuracy of 78.25% and a Kappa of 0.65. The natural borders of landforms can be easily detected by the use of slope, aspect and plan curvature. This study illustrates the potential of OBIA for a more standardised and automated mapping of surface units (landforms, land cover). Therefore, the presented methodology offers a prospective automated geomorphological mapping approach for alpine regions.
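The validation step, intersecting a reference map with the classification output and computing accuracy measures, reduces to standard confusion-matrix arithmetic. The 2x2 matrix in the example is hypothetical, not the study's actual counts.

```python
# Sketch of the accuracy assessment: overall accuracy and Cohen's kappa
# computed from a confusion matrix (rows: reference classes, columns:
# classified classes).

def overall_accuracy(cm):
    """Fraction of samples on the confusion-matrix diagonal."""
    total = sum(sum(row) for row in cm)
    return sum(cm[i][i] for i in range(len(cm))) / total

def cohens_kappa(cm):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e), where
    p_e comes from the row and column marginals."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(n)) / total
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(n)) / total ** 2
    return (po - pe) / (1 - pe)
```

Kappa is always at or below the raw accuracy because it discounts agreement expected by chance, which is why the study reports both figures (78.25% and 0.65).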
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper, an image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about the carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on preliminary detection, region of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but, unlike many other approaches, is designed to work with large-scale objects as well. To localize the aerial vehicle's position, a system of equations relating object coordinates in space to the observed image is solved. The solution gives the current position and speed of the detected object in space. Using this information, the distance and time to collision can be estimated. Experimental research on real video sequences and modeled data was performed. The video database contained different types of aerial vehicles: aircraft, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers away under regular daylight conditions.
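The final step described above, estimating distance and time to collision once the detected object's position and velocity are known, can be sketched directly: the closing speed is the projection of the relative velocity onto the line of sight, and time to collision is range divided by closing speed. The numeric values in the example are illustrative.

```python
import math

# Sketch of range and time-to-collision estimation from a recovered
# relative position and relative velocity (metres, metres/second).

def time_to_collision(rel_pos, rel_vel):
    """Range divided by closing speed; returns None when the detected
    object is not closing on the carrier."""
    dist = math.sqrt(sum(p * p for p in rel_pos))
    # Closing speed: negative of the radial component of relative velocity.
    closing = -sum(p * v for p, v in zip(rel_pos, rel_vel)) / dist
    if closing <= 0:
        return None
    return dist / closing
```

An aircraft 3 km ahead closing at 100 m/s gives 30 s to react; a receding object yields no collision estimate at all.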
Sex Differences in Object Location Memory: The Female Advantage of Immediate Detection of Changes
ERIC Educational Resources Information Center
Honda, Akio; Nihei, Yoshiaki
2009-01-01
Object location memory has been considered the only spatial ability in which females display an advantage over males. We examined sex differences in long-term object location memory. After participants studied an array of objects, they were asked to recall the locations of these objects three minutes later or one week later. Results showed a…
NASA Astrophysics Data System (ADS)
Spirou, Gloria M.; Vitkin, I. Alex; Wilson, B. C.; Whelan, William M.; Henrichs, Paul M.; Mehta, Ketan; Miller, Tom; Yee, Andrew; Meador, James; Oraevsky, Alexander A.
2004-07-01
The Laser Optoacoustic Imaging System (LOIS) combines high tissue contrast, based on the optical properties of tissue, with high spatial resolution, based on ultrawide-band ultrasonic detection. Patients undergoing thermal or photodynamic therapy of prostate cancer may benefit from the capability of LOIS to detect and monitor treatment-induced changes in tissue optical properties and blood flow. The performance of a prototype LOIS was evaluated via 2D optoacoustic images of dye-colored objects of various shapes, small tubes of blood simulating veins and arteries, and thermally coagulated portions of chicken breast embedded in tissue-mimicking gelatin phantoms. The optoacoustic image contrast was proportional to the ratio of the absorption coefficients between the embedded objects and the surrounding gel. The contrast of the venous blood relative to the background exceeded 250%, and the contrast of the thermally coagulated portions of flesh relative to the untreated tissue ranged from -100% to +200%, depending on the optical wavelength. We used a 32-element optoacoustic transducer array and a novel design of low-noise preamplifiers and wide-band amplifiers to perform these studies. The system was optimized for imaging at a depth of ~50 mm, and its spatial resolution was better than 1 mm. The advantages and limitations of various signal-processing methods were investigated. LOIS demonstrates clinical potential for non- or minimally-invasive monitoring of treatment-induced tissue changes.
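The contrast figures quoted above follow from a simple relation: taking contrast as the fractional signal difference between an embedded absorber and the background, it scales with the ratio of their absorption coefficients. The coefficient values in the example are hypothetical, chosen only to reproduce the arithmetic.

```python
# Sketch of the optoacoustic contrast relation: percent contrast of an
# embedded absorber relative to the surrounding background, which tracks
# the ratio of their absorption coefficients.

def image_contrast(mu_obj, mu_bg):
    """Percent contrast: 100 * (mu_obj - mu_bg) / mu_bg."""
    return 100.0 * (mu_obj - mu_bg) / mu_bg
```

An absorber 3.5x more absorbing than the gel gives 250% contrast, and a fully non-absorbing (e.g. coagulation-bleached) region bottoms out at -100%, matching the ends of the reported range.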
NASA Astrophysics Data System (ADS)
Campbell, A.; Wang, Y.
2017-12-01
Salt marshes are under increasing pressure from anthropogenic stressors including sea level rise, nutrient enrichment, herbivory and disturbances. Salt marsh losses put at risk the important ecosystem services they provide, including biodiversity, water filtration, wave attenuation, and carbon sequestration. This study determines salt marsh change on Fire Island National Seashore, a barrier island along the south shore of Long Island, New York. Object-based image analysis was used to classify Worldview-2 high-resolution satellite imagery and topobathymetric LiDAR data. The site was impacted by Hurricane Sandy in October 2012, which caused a breach in the barrier island and extensive overwash. In situ training data from vegetation plots were used to train a Random Forest classifier. The object-based Worldview-2 classification achieved an overall accuracy of 92.75%. Salt marsh change for the study site was determined by comparing the 2015 classification with a 1997 classification. The study found a shift from high marsh to low marsh and a reduction in Phragmites on Fire Island. Vegetation losses were observed along the edge of the marsh and in the marsh interior. The analysis agreed with many of the trends found throughout the region, including the reduction of high marsh and the decline of salt marsh. The reduction in Phragmites could be due to the species' shrinking niche between rising seas and dune vegetation on barrier islands. The complex management issues facing salt marshes across the United States, including sea level rise and eutrophication, necessitate very-high-resolution classification and change detection of salt marsh to inform management decisions such as restoration, salt marsh migration, and nutrient inputs.
Vehicle detection and orientation estimation using the radon transform
NASA Astrophysics Data System (ADS)
Pelapur, Rengarajan; Bunyak, Filiz; Palaniappan, Kannappan; Seetharaman, Gunasekaran
2013-05-01
Determining the location and orientation of vehicles in satellite and airborne imagery is a challenging task given the density of cars and other vehicles and the complexity of the environment in urban scenes almost anywhere in the world. We have developed a robust and accurate method for detecting vehicles using template-based directional chamfer matching, combined with vehicle orientation estimation based on a refined segmentation, followed by a Radon transform based profile variance peak analysis approach. The same algorithm was applied to both high resolution satellite imagery and wide area aerial imagery, and initial results show robustness to illumination changes and geometric appearance distortions. Nearly 80% of the orientation angle estimates for 1585 vehicles across both satellite and aerial imagery were accurate to within 15° of the ground truth. In the case of satellite imagery alone, nearly 90% of the objects have an estimated error within ±1.0° of the ground truth.
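The profile variance peak idea can be shown with a simplified stand-in: project the segmented vehicle pixels onto each candidate direction and pick the angle that maximises the variance of the projected coordinates, which for an elongated blob aligns with the vehicle axis. This is not the paper's Radon-transform implementation, and the one-degree angular step is an illustrative choice.

```python
import math

# Simplified stand-in for profile variance peak analysis: the projection
# of an elongated pixel blob has maximum spread (variance) along the
# blob's long axis, so scanning candidate angles recovers orientation.

def orientation_deg(points):
    """Angle in [0, 180) degrees maximising projected-coordinate variance
    of the (x, y) pixel coordinates in `points`."""
    best_angle, best_var = 0, -1.0
    for deg in range(180):
        a = math.radians(deg)
        proj = [x * math.cos(a) + y * math.sin(a) for x, y in points]
        mean = sum(proj) / len(proj)
        var = sum((p - mean) ** 2 for p in proj) / len(proj)
        if var > best_var:
            best_angle, best_var = deg, var
    return best_angle
```

Points laid out along a 30-degree line are recovered at exactly that angle; orientation is inherently modulo 180 degrees, matching how vehicle heading ambiguity is usually handled.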
Self-Learning Embedded System for Object Identification in Intelligent Infrastructure Sensors.
Villaverde, Monica; Perez, David; Moreno, Felix
2015-11-17
The emergence of new horizons in the field of travel assistance management leads to the development of cutting-edge systems focused on improving existing ones. Moreover, new opportunities are also arising as systems tend to become more reliable and autonomous. In this paper, a self-learning embedded system for object identification based on adaptive-cooperative dynamic approaches is presented for intelligent sensor infrastructures. The proposed system is able to detect and identify moving objects using a dynamic decision tree. Consequently, it combines machine learning algorithms and cooperative strategies in order to make the system more adaptive to changing environments. The proposed system may therefore be very useful for many applications, such as shadow tolling (since several types of vehicles can be distinguished), parking optimization systems, and improved traffic condition systems.
An object tracking method based on guided filter for night fusion image
NASA Astrophysics Data System (ADS)
Qian, Xiaoyan; Wang, Yuedong; Han, Lei
2016-01-01
Online object tracking is a challenging problem, as it entails learning an effective model to account for appearance change caused by intrinsic and extrinsic factors. In this paper, we propose a novel online object tracking method with a guided image filter for accurate and robust night fusion image tracking. Firstly, frame differencing is applied to produce a coarse target, which helps to generate the observation models. Under the restriction of these models and the local source image, the guided filter generates a sufficient and accurate foreground target. Then accurate boundaries of the target can be extracted from the detection results. Finally, timely updating of the observation models helps to avoid tracking drift. Both qualitative and quantitative evaluations on challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against several state-of-the-art methods.
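The first stage described above, frame differencing to obtain a coarse target that seeds the observation models, can be sketched as follows. The intensity-change threshold is an illustrative assumption.

```python
# Sketch of coarse target extraction by frame differencing: pixels whose
# intensity changed significantly between consecutive frames form the
# candidate target mask, summarised by its bounding box.

def frame_difference_mask(prev, curr, thresh=10):
    """Binary mask of pixels whose intensity changed by more than
    `thresh` between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0 for p, c in zip(pr, cr)]
            for pr, cr in zip(prev, curr)]

def bounding_box(mask):
    """Tight (top, left, bottom, right) box around the set pixels,
    or None when nothing changed."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(ys), min(xs), max(ys), max(xs)) if ys else None
```

The coarse box is deliberately rough; in the described method the guided filter then refines it into an accurate foreground boundary.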
Automated image based prominent nucleoli detection
Yap, Choon K.; Kalaw, Emarene M.; Singh, Malay; Chong, Kian T.; Giron, Danilo M.; Huang, Chao-Hui; Cheng, Li; Law, Yan N.; Lee, Hwee Kuan
2015-01-01
Introduction: Nucleolar changes in cancer cells are one of the cytologic features important to the tumor pathologist in cancer assessments of tissue biopsies. However, inter-observer variability and the manual approach to this work hamper the accuracy of the assessment by pathologists. In this paper, we propose a computational method for prominent nucleoli pattern detection. Materials and Methods: Thirty-five hematoxylin and eosin stained images were acquired from prostate cancer, breast cancer, renal clear cell cancer and renal papillary cell cancer tissues. Prostate cancer images were used for the development of a computer-based automated prominent nucleoli pattern detector built on a cascade farm. An ensemble of approximately 1000 cascades was constructed by permuting different combinations of classifiers such as support vector machines, eXclusive component analysis, boosting, and logistic regression. The output of the cascades was then combined using the RankBoost algorithm. The output of our prominent nucleoli pattern detector is a ranked set of detected image patches of patterns of prominent nucleoli. Results: The mean number of detected prominent nucleoli patterns in the top 100 ranked detected objects was 58 in the prostate cancer dataset, 68 in the breast cancer dataset, 86 in the renal clear cell cancer dataset, and 76 in the renal papillary cell cancer dataset. The proposed cascade farm performs twice as well as the single cascade proposed in the seminal paper by Viola and Jones. For comparison, a naive algorithm that randomly chooses a pixel as a nucleoli pattern would detect five correct patterns in the first 100 ranked objects. Conclusions: Detection of sparse nucleoli patterns in a large background of highly variable tissue patterns is a difficult challenge our method has overcome. This study developed an accurate prominent nucleoli pattern detector with the potential to be used in clinical settings. PMID:26167383
Rhythmic Sampling within and between Objects despite Sustained Attention at a Cued Location
Fiebelkorn, Ian C.; Saalmann, Yuri B.; Kastner, Sabine
2013-01-01
The brain directs its limited processing resources through various selection mechanisms, broadly referred to as attention. The present study investigated the temporal dynamics of two such selection mechanisms: space- and object-based selection. Previous evidence has demonstrated that preferential processing resulting from a spatial cue (i.e., space-based selection) spreads to uncued locations if those locations are part of the same object (i.e., resulting in object-based selection). But little is known about the relationship between these fundamental selection mechanisms. Here, we used human behavioral data to determine how space- and object-based selection simultaneously evolve under conditions that promote sustained attention at a cued location, varying the cue-to-target interval from 300–1100 ms. We tracked visual-target detection at a cued location (i.e., space-based selection), at an uncued location that was part of the same object (i.e., object-based selection), and at an uncued location that was part of a different object (i.e., in the absence of space- and object-based selection). The data demonstrate that even under static conditions, there is a moment-to-moment reweighting of attentional priorities based on object properties. This reweighting is revealed through rhythmic patterns of visual-target detection both within (at 8 Hz) and between (at 4 Hz) objects. PMID:24316204
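The rhythmic detection patterns described above can be illustrated with a toy spectral analysis. This is not the study's code: the 1 s window, 10 ms sampling, and synthetic 8 Hz modulation are assumptions made for the sketch:

```python
import numpy as np

dt = 0.01                                   # assumed 10 ms behavioral sampling
t = np.arange(0.0, 1.0, dt)                 # 1 s window for a clean spectrum
accuracy = 0.7 + 0.1 * np.sin(2 * np.pi * 8.0 * t)  # synthetic 8 Hz rhythm

detrended = accuracy - accuracy.mean()      # remove the DC component
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(len(detrended), d=dt)
peak = freqs[np.argmax(spectrum)]
print(f"dominant rhythm: {peak:.1f} Hz")    # dominant rhythm: 8.0 Hz
```

In the actual study, the oscillation is estimated from hit rates across many cue-to-target intervals rather than from a synthetic signal.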
NASA Astrophysics Data System (ADS)
Akay, S. S.; Sertel, E.
2016-06-01
Urban land cover/use changes such as urbanization and urban sprawl have been impacting urban ecosystems significantly; therefore, determination of urban land cover/use changes is an important task for understanding trends and the status of urban ecosystems, supporting urban planning, and aiding decision-making for urban-based projects. High resolution satellite images can be used to accurately, periodically and quickly map urban land cover/use and their changes over time. This paper aims to determine urban land cover/use changes in the Gaziantep city centre between 2010 and 2015 using object-based image analysis and high resolution SPOT 5 and SPOT 6 images. A 2.5 m SPOT 5 image acquired on 5 June 2010 and a 1.5 m SPOT 6 image acquired on 7 July 2015 were used in this research to precisely determine land changes over the five-year period. In addition to the satellite images, various ancillary data, namely Normalized Difference Vegetation Index (NDVI) and Normalized Difference Water Index (NDWI) maps, cadastral maps, OpenStreetMap data, road maps and Land Cover maps, were integrated into the classification process to produce high accuracy urban land cover/use maps for these two years. Both images were geometrically corrected to fulfil the 1/10,000-scale geometric accuracy requirement. Decision-tree-based object-oriented classification was applied to identify twenty different urban land cover/use classes defined in the European Urban Atlas project. Not only satellite images and satellite image-derived indices but also different thematic maps were integrated into the decision tree analysis to create rule sets for accurate mapping of each class. The rule sets for the object-based classification of each satellite image involve spectral, spatial and geometric parameters to automatically produce an urban map of the city centre region. The total area of each class for each year and the changes over the five-year period were determined, and change trends in terms of class transformations were presented.
Classification accuracy assessment was conducted by creating a confusion matrix to illustrate the thematic accuracy of each class.
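The flavor of such index-based rule sets can be sketched as follows. The band values, thresholds and class rules are illustrative stand-ins, not the paper's actual decision tree:

```python
import numpy as np

# Illustrative index-based rule set for object/pixel classification.
# Thresholds (0.4, 0.3) and the three classes are assumptions for the sketch.

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    return (green - nir) / (green + nir + 1e-9)

def classify(nir, red, green):
    v, w = ndvi(nir, red), ndwi(green, nir)
    labels = np.full(nir.shape, "urban", dtype=object)  # default class
    labels[v > 0.4] = "vegetation"      # strongly vegetated pixels
    labels[w > 0.3] = "water"           # water rule applied last, dominates
    return labels

nir = np.array([0.6, 0.05, 0.2])
red = np.array([0.1, 0.04, 0.2])
green = np.array([0.1, 0.30, 0.2])
print(classify(nir, red, green))  # ['vegetation' 'water' 'urban']
```

A production rule set would add the spatial and geometric object parameters the abstract mentions (area, shape, adjacency), not just spectral indices.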
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capability of focusing its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image-plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data, such as a clip of a person's face, for recognition purposes.
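The coordinate translation both cameras perform can be sketched with a planar ground-plane homography mapping image detections into the shared map frame, where fusion happens. The 3x3 calibration matrix below is made up for illustration; a real system would estimate it from surveyed calibration points:

```python
import numpy as np

# Assumed calibration: image pixels -> map metres on the ground plane.
H = np.array([[0.05, 0.00, 2.0],
              [0.00, 0.05, 1.0],
              [0.00, 0.00, 1.0]])

def image_to_map(u, v, H):
    """Project an image-plane detection (u, v) into map coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w              # dehomogenize

print(image_to_map(100, 40, H))  # (7.0, 3.0)
```

With both cameras reporting positions in this common map frame, their track estimates can be fused (e.g., by covariance-weighted averaging).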
Rapid Processing of a Global Feature in the ON Visual Pathways of Behaving Monkeys.
Huang, Jun; Yang, Yan; Zhou, Ke; Zhao, Xudong; Zhou, Quan; Zhu, Hong; Yang, Yingshan; Zhang, Chunming; Zhou, Yifeng; Zhou, Wu
2017-01-01
Visual objects are recognized by their features. Whereas, some features are based on simple components (i.e., local features, such as orientation of line segments), some features are based on the whole object (i.e., global features, such as an object having a hole in it). Over the past five decades, behavioral, physiological, anatomical, and computational studies have established a general model of vision, which starts from extracting local features in the lower visual pathways followed by a feature integration process that extracts global features in the higher visual pathways. This local-to-global model is successful in providing a unified account for a vast sets of perception experiments, but it fails to account for a set of experiments showing human visual systems' superior sensitivity to global features. Understanding the neural mechanisms underlying the "global-first" process will offer critical insights into new models of vision. The goal of the present study was to establish a non-human primate model of rapid processing of global features for elucidating the neural mechanisms underlying differential processing of global and local features. Monkeys were trained to make a saccade to a target in the black background, which was different from the distractors (white circle) in color (e.g., red circle target), local features (e.g., white square target), a global feature (e.g., white ring with a hole target) or their combinations (e.g., red square target). Contrary to the predictions of the prevailing local-to-global model, we found that (1) detecting a distinction or a change in the global feature was faster than detecting a distinction or a change in color or local features; (2) detecting a distinction in color was facilitated by a distinction in the global feature, but not in the local features; and (3) detecting the hole was interfered by the local features of the hole (e.g., white ring with a squared hole). 
These results suggest that monkey ON visual systems have a subsystem that is more sensitive to distinctions in the global feature than local features. They also provide the behavioral constraints for identifying the underlying neural substrates.
NASA Astrophysics Data System (ADS)
Bayramov, Emil; Mammadov, Ramiz
2016-07-01
The main goals of this research are the object-based land-cover classification of LANDSAT-8 multi-spectral satellite images from 2014 and 2015, quantification of Normalized Difference Vegetation Index (NDVI) rates within the land-cover classes, change detection analysis between the NDVIs derived from the multi-temporal LANDSAT-8 satellite images, quantification of those changes within the land-cover classes, and detection of changes between land-cover classes. The object-based classification accuracy of the land-cover classes was validated through the standard confusion matrix, which revealed 80% land-cover classification accuracy for both years. The analysis revealed that the area of agricultural lands increased from 30911 sq. km. in 2014 to 31999 sq. km. in 2015. The area of barelands increased from 3933 sq. km. in 2014 to 4187 sq. km. in 2015. The area of forests increased from 8211 sq. km. in 2014 to 9175 sq. km. in 2015. The area of grasslands decreased from 27176 sq. km. in 2014 to 23294 sq. km. in 2015. The area of urban areas increased from 12479 sq. km. in 2014 to 12956 sq. km. in 2015. The decrease in the area of grasslands was mainly explained by land-use shifts from grasslands to agricultural and urban lands. The quantification of low and medium NDVI rates revealed increases within the agricultural, urban and forest land-cover classes in 2015. However, the high NDVI rates within the agricultural, urban and forest land-cover classes in 2015 proved to be lower relative to 2014. The change detection analysis between the land-cover types of 2014 and 2015 allowed us to determine that 7740 sq. km. of grasslands shifted to the agricultural land-cover type, whereas 5442 sq. km. of agricultural lands shifted to rangelands. This indicates that spatio-temporal shifts of agricultural activity occurred in Azerbaijan: some areas reduced agricultural activity, whereas others changed their land use to agriculture.
Based on the achieved results, it is possible to conclude that the area of agricultural lands in Azerbaijan increased from 2014 to 2015. Crop productivity also increased in the croplands; however, some areas showed lower productivity in 2015 relative to 2014.
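Class-transformation statistics of the kind reported above (e.g., grassland shifting to agriculture) can be computed by cross-tabulating the two classified maps into a from-class x to-class transition matrix. The labels and tiny maps below are illustrative:

```python
import numpy as np

classes = ["agri", "grass", "urban"]   # assumed toy class list

def transition_matrix(map_a, map_b, n_classes):
    """Count pixels moving from class a (rows) to class b (columns)."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        m[a, b] += 1
    return m

map2014 = np.array([1, 1, 1, 0, 2])    # mostly grassland
map2015 = np.array([0, 0, 1, 0, 2])    # two grass pixels became agriculture

m = transition_matrix(map2014, map2015, len(classes))
print(m[1, 0], "pixels shifted grass -> agri")  # 2 pixels shifted ...
```

Multiplying each count by the pixel ground area converts the matrix into the sq. km. transition figures quoted in the abstract; the diagonal gives unchanged area, and the same matrix doubles as a confusion matrix when one map is reference data.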
Attention During Natural Vision Warps Semantic Representation Across the Human Brain
Çukur, Tolga; Nishimoto, Shinji; Huth, Alexander G.; Gallant, Jack L.
2013-01-01
Little is known about how attention changes the cortical representation of sensory information in humans. Based on neurophysiological evidence, we hypothesized that attention causes tuning changes to expand the representation of attended stimuli at the cost of unattended stimuli. To investigate this issue we used functional MRI (fMRI) to measure how semantic representation changes when searching for different object categories in natural movies. We find that many voxels across occipito-temporal and fronto-parietal cortex shift their tuning toward the attended category. These tuning shifts expand the representation of the attended category and of semantically-related but unattended categories, and compress the representation of categories semantically-dissimilar to the target. Attentional warping of semantic representation occurs even when the attended category is not present in the movie, thus the effect is not a target-detection artifact. These results suggest that attention dynamically alters visual representation to optimize processing of behaviorally relevant objects during natural vision. PMID:23603707
Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S
2014-02-01
Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
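The selection-plus-classification stages of such a pipeline can be sketched as follows, on synthetic data. A Welch t statistic and a plain K-nearest-neighbor rule stand in for the full ThyroScan pipeline; the data and the single-feature selection are assumptions:

```python
import numpy as np

def t_stat(a, b):
    # Welch two-sample t statistic, computed per feature column.
    return (a.mean(0) - b.mean(0)) / np.sqrt(
        a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, (40, 5))       # 40 "normal" images, 5 features
disease = rng.normal(0.0, 1.0, (40, 5))
disease[:, 2] += 3.0                         # only feature 2 is informative

# Keep the feature with the largest |t| (a t-test-based selection step).
keep = np.argsort(-np.abs(t_stat(normal, disease)))[:1]
print("selected feature:", keep[0])          # selected feature: 2

def knn_predict(x, X, y, k=3):
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return np.bincount(y[idx]).argmax()

X = np.vstack([normal[:, keep], disease[:, keep]])
y = np.array([0] * 40 + [1] * 40)
print(knn_predict(np.array([3.0]), X, y))    # 1 (disease-like sample)
```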
Vision-based algorithms for near-host object detection and multilane sensing
NASA Astrophysics Data System (ADS)
Kenue, Surender K.
1995-01-01
Vision-based sensing can be used for lane sensing, adaptive cruise control, collision warning, and driver performance monitoring functions of intelligent vehicles. Current computer vision algorithms are not robust for handling multiple vehicles in highway scenarios. Several new algorithms are proposed for multi-lane sensing, near-host object detection, vehicle cut-in situations, and specifying regions of interest for object tracking. These algorithms were tested successfully on more than 6000 images taken from real-highway scenes under different daytime lighting conditions.
Region Based CNN for Foreign Object Debris Detection on Airfield Pavement
Cao, Xiaoguang; Wang, Peng; Meng, Cai; Gong, Guoping; Liu, Miaoming; Qi, Jun
2018-01-01
In this paper, a novel algorithm based on a convolutional neural network (CNN) is proposed to detect foreign object debris (FOD) using optical imaging sensors. It contains two modules, an improved region proposal network (RPN) and a spatial transformer network (STN) based CNN classifier. In the improved RPN, some extra selection rules are designed and deployed to generate fewer, higher-quality candidates. Moreover, the efficiency of the CNN detector is significantly improved by introducing an STN layer. Compared to faster R-CNN and the single shot multibox detector (SSD), the proposed algorithm achieves better results for FOD detection on airfield pavement in the experiments. PMID:29494524
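The standard building block behind proposal filtering of this kind is greedy non-maximum suppression, which keeps high-scoring boxes and drops heavily overlapping ones. The paper's extra selection rules are not reproduced here; this is only the generic NMS step:

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep each box only if it overlaps no kept box too much."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 too heavily
```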
Reference Directions and Reference Objects in Spatial Memory of a Briefly Viewed Layout
ERIC Educational Resources Information Center
Mou, Weimin; Xiao, Chengli; McNamara, Timothy P.
2008-01-01
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary…
Bae, Seung-Hwan; Yoon, Kuk-Jin
2018-03-01
Online multi-object tracking aims at estimating the tracks of multiple objects instantly with each incoming frame and the information provided up to that moment. It remains a difficult problem in complex scenes because of the large ambiguity in associating multiple objects in consecutive frames and the low discriminability between objects' appearances. In this paper, we propose a robust online multi-object tracking method that can handle these difficulties effectively. We first define the tracklet confidence using the detectability and continuity of a tracklet, and decompose a multi-object tracking problem into small subproblems based on the tracklet confidence. We then solve the online multi-object tracking problem by associating tracklets and detections in different ways according to their confidence values. Based on this strategy, tracklets sequentially grow with online-provided detections, and fragmented tracklets are linked up with others without any iterative and expensive association steps. For more reliable association between tracklets and detections, we also propose a deep appearance learning method to learn a discriminative appearance model from large training datasets, since conventional appearance learning methods do not provide a rich representation that can distinguish multiple objects with large appearance variations. In addition, we combine online transfer learning to improve appearance discriminability by adapting the pre-trained deep model during online tracking. Experiments with challenging public datasets show distinct performance improvements over other state-of-the-art batch and online tracking methods, and demonstrate the effectiveness and usefulness of the proposed methods for online multi-object tracking.
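The tracklet-confidence idea can be sketched as below. The exact combination of detectability and continuity is an illustrative assumption, not the authors' formula:

```python
import math

# Hypothetical confidence: grows with how often the tracklet was actually
# matched to a detection (detectability) and decays with recent gaps
# (continuity). beta is an assumed decay constant.

def tracklet_confidence(n_detections, length, gap, beta=0.3):
    detectability = n_detections / length        # fraction of frames matched
    continuity = math.exp(-beta * gap)           # recent gaps reduce trust
    return detectability * continuity

strong = tracklet_confidence(n_detections=18, length=20, gap=0)
weak = tracklet_confidence(n_detections=6, length=20, gap=5)
print(round(strong, 2), round(weak, 2))          # 0.9 0.07
```

In the paper's strategy, high-confidence tracklets are grown locally with new detections, while low-confidence (fragmented) tracklets are instead linked to other tracklets.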
3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.
Mateo, Carlos M; Gil, Pablo; Torres, Fernando
2016-05-05
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and the object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, our visual pipeline does not rely on deformation models of objects and materials, and it works well with both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed by recognizing a pattern located on the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
Real-time Microseismic Processing for Induced Seismicity Hazard Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matzel, Eric M.
Induced seismicity is inherently associated with underground fluid injections. If fluids are injected in proximity to a pre-existing fault or fracture system, the resulting elevated pressures can trigger dynamic earthquake slip, which could both damage surface structures and create new migration pathways. The goal of this research is to develop a fundamentally better approach to geological site characterization and early hazard detection. We combine innovative techniques for analyzing microseismic data with a physics-based inversion model to forecast microseismic cloud evolution. The key challenge is that faults at risk of slipping are often too small to detect during the site characterization phase. Our objective is to devise fast-running methodologies that will allow field operators to respond quickly to changing subsurface conditions.
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes on a short time scale, i.e., the observations are taken at time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view, and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames into a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin plate spline approach. The results for mosaics are comparable to those of single video frames and are useful for interactive image exploitation due to a larger scene coverage.
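The change-mask computation described above can be sketched directly: threshold a linear combination of intensity and gradient-magnitude difference images. The mean-plus-k-sigma adaptive threshold and the weights are simple stand-ins for the paper's choices:

```python
import numpy as np

def gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def change_mask(img_a, img_b, alpha=0.5, k=2.0):
    """Mask = (alpha * |dI| + (1-alpha) * |d|grad||) > adaptive threshold."""
    d_int = np.abs(img_a.astype(float) - img_b.astype(float))
    d_grad = np.abs(gradient_magnitude(img_a) - gradient_magnitude(img_b))
    d = alpha * d_int + (1 - alpha) * d_grad
    thresh = d.mean() + k * d.std()          # simple adaptive threshold
    return d > thresh

a = np.zeros((32, 32))
b = a.copy()
b[10:14, 10:14] = 1.0                        # a "newly parked vehicle"
mask = change_mask(a, b)
print(mask.sum() > 0, bool(mask[0, 0]))      # True False
```

The registration step the paper emphasizes is the hard part in practice; this sketch assumes the two images are already aligned to a common geometry.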
Rodríguez-Canosa, Gonzalo; Giner, Jaime del Cerro; Barrientos, Antonio
2014-01-01
The detection and tracking of mobile objects (DATMO) is progressively gaining importance for security and surveillance applications. This article proposes a set of new algorithms and procedures for detecting and tracking mobile objects by robots that work collaboratively as part of a multirobot system. These surveillance algorithms are conceived to work with data provided by long-distance range sensors and are intended for highly reliable object detection in wide outdoor environments. Contrary to most common approaches, in which detection and tracking are done by an integrated procedure, the approach proposed here relies on a modular structure, in which detection and tracking are carried out independently, and the latter can accept input data from different detection algorithms. Two movement detection algorithms have been developed for the detection of dynamic objects by using static and/or mobile robots. The solution to the overall problem is based on the use of a Kalman filter to predict the next state of each tracked object. Additionally, new tracking algorithms capable of combining dynamic object lists coming from either one or various sources complete the solution. The complementary performance of the separated modular structure for detection and identification is evaluated, and, finally, a selection of test examples is discussed. PMID:24526305
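The Kalman prediction step at the core of such a tracking module can be sketched with a constant-velocity model. The matrices below are illustrative, not the article's tuned values:

```python
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0],     # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
Q = 0.01 * np.eye(4)             # assumed process-noise covariance

def predict(x, P):
    """Propagate state mean and covariance one time step forward."""
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 0.0, 2.0, 1.0])   # object moving right and up
P = np.eye(4)
x_pred, P_pred = predict(x, P)
print(x_pred[:2])  # [2. 1.]
```

Each tracked object's predicted position is then matched against the newly detected object list before the usual Kalman update corrects the state.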
Common and Innovative Visuals: A sparsity modeling framework for video.
Abdolhosseini Moghadam, Abdolreza; Kumar, Mrityunjay; Radha, Hayder
2014-05-02
Efficient video representation models are critical for many video analysis and processing tasks. In this paper, we present a framework based on the concept of finding the sparsest solution to model video frames. To model the spatio-temporal information, frames from one scene are decomposed into two components: (i) a common frame, which describes the visual information common to all the frames in the scene/segment, and (ii) a set of innovative frames, which depict the dynamic behaviour of the scene. The proposed approach exploits and builds on recent results in the field of compressed sensing to jointly estimate the common frame and the innovative frames for each video segment. We refer to the proposed modeling framework as CIV (Common and Innovative Visuals). We show how the proposed model can be utilized to find scene change boundaries and extend CIV to videos from multiple scenes. Furthermore, the proposed model is robust to noise and can be used for various video processing applications without relying on motion estimation and detection or image segmentation. Results for object tracking, video editing (object removal, inpainting) and scene change detection are presented to demonstrate the efficiency and the performance of the proposed model.
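A greatly simplified stand-in for the CIV decomposition: the paper jointly estimates both components via sparse recovery, but the idea can be illustrated with a per-pixel median as the common frame and the sparse residuals as the innovations:

```python
import numpy as np

# Toy scene: a static background block plus one briefly appearing object.
frames = np.zeros((5, 8, 8))
frames[:, 2:6, 2:6] = 0.5            # static structure common to all frames
frames[3, 0, 0] = 1.0                # a small moving object in frame 3

common = np.median(frames, axis=0)   # robust estimate of the common frame
innovative = frames - common         # sparse: nonzero only where motion is

print(int((np.abs(innovative) > 1e-6).sum()))  # 1 -> one innovative pixel
```

Scene-change detection then falls out naturally: when the innovations stop being sparse, the common-frame model no longer fits and a new segment begins.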
Analysis of dangerous area of single berth oil tanker operations based on CFD
NASA Astrophysics Data System (ADS)
Shi, Lina; Zhu, Faxin; Lu, Jinshu; Wu, Wenfeng; Zhang, Min; Zheng, Hailin
2018-04-01
Taking a single oil tanker berthed and handling liquid cargo as the research object, we analyzed the theory of VOC diffusion during single-berth oil tanker operations, built a mesh model of VOC diffusion with the Gambit preprocessor, set up the simulation boundary conditions, and used the Fluent software to simulate how the VOC concentration at five detection points changes over time under the influence of specific factors. We analyzed the dangerous area of single-berth oil tanker operations through the diffusion of VOCs, so as to ensure the safe operation of oil tankers.
Real-time people and vehicle detection from UAV imagery
NASA Astrophysics Data System (ADS)
Gaszczak, Anna; Breckon, Toby P.; Han, Jiwan
2011-01-01
A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally, we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combined with additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detections. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e., per search-pattern flight path) rather than every object in every image frame. Currently the detection rate is ~70% for people and ~80% for vehicles, although the overall episodic object detection rate for each flight pattern exceeds 90%.
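Cascaded classification of the kind used here can be sketched as a chain of early-rejection stages. The stage tests below are illustrative lambdas, not trained Haar stages:

```python
# A window is rejected as soon as any stage says no, so almost all
# background windows are discarded cheaply, which is what makes
# cascade detectors fast enough for real-time use.

def cascade(window, stages):
    for stage in stages:
        if not stage(window):
            return False             # early rejection
    return True                      # survived all stages: detection

stages = [
    lambda w: w["mean_intensity"] > 0.2,   # e.g., warm region in thermal data
    lambda w: w["aspect_ratio"] < 3.0,     # e.g., plausible vehicle shape
]
print(cascade({"mean_intensity": 0.6, "aspect_ratio": 1.8}, stages))  # True
print(cascade({"mean_intensity": 0.1, "aspect_ratio": 1.8}, stages))  # False
```

A trained cascade replaces these hand-written tests with boosted Haar-feature classifiers, each tuned for a very high per-stage detection rate.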
Novel CT-based objective imaging biomarkers of long term radiation-induced lung damage.
Veiga, Catarina; Landau, David; Devaraj, Anand; Doel, Tom; White, Jared; Ngai, Yenting; Hawkes, David J; McClelland, Jamie R
2018-06-14
Background and Purpose: Recent improvements in lung cancer survival have spurred an interest in understanding and minimizing long term radiation-induced lung damage (RILD). However, there are still no objective criteria to quantify RILD, leading to variable reporting across centres and trials. We propose a set of objective imaging biomarkers to quantify common radiological findings observed 12 months after lung cancer radiotherapy (RT). Baseline and 12-month CT scans of 27 patients from a phase I/II clinical trial of isotoxic chemoradiation were included in this study. To detect and measure the severity of RILD, twelve quantitative imaging biomarkers were developed. These describe basic CT findings, including parenchymal change, volume reduction and pleural change. The imaging biomarkers were implemented as semi-automated image analysis pipelines and assessed against visual assessment of the occurrence of each change. The majority of the biomarkers were measurable in each patient. Their continuous nature allows objective scoring of severity for each patient. For each imaging biomarker the cohort was split into two groups according to the presence or absence of the biomarker on visual assessment, testing the hypothesis that the imaging biomarkers were different in these two groups. All features were statistically significant except for rotation of the main bronchus and diaphragmatic curvature. The majority of the biomarkers were not strongly correlated with each other, suggesting that each of the biomarkers measures a separate element of RILD pathology. We developed objective CT-based imaging biomarkers that quantify the severity of radiological lung damage after RT. These biomarkers are representative of typical radiological findings of RILD. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Ding, Peng; Zhang, Ye; Deng, Wei-Jian; Jia, Ping; Kuijper, Arjan
2018-07-01
Detection of objects from satellite optical remote sensing images is very important for many commercial and governmental applications. With the development of deep convolutional neural networks (deep CNNs), the field of object detection has seen tremendous advances. Currently, objects in satellite remote sensing images can be detected using deep CNNs. In general, optical remote sensing images contain many dense and small objects, and the use of the original Faster R-CNN framework does not yield a suitably high precision. Therefore, after careful analysis we adopt dense convolutional networks, a multi-scale representation and various combinations of improvement schemes to enhance the structure of the base VGG16-Net for improving the precision. We propose an approach to reduce the test-time (detection time) and memory requirements. To validate the effectiveness of our approach, we perform experiments using satellite remote sensing image datasets of aircraft and automobiles. The results show that the improved network structure can detect objects in satellite optical remote sensing images more accurately and efficiently.
Multi-Sensor Fusion and Enhancement for Object Detection
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur
2005-01-01
This was a quick, multi-week effort to investigate the ability to detect changes along the flight path of an unmanned airborne vehicle (UAV) over time. Video was acquired by the UAV during several passes over the same terrain. Concurrently, GPS data and UAV attitude data were also acquired. The purpose of the research was to use information from all of these sources to detect whether any change had occurred in the terrain encompassed by the flight path.
A neighboring structure reconstructed matching algorithm based on LARK features
NASA Astrophysics Data System (ADS)
Xue, Taobei; Han, Jing; Zhang, Yi; Bai, Lianfa
2015-11-01
To address the low contrast ratio and high noise of infrared images, and the randomness and ambient occlusion of their objects, this paper presents a neighboring structure reconstructed matching (NSRM) algorithm based on LARK features. The neighboring structure relationships of the local window are modeled with a non-negative linear reconstruction method to build a neighboring structure relationship matrix. The LARK feature matrix and the NSRM matrix are then processed separately to obtain two different similarity images. By fusing and analyzing the two similarity images, infrared objects are detected and marked by non-maximum suppression. The NSRM approach is also extended to detect infrared objects with incompact structure. High performance is demonstrated on an infrared body dataset, with a lower false detection rate than conventional methods in complex natural scenes.
Reducing Runway Incursions: Can You Relate?
DOT National Transportation Integrated Search
1992-01-01
Side object detection systems (SODS) are collision warning systems which alert drivers to the presence of traffic alongside their vehicle within defined detection zones. The intent of SODS is to reduce collisions during lane changes and merging maneu...
Al-Janabi, Shahd; Greenberg, Adam S
2016-10-01
The representational basis of attentional selection can be object-based. Various studies have suggested, however, that object-based selection is less robust than spatial selection across experimental paradigms. We sought to examine the manner by which the following factors might explain this variation: Target-Object Integration (targets 'on' vs. part 'of' an object), Attention Distribution (narrow vs. wide), and Object Orientation (horizontal vs. vertical). In Experiment 1, participants discriminated between two targets presented 'on' an object in one session, or presented as a change 'of' an object in another session. There was no spatial cue-thus, attention was initially focused widely-and the objects were horizontal or vertical. We found evidence of object-based selection only when targets constituted a change 'of' an object. Additionally, object orientation modulated the sign of object-based selection: We observed a same-object advantage for horizontal objects, but a same-object cost for vertical objects. In Experiment 2, an informative cue preceded a single target presented 'on' an object or as a change 'of' an object (thus, attention was initially focused narrowly). Unlike in Experiment 1, we found evidence of object-based selection independent of target-object integration. We again found that the sign of selection was modulated by the objects' orientation. This result may reflect a meridian effect, which emerged due to anisotropies in the cortical representations when attention is oriented endogenously. Experiment 3 revealed that object orientation did not modulate object-based selection when attention was oriented exogenously. Our findings suggest that target-object integration, attention distribution, and object orientation modulate object-based selection, but only in combination.
NASA Astrophysics Data System (ADS)
Gohatre, Umakant Bhaskar; Patil, Venkat P.
2018-04-01
In computer vision, the real-time detection and tracking of multiple objects is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Object detection leads toward following the moving objects in video, and object representation is a step in tracking. Recognizing multiple objects from a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving multiple objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. However, image registration is not well suited to handling events, which can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties. Matching between graph sequences is then performed using multi-graph matching, and matched-region labeling is obtained by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The design is robust to unknown transformations, with significant improvement over existing work on real-time detection of moving multiple objects.
Optimizing a neural network for detection of moving vehicles in video
NASA Astrophysics Data System (ADS)
Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri
2017-10-01
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
Hardman, Kyle O; Cowan, Nelson
2015-03-01
Visual working memory stores stimuli from our environment as representations that can be accessed by high-level control processes. This study addresses a longstanding debate in the literature about whether storage limits in visual working memory include a limit to the complexity of discrete items. We examined the issue with a number of change-detection experiments that used complex stimuli that possessed multiple features per stimulus item. We manipulated the number of relevant features of the stimulus objects in order to vary feature load. In all of our experiments, we found that increased feature load led to a reduction in change-detection accuracy. However, we found that feature load alone could not account for the results but that a consideration of the number of relevant objects was also required. This study supports capacity limits for both feature and object storage in visual working memory. PsycINFO Database Record (c) 2015 APA, all rights reserved.
A theoretical Gaussian framework for anomalous change detection in hyperspectral images
NASA Astrophysics Data System (ADS)
Acito, Nicola; Diani, Marco; Corsini, Giovanni
2017-10-01
Exploitation of temporal series of hyperspectral images is a relatively new discipline that has a wide variety of possible applications in fields like remote sensing, area surveillance, defense and security, search and rescue and so on. In this work, we discuss how images taken at two different times can be processed to detect changes caused by insertion, deletion or displacement of small objects in the monitored scene. This problem is known in the literature as anomalous change detection (ACD) and it can be viewed as the extension, to the multitemporal case, of the well-known anomaly detection problem in a single image. In fact, in both cases, the hyperspectral images are processed blindly in an unsupervised manner and without a priori knowledge about the target spectrum. We introduce the ACD problem using an approach based on statistical decision theory and we derive a common framework including different ACD approaches. In particular, we clearly define the observation space, the data statistical distribution conditioned on the two competing hypotheses, and the procedure followed to arrive at the solution. The proposed overview places emphasis on techniques based on the multivariate Gaussian model, which allows a formal presentation of the ACD problem and the rigorous derivation of the possible solutions in a way that is both mathematically more tractable and easier to interpret. We also discuss practical problems related to the application of the detectors in the real world and present affordable solutions. Namely, we describe the ACD processing chain, including the strategies that are commonly adopted to compensate pervasive radiometric changes, caused by the different illumination/atmospheric conditions, and to mitigate the residual geometric image co-registration errors. Results obtained on real, freely available data are discussed in order to test and compare the methods within the proposed general framework.
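As an illustration of the Gaussian framing discussed above, the following is a minimal numpy sketch of one joint-Gaussian anomalous change detector: pixels from the two dates are stacked, a Gaussian is fitted, and the anomalousness of the joint vector is discounted by the marginal anomalousness of each date. The synthetic images, noise level and planted change are assumptions for illustration, not the authors' data or exact processing chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bi-temporal "hyperspectral" pixels: N pixels, B bands per date.
N, B = 5000, 4
x = rng.normal(size=(N, B))                  # date-1 image pixels
y = x + 0.1 * rng.normal(size=(N, B))        # date-2: pervasive small change
y[0] = -x[0]                                 # one planted anomalous change

def mahalanobis(z):
    """Squared Mahalanobis distance of each row under a fitted Gaussian."""
    d = z - z.mean(axis=0)
    inv = np.linalg.inv(np.cov(z, rowvar=False))
    return np.einsum('ij,jk,ik->i', d, inv, d)

# Anomalousness of the stacked pixel [x; y], discounted by the
# anomalousness of each date on its own.
scores = mahalanobis(np.hstack([x, y])) - mahalanobis(x) - mahalanobis(y)
print(int(np.argmax(scores)))
```

The pervasive 0.1-level change is absorbed into the fitted joint covariance, so only the pixel that violates the learned cross-date correlation scores highly.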
CB Database: A change blindness database for objects in natural indoor scenes.
Sareen, Preeti; Ehinger, Krista A; Wolfe, Jeremy M
2016-12-01
Change blindness has been a topic of interest in cognitive sciences for decades. Change detection experiments are frequently used for studying various research topics such as attention and perception. However, creating change detection stimuli is tedious and there is no open repository of such stimuli using natural scenes. We introduce the Change Blindness (CB) Database with object changes in 130 colored images of natural indoor scenes. The size and eccentricity are provided for all the changes as well as reaction time data from a baseline experiment. In addition, we have two specialized satellite databases that are subsets of the 130 images. In one set, changes are seen in rooms or in mirrors in those rooms (Mirror Change Database). In the other, changes occur in a room or out a window (Window Change Database). Both the sets have controlled background, change size, and eccentricity. The CB Database is intended to provide researchers with a stimulus set of natural scenes with defined stimulus parameters that can be used for a wide range of experiments. The CB Database can be found at http://search.bwh.harvard.edu/new/CBDatabase.html.
Monitoring gypsy moth defoliation by applying change detection techniques to Landsat imagery
NASA Technical Reports Server (NTRS)
Williams, D. L.; Stauffer, M. L.
1978-01-01
The overall objective of a research effort at NASA's Goddard Space Flight Center is to develop and evaluate digital image processing techniques that will facilitate the assessment of the intensity and spatial distribution of forest insect damage in Northeastern U.S. forests using remotely sensed data from Landsats 1, 2 and C. Automated change detection techniques are presently being investigated as a method of isolating the areas of change in the forest canopy resulting from pest outbreaks. In order to follow the change detection approach, Landsat scene correction and overlay capabilities are utilized to provide multispectral/multitemporal image files of 'defoliation' and 'nondefoliation' forest stand conditions.
Geographic Object-Based Image Analysis - Towards a new paradigm.
Blaschke, Thomas; Hay, Geoffrey J; Kelly, Maggi; Lang, Stefan; Hofmann, Peter; Addink, Elisabeth; Queiroz Feitosa, Raul; van der Meer, Freek; van der Werff, Harald; van Coillie, Frieke; Tiede, Dirk
2014-01-01
The amount of scientific literature on (Geographic) Object-based Image Analysis - GEOBIA has been and still is sharply increasing. These approaches to analysing imagery have antecedents in earlier research on image segmentation and use GIS-like spatial analysis within classification and feature extraction approaches. This article investigates these developments and their implications and asks whether or not this is a new paradigm in remote sensing and Geographic Information Science (GIScience). We first discuss several limitations of prevailing per-pixel methods when applied to high resolution images. Then we explore the paradigm concept developed by Kuhn (1962) and discuss whether GEOBIA can be regarded as a paradigm according to this definition. We crystallize core concepts of GEOBIA, including the role of objects, of ontologies and the multiplicity of scales, and we discuss how these conceptual developments support important methods in remote sensing such as change detection and accuracy assessment. The ramifications of the different theoretical foundations between the 'per-pixel paradigm' and GEOBIA are analysed, as are some of the challenges along this path from pixels, to objects, to geo-intelligence. Based on several paradigm indications as defined by Kuhn and based on an analysis of peer-reviewed scientific literature we conclude that GEOBIA is a new and evolving paradigm.
Efficient method of image edge detection based on FSVM
NASA Astrophysics Data System (ADS)
Cai, Aiping; Xiong, Xiaomei
2013-07-01
For efficient object edge detection in digital images, this paper studies traditional methods and an algorithm based on SVM. Analysis shows that the Canny edge detection algorithm suffers from pseudo-edges and poor noise robustness. To provide a reliable edge extraction method, a new detection algorithm based on a fuzzy support vector machine (FSVM) is proposed. It contains several steps: first, classification samples are trained and different membership functions are assigned to different samples. Then, a new training sample set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that a good edge detection image is obtained, and added-noise experiments show that this method has good noise robustness.
NASA Astrophysics Data System (ADS)
Hildreth, E. C.
1985-09-01
For both biological systems and machines, vision begins with a large and unwieldy array of measurements of the amount of light reflected from surfaces in the environment. The goal of vision is to recover physical properties of objects in the scene, such as the location of object boundaries and the structure, color and texture of object surfaces, from the two-dimensional image that is projected onto the eye or camera. This goal is not achieved in a single step: vision proceeds in stages, with each stage producing increasingly more useful descriptions of the image and then the scene. The first clues about the physical properties of the scene are provided by the changes of intensity in the image. The importance of intensity changes and edges in early visual processing has led to extensive research on their detection, description and use, both in computer and biological vision systems. This article reviews some of the theory that underlies the detection of edges, and the methods used to carry out this analysis.
Priming effects under correct change detection and change blindness.
Caudek, Corrado; Domini, Fulvio
2013-03-01
In three experiments, we investigated the priming effects induced by an image change on a successive animate/inanimate decision task. We studied both perceptual (Experiments 1 and 2) and conceptual (Experiment 3) priming effects, under correct change detection and change blindness (CB). Under correct change detection, we found larger positive priming effects on congruent trials for probes representing animate entities than for probes representing artifactual objects. Under CB, we found performance impairment relative to a "no-change" baseline condition. This inhibition effect induced by CB was modulated by the semantic congruency between the changed item and the probe in the case of probe images, but not for probe words. We discuss our results in the context of the literature on the negative priming effect. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Alvarez, Mar; Fariña, David; Escuela, Alfonso M.; Sendra, Jose Ramón; Lechuga, Laura M.
2013-01-01
We have developed a hybrid platform that combines two well-known biosensing technologies based on quite different transducer principles: surface plasmon resonance and nanomechanical sensing. The new system allows the simultaneous and real-time detection of two independent parameters, refractive index change (Δn), and surface stress change (Δσ) when a biomolecular interaction takes place. Both parameters have a direct relation with the mass coverage of the sensor surface. The core of the platform is a common fluid cell, where the solution arrives to both sensor areas at the same time and under the same conditions (temperature, velocity, diffusion, etc.). The main objective of this integration is to achieve a better understanding of the physical behaviour of the transducers during sensing, increasing the information obtained in real time in one single experiment. The potential of the hybrid platform is demonstrated by the detection of DNA hybridization.
Threshold-adaptive canny operator based on cross-zero points
NASA Astrophysics Data System (ADS)
Liu, Boqi; Zhang, Xiuhua; Hong, Hanyu
2018-03-01
Canny edge detection [1] is a technique to extract useful structural information from different vision objects while dramatically reducing the amount of data to be processed. It has been widely applied in various computer vision systems. Two thresholds have to be set before edges are segregated from the background; usually, two static values are chosen based on the developers' experience [2]. In this paper, a novel automatic thresholding method is proposed. The relation between the thresholds and cross-zero points is analyzed, and an interpolation function is deduced to determine the thresholds. Comprehensive experimental results demonstrate the effectiveness of the proposed method and its advantage for stable edge detection under changing illumination.
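The paper's cross-zero interpolation is specific to its method; the sketch below only illustrates the general idea of data-driven Canny thresholds, deriving the hysteresis pair from percentiles of the nonzero Sobel gradient magnitudes. The percentile values and the synthetic step-edge image are assumptions for illustration.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + H - 2, j:j + W - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def auto_thresholds(mag, low_pct=70, high_pct=90):
    """Hysteresis pair from percentiles of the nonzero magnitudes."""
    nz = mag[mag > 0]
    return np.percentile(nz, low_pct), np.percentile(nz, high_pct)

# Synthetic image with a single vertical step edge.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
mag = sobel_magnitude(img)
low, high = auto_thresholds(mag)
strong = mag >= high                 # pixels kept as definite edges
print(int(strong.sum()), low <= high)
```

Because the thresholds come from the image's own gradient statistics rather than fixed constants, the same code adapts when the overall illumination (and hence gradient scale) changes.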
Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.
Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng
2018-03-04
With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
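The correlation loss above relies on track IDs and detection labels from ImageNet VID; as a simplified stand-in, a temporal-consistency term between neighboring-frame feature maps can be sketched as one minus the mean per-location cosine similarity. The feature-map shapes here are arbitrary assumptions, and this is not the authors' exact loss.

```python
import numpy as np

def correlation_loss(feat_t, feat_t1):
    """1 - mean cosine similarity between per-location feature vectors
    of two neighboring frames (a temporal-consistency penalty)."""
    a = feat_t.reshape(-1, feat_t.shape[-1])
    b = feat_t1.reshape(-1, feat_t1.shape[-1])
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
    return 1.0 - float(np.mean(num / den))

rng = np.random.default_rng(1)
f_t = rng.normal(size=(7, 7, 32))            # feature map at frame t
same = correlation_loss(f_t, f_t)            # identical features: near zero
diff = correlation_loss(f_t, rng.normal(size=(7, 7, 32)))
print(same < 1e-4, diff > same)
```

Minimizing such a term during training pushes the network to produce similar features for the same object across consecutive frames, which is the co-occurrence idea the abstract describes.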
Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems.
Oh, Sang-Il; Kang, Hang-Bong
2017-01-22
To understand driving environments effectively, it is important to achieve accurate detection and classification of objects detected by sensor-based intelligent vehicle systems, which are significantly important tasks. Object detection is performed for the localization of objects, whereas object classification recognizes object classes from detected object regions. For accurate object detection and classification, fusing multiple sensor information into a key component of the representation and perception processes is necessary. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers, such as 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are the CNN with five layers, which use more than two pre-trained convolutional layers to consider local to global features as data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation to realize color flattening and semantic grouping for charge-coupled device and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on a KITTI benchmark dataset to detect and classify three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than the previous methods. Our proposed method extracted approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracted approximately 10^6 × n proposals. We obtained classification performance with 77.72% mean average precision over the entirety of the classes in the moderate detection level of the KITTI benchmark dataset.
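ROI pooling, as used above to map variable-size object candidate regions onto fixed-size layer outputs, can be sketched for a single-channel feature map as follows. The 8x8 map, the 2x2 output size and max-pooling are illustrative assumptions; the paper applies it per convolutional layer and per sensor.

```python
import numpy as np

def roi_pool(feat, roi, out=2):
    """Max-pool the region roi = (r0, c0, r1, c1) of a 2-D feature map
    down to an out x out grid, whatever the region's size."""
    r0, c0, r1, c1 = roi
    region = feat[r0:r1, c0:c1]
    rows = np.array_split(np.arange(region.shape[0]), out)
    cols = np.array_split(np.arange(region.shape[1]), out)
    return np.array([[region[np.ix_(r, c)].max() for c in cols]
                     for r in rows])

feat = np.arange(64, dtype=float).reshape(8, 8)
print(roi_pool(feat, (0, 0, 4, 4)))   # -> [[ 9. 11.] [25. 27.]]
```

Because every proposal is reduced to the same out x out grid, downstream fully-connected classifier layers can consume proposals of any size.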
3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands
Mateo, Carlos M.; Gil, Pablo; Torres, Fernando
2016-01-01
Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks, to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object’s surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand’s fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand, because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments. PMID:27164102
Detection and clustering of features in aerial images by neuron network-based algorithm
NASA Astrophysics Data System (ADS)
Vozenilek, Vit
2015-12-01
The paper presents an algorithm for detection and clustering of features in aerial photographs based on artificial neural networks. The presented approach is not focused on the detection of specific topographic features, but on the combination of general feature analysis and its use for clustering and backward projection of clusters to the aerial image. The basis of the algorithm is a calculation of the total error of the network and a change of the network's weights to minimize the error. A classic bipolar sigmoid was used as the activation function of the neurons, and the basic method of backpropagation was used for learning. To verify that a set of features is able to represent the image content from the user's perspective, a web application was compiled (ASP.NET on the Microsoft .NET platform). The main achievements include the knowledge that man-made objects in aerial images can be successfully identified by detection of shapes and anomalies. It was also found that an appropriate combination of comprehensive features that describe the colors and selected shapes of individual areas can be useful for image analysis.
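The learning scheme described above (bipolar sigmoid activations, minimization of the total network error by backpropagation-driven weight changes) can be sketched for a tiny network. The XOR-style data, layer sizes and learning rate are assumptions for illustration, not the paper's aerial-feature network.

```python
import numpy as np

rng = np.random.default_rng(4)

f = np.tanh                      # bipolar sigmoid (tanh-shaped)
df = lambda a: 1.0 - a ** 2      # derivative in terms of the activation

# Tiny 2-4-1 network trained on XOR with bipolar targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
t = np.array([[-1], [1], [1], [-1]], float)
W1 = rng.normal(0, 0.5, (2, 4))
W2 = rng.normal(0, 0.5, (4, 1))

def loss():
    """Total (mean squared) error of the network on the training set."""
    return float(np.mean((f(f(X @ W1) @ W2) - t) ** 2))

initial_loss = loss()
for _ in range(2000):
    h = f(X @ W1)                # forward pass, hidden layer
    y = f(h @ W2)                # forward pass, output layer
    g2 = (y - t) * df(y)         # backpropagate the total error
    g1 = (g2 @ W2.T) * df(h)
    W2 -= 0.1 * h.T @ g2         # weight changes that reduce the error
    W1 -= 0.1 * X.T @ g1
final_loss = loss()

print(final_loss < initial_loss)
```

Each iteration computes the error gradient layer by layer and nudges the weights against it, which is exactly the "total error plus weight change" loop the abstract describes.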
NASA Astrophysics Data System (ADS)
Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan
2018-03-01
High resolution image change detection is one of the key technologies of remote sensing application, which is of great significance for resource survey, environmental monitoring, fine agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the possibility of different machine learning applications in change detection. To compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM achieves higher overall accuracy than RF, Adaboost, and DBN with small samples for binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than Adaboost, SVM and DBN.
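A minimal sketch of supervised bi-temporal change detection in the spirit of the comparison above, with a simple trained threshold on the date difference standing in for the RF/SVM/DBN/Adaboost models. All data here is synthetic; real pipelines would use labeled pixels from co-registered satellite images.

```python
import numpy as np

rng = np.random.default_rng(2)

# Bi-temporal features: each sample stacks a pixel's 3 bands at both dates.
n = 400
t1a = rng.normal(size=(n, 3))
X_nc = np.hstack([t1a, t1a + rng.normal(0.0, 0.1, (n, 3))])   # no change
t1b = rng.normal(size=(n, 3))
X_c = np.hstack([t1b, t1b + rng.normal(1.0, 0.3, (n, 3))])    # changed
X = np.vstack([X_nc, X_c])
y = np.r_[np.zeros(n), np.ones(n)]

# Classifier stand-in: threshold the absolute date difference, with the
# threshold learned as the midpoint between the two class means.
d = np.abs(X[:, 3:] - X[:, :3]).sum(axis=1)
thr = 0.5 * (d[y == 0].mean() + d[y == 1].mean())
acc = float((((d > thr).astype(float)) == y).mean())
print(acc > 0.9)
```

The RF/SVM/DBN/Adaboost models in the paper would be dropped in at the classifier step, consuming the same stacked bi-temporal feature vectors and change/no-change labels.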
Ernst, Zachary Raymond; Palmer, John; Boynton, Geoffrey M.
2012-01-01
In object-based attention, it is easier to divide attention between features within a single object than between features across objects. In this study we test the predictions of several capacity models in order to best characterize the cost of dividing attention between objects. Here we studied behavioral performance on a divided attention task in which subjects attended to the motion and luminance of overlapping random dot kinematograms, specifically red upward-moving dots superimposed with green downward-moving dots. Subjects were required to detect brief changes (transients) in the motion or luminance within the same surface or across different surfaces. There were two primary results. First, the dual-task deficit was large when attention was divided across two surfaces and near zero when attention was divided within a surface. This is consistent with limited-capacity processing across surfaces and unlimited-capacity processing within a surface—a pattern predicted by established theories of object-based attention. Second and unexpectedly, there was evidence of crosstalk between features: when cued to monitor transients on one surface, response rates were inflated by the presence of a transient on the other surface. Such crosstalk is a failure of selective attention between surfaces. PMID:23149301
Ladar imaging detection of salient map based on PWVD and Rényi entropy
NASA Astrophysics Data System (ADS)
Xu, Yuannan; Zhao, Yuan; Deng, Rong; Dong, Yanbing
2013-10-01
Spatial-frequency information in a given image can be extracted by associating the grey-level spatial data with one of the well-known spatial/spatial-frequency distributions. The Wigner-Ville distribution (WVD) has the useful property that images can be represented jointly in the spatial and spatial-frequency domains. For ladar intensity and range images, the statistical properties of the Rényi entropy are studied through the pseudo Wigner-Ville distribution (PWVD) using one- or two-dimensional windows. We also analyze how the statistical properties of the Rényi entropy change in ladar intensity and range images when man-made objects appear. On this foundation, a novel method for generating a saliency map based on the PWVD and Rényi entropy is proposed. Target detection is then completed by segmenting the saliency map with a simple and convenient threshold method. For ladar intensity and range images, experimental results show that the proposed method can effectively detect military vehicles against complex ground backgrounds with a low false alarm rate.
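The Rényi entropy statistic at the heart of this method can be illustrated on local grey-level windows. This is a hedged sketch on synthetic data: the entropy order, bin count, and the use of a plain grey-level histogram (rather than the PWVD coefficients the paper uses) are simplifying assumptions.

```python
import numpy as np

def renyi_entropy(window, alpha=3.0, bins=32):
    # histogram of grey levels in the local window -> probability estimate
    hist, _ = np.histogram(window, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    # Renyi entropy of order alpha: H = log2(sum p_i^alpha) / (1 - alpha)
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

rng = np.random.default_rng(1)
background = rng.uniform(0.4, 0.6, size=(16, 16))      # low-contrast clutter
target = np.zeros((16, 16))
target[4:12, 4:12] = 1.0                               # man-made-like structure
h_bg = renyi_entropy(background)
h_tg = renyi_entropy(target)
```

The structured (man-made-like) window concentrates its grey levels in a few bins and therefore yields a lower entropy than the cluttered background window; it is this kind of statistical shift that a saliency map built from local entropies can exploit.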
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections.
As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
ERIC Educational Resources Information Center
Niemann, Katja; Wolpers, Martin
2015-01-01
In this paper, we introduce a new way of detecting semantic similarities between learning objects by analysing their usage in web portals. Our approach relies on the usage-based relations between the objects themselves rather than on the content of the learning objects or on the relations between users and learning objects. We then take this new…
Track Everything: Limiting Prior Knowledge in Online Multi-Object Recognition.
Wong, Sebastien C; Stamatescu, Victor; Gatt, Adam; Kearney, David; Lee, Ivan; McDonnell, Mark D
2017-10-01
This paper addresses the problem of online tracking and classification of multiple objects in an image sequence. Our proposed solution is to first track all objects in the scene without relying on object-specific prior knowledge, which in other systems can take the form of hand-crafted features or user-based track initialization. We then classify the tracked objects with a fast-learning image classifier that is based on a shallow convolutional neural network architecture, and demonstrate that object recognition improves when this is combined with object state information from the tracking algorithm. We argue that by transferring the use of prior knowledge from the detection and tracking stages to the classification stage, we can design a robust, general purpose object recognition system with the ability to detect and track a variety of object types. We describe our biologically inspired implementation, which adaptively learns the shape and motion of tracked objects, and apply it to the Neovision2 Tower benchmark data set, which contains multiple object types. An experimental evaluation demonstrates that our approach is competitive with state-of-the-art video object recognition systems that do make use of object-specific prior knowledge in detection and tracking, while providing additional practical advantages by virtue of its generality.
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that are then integrated into an image registration framework, is presented. A marked-point-process-based method has been developed to model the spatial distribution of elliptical objects (i.e., the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aimed at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize the computational time.
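The modified Hausdorff distance underlying the fitness function can be sketched as follows (the Dubuisson-Jain form, which replaces the max over points with a mean; the point sets here are hypothetical crater centres, not the paper's data):

```python
import numpy as np

def modified_hausdorff(A, B):
    # Dubuisson-Jain modified Hausdorff distance between two point sets.
    # Each row of A and B is a 2-D point (e.g., a detected crater centre).
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    forward = d.min(axis=1).mean()    # mean distance from A to nearest point of B
    backward = d.min(axis=0).mean()   # mean distance from B to nearest point of A
    return max(forward, backward)

A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
B = A + 0.1    # the same constellation, slightly shifted
```

Unlike the classical Hausdorff distance, the averaged form is robust to a single outlier point, which is why it is popular as a matching score in registration.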
Attention to Attributes and Objects in Working Memory
ERIC Educational Resources Information Center
Cowan, Nelson; Blume, Christopher L.; Saults, J. Scott
2013-01-01
It has been debated on the basis of change-detection procedures whether visual working memory is limited by the number of objects, task-relevant attributes within those objects, or bindings between attributes. This debate, however, has been hampered by several limitations, including the use of conditions that vary between studies and the absence…
Wang, Baofeng; Qi, Zhiquan; Chen, Sizhong; Liu, Zhaodu; Ma, Guocheng
2017-01-01
Vision-based vehicle detection is an important issue for advanced driver assistance systems. In this paper, we present an improved multi-vehicle detection and tracking method using cascade Adaboost and an Adaptive Kalman filter (AKF) with target identity awareness. A cascade Adaboost classifier using Haar-like features was built for vehicle detection, followed by a more comprehensive verification process which could refine the vehicle hypothesis in terms of both location and dimension. In vehicle tracking, each vehicle was tracked with an independent identity by an Adaptive Kalman filter in collaboration with a data association approach. The AKF adaptively adjusted the measurement and process noise covariances through on-line stochastic modelling to compensate for dynamics changes. The data association correctly assigned detections to tracks using the global nearest neighbour (GNN) algorithm while considering the local validation. During tracking, a temporal-context-based track management was proposed to decide whether to initiate, maintain or terminate the tracks of different objects, thus suppressing sparse false alarms and compensating for temporary detection failures. Finally, the proposed method was tested on various challenging real roads, and the experimental results showed that the vehicle detection performance was greatly improved with higher accuracy and robustness. PMID:28296902
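The idea of adapting a Kalman filter's measurement-noise covariance on-line from the innovation sequence can be sketched in one dimension. This is a simplified illustration, not the paper's AKF: the constant-velocity model, sliding-window length, and synthetic measurements are all assumptions.

```python
import numpy as np

# 1-D constant-velocity Kalman filter whose measurement-noise covariance R
# is re-estimated on-line from a sliding window of innovations.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we measure position only
Q = 1e-4 * np.eye(2)
R = np.array([[1.0]])                     # initial guess, adapted below
x = np.zeros((2, 1))
P = np.eye(2)

rng = np.random.default_rng(2)
truth = np.arange(100) * 0.5              # target moving at constant speed
zs = truth + rng.normal(0, 0.2, size=100) # noisy position measurements

innovations = []
for z in zs:
    x = F @ x                             # predict
    P = F @ P @ F.T + Q
    nu = np.array([[z]]) - H @ x          # innovation (measurement residual)
    innovations.append(float(nu[0, 0]))
    if len(innovations) >= 10:            # adapt R from the innovation window:
        C = float(np.mean(np.square(innovations[-10:])))
        hpht = float((H @ P @ H.T)[0, 0])
        R = np.array([[max(C - hpht, 1e-6)]])
    S = H @ P @ H.T + R                   # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ nu
    P = (np.eye(2) - K @ H) @ P
```

The window statistic `C` estimates the innovation covariance, so `R` shrinks or grows to match the noise actually observed, which is the "on-line stochastic modelling" idea in the abstract.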
A Generalized Machine Fault Detection Method Using Unified Change Detection
2014-10-02
…of the extension shaft. It can be induced by a lack of tightening torque of the end-nut and consequently causes a load… Test Facility (HTTF). The objective of the study was to provide HUMS systems with the capability to detect the loss of tightening torque of the end-nut… from pinion SSA (at Ring-Front sensor and cruise power), change signal with cross-over at 75th shaft order. Ten end-nut tightening torques were used in…
LiDAR Vegetation Investigation and Signature Analysis System (LVISA)
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Koenig, Kristina; Griesbaum, Luisa; Kiefer, Andreas; Hämmerle, Martin; Eitel, Jan; Koma, Zsófia
2015-04-01
Our physical environment undergoes constant changes in space and time with strongly varying triggers, frequencies, and magnitudes. Monitoring these environmental changes is crucial to improve our scientific understanding of complex human-environmental interactions and helps us to respond to environmental change by adaptation or mitigation. The three-dimensional (3D) description of Earth surface features and the detailed monitoring of surface processes using 3D spatial data have gained increasing attention within the last decades, such as in climate change research (e.g., glacier retreat), carbon sequestration (e.g., forest biomass monitoring), precision agriculture and natural hazard management. In all those areas, 3D data have helped to improve our process understanding by allowing us to quantify the structural properties of earth surface features and their changes over time. This advancement has been fostered by technological developments and the increased availability of 3D sensing systems. In particular, LiDAR (light detection and ranging) technology, also referred to as laser scanning, has made significant progress and has evolved into an operational tool in environmental research and geosciences. The main result of LiDAR measurements is a highly spatially resolved 3D point cloud. Each point within the LiDAR point cloud has an XYZ coordinate associated with it and often additional information such as the strength of the returned backscatter. The point cloud provided by LiDAR contains rich geospatial, structural, and potentially biochemical information about the surveyed objects. To deal with the inherently unorganized datasets and the large data volume (frequently millions of XYZ coordinates) of LiDAR datasets, a multitude of algorithms for automatic 3D object detection (e.g., of single trees) and physical surface description (e.g., biomass) have been developed.
However, so far the exchange of datasets and approaches (i.e., extraction algorithms) among LiDAR users lags behind. We propose a novel concept, the LiDAR Vegetation Investigation and Signature Analysis System (LVISA), which shall enhance sharing of i) reference datasets of single vegetation objects with rich reference data (e.g., plant species, basic plant morphometric information) and ii) approaches for information extraction (e.g., single tree detection, tree species classification based on waveform LiDAR features). We will build an extensive LiDAR data repository to support the development and benchmarking of LiDAR-based object information extraction. LVISA uses international web service standards (Open Geospatial Consortium, OGC) for geospatial data access and analysis (e.g., OGC Web Processing Services). This will allow the research community to identify plant-object-specific vegetation features from LiDAR data, while accounting for differences in LiDAR systems (e.g., beam divergence), settings (e.g., point spacing), and calibration techniques. The goal of LVISA is to develop generic 3D information extraction approaches that can be seamlessly transferred to other datasets, time stamps, and extraction tasks. The current prototype of LVISA can be visited and tested online via http://uni-heidelberg.de/lvisa. Video tutorials provide a quick overview of and entry into the functionality of LVISA. We will present the current advances of LVISA and highlight future research and extensions, such as integrating low-cost LiDAR data and datasets acquired by scanning vegetation at high temporal frequency (e.g., continuous measurements). Everybody is invited to join the LVISA development and share datasets and analysis approaches in an interoperable way via the web-based LVISA geoportal.
Detection of blob objects in microscopic zebrafish images based on gradient vector diffusion.
Li, Gang; Liu, Tianming; Nie, Jingxin; Guo, Lei; Malicki, Jarema; Mara, Andrew; Holley, Scott A; Xia, Weiming; Wong, Stephen T C
2007-10-01
The zebrafish has become an important vertebrate animal model for the study of developmental biology, functional genomics, and disease mechanisms. It is also being used for drug discovery. Computerized detection of blob objects has been one of the important tasks in quantitative phenotyping of zebrafish. We present a new automated method that is able to detect blob objects, such as nuclei or cells in microscopic zebrafish images. This method is composed of three key steps. The first step is to produce a diffused gradient vector field by a physical elastic deformable model. In the second step, the flux image is computed on the diffused gradient vector field. The third step performs thresholding and nonmaximum suppression based on the flux image. We report the validation and experimental results of this method using zebrafish image datasets from three independent research labs. Both sensitivity and specificity of this method are over 90%. This method is able to differentiate closely juxtaposed or connected blob objects, with high sensitivity and specificity in different situations. It is characterized by a good, consistent performance in blob object detection.
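The flux-on-diffused-gradient idea behind this blob detector can be sketched as follows. This is a hedged approximation: simple neighbour-averaging stands in for the paper's elastic-deformable-model diffusion, and the flux is taken as the divergence of the diffused gradient field, where bright blob centres appear as strong sinks.

```python
import numpy as np

def diffuse(field, steps=20):
    # crude isotropic diffusion of a vector-field component
    # by repeated neighbour averaging (a stand-in for the paper's
    # physically based elastic-deformable-model diffusion)
    for _ in range(steps):
        field = 0.2 * (field
                       + np.roll(field, 1, 0) + np.roll(field, -1, 0)
                       + np.roll(field, 1, 1) + np.roll(field, -1, 1))
    return field

def flux_map(image, steps=20):
    gy, gx = np.gradient(image.astype(float))   # gradient vector field
    gx, gy = diffuse(gx, steps), diffuse(gy, steps)
    # flux ~ divergence of the diffused gradient field;
    # a bright blob centre is a sink (strongly negative divergence)
    return np.gradient(gx, axis=1) + np.gradient(gy, axis=0)

yy, xx = np.mgrid[0:33, 0:33]
blob = np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / 18.0)   # synthetic nucleus
flux = flux_map(blob)
centre = np.unravel_index(np.argmin(flux), flux.shape)
```

Thresholding this flux image and suppressing non-minima, as the abstract describes, then yields one detection per blob even when blobs touch, because each blob has its own flux sink.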
NASA Astrophysics Data System (ADS)
Adar, S.; Notesco, G.; Brook, A.; Livne, I.; Rojik, P.; Kopacková, V.; Zelenkova, K.; Misurec, J.; Bourguignon, A.; Chevrel, S.; Ehrler, C.; Fisher, C.; Hanus, J.; Shkolnisky, Y.; Ben Dor, E.
2011-11-01
Two HyMap images acquired over the same lignite open-pit mining site in Sokolov, Czech Republic, during the summers of 2009 and 2010 (12 months apart), were investigated in this study. The site selected for this research is one of three test sites (the others being in South Africa and Kyrgyzstan) within the framework of the EO-MINERS FP7 Project (http://www.eo-miners.eu). The goal of EO-MINERS is to "integrate new and existing Earth Observation tools to improve best practice in mining activities and to reduce the mining related environmental and societal footprint". Accordingly, the main objective of the current study was to develop hyperspectral-based means for the detection of small spectral changes and to relate these changes to possible degradation or reclamation indicators of the area under investigation. To ensure significant detection of small spectral changes, the temporal domain was investigated along with careful generation of reflectance information. Thus, intensive spectroradiometric ground measurements were carried out to ensure calibration and validation aspects during both overflights. The performance of these corrections was assessed using the Quality Indicators setup developed under a different FP7 project, EUFAR (http://www.eufar.net), which helped select the highest quality data for further work. This approach allows direct distinction of the real information from noise. The reflectance images were used as input for the application of spectral-based change-detection algorithms and indices to account for small and reliable changes. The related algorithms were then developed and applied on a pixel-by-pixel basis to map spectral changes over the space of a year. Using field spectroscopy and ground truth measurements on both overpass dates, it was possible to explain the results and allocate spatial kinetic processes of the environmental changes during the time elapsed between the flights.
It was found, for instance, that significant spectral changes are capable of revealing mineral processes, vegetation status and soil formation long before these are apparent to the naked eye. Further study is being conducted under the above initiative to extend this approach to other mining areas worldwide and to improve the robustness of the developed algorithm.
Kaltner, Sandra; Jansen, Petra
2017-01-01
This erratum reports an error in "Developmental changes in mental rotation: A dissociation between object-based and egocentric transformations" by Sandra Kaltner and Petra Jansen (Advances in Cognitive Psychology, 12, 67-78, doi: 10.5709/acp-0187-y). The error concerns the finding that, regarding developmental changes in object-based and egocentric transformations, a difference was found only in children. The incorrect version reported changes only in the adult group, and not within children or older adults. PMID:29201259
The objects of visuospatial short-term memory: Perceptual organization and change detection
Nikolova, Atanaska; Macken, Bill
2016-01-01
We used a colour change-detection paradigm where participants were required to remember colours of six equally spaced circles. Items were superimposed on a background so as to perceptually group them within (a) an intact ring-shaped object, (b) a physically segmented but perceptually completed ring-shaped object, or (c) a corresponding background segmented into three arc-shaped objects. A nonpredictive cue at the location of one of the circles was followed by the memory items, which in turn were followed by a test display containing a probe indicating the circle to be judged same/different. Reaction times for correct responses revealed a same-object advantage; correct responses were faster to probes on the same object as the cue than to equidistant probes on a segmented object. This same-object advantage was identical for physically and perceptually completed objects, but was only evident in reaction times, and not in accuracy measures. Not only, therefore, is it important to consider object-level perceptual organization of stimulus elements when assessing the influence of a range of factors (e.g., number and complexity of elements) in visuospatial short-term memory, but a more detailed picture of the structure of information in memory may be revealed by measuring speed as well as accuracy. PMID:26286369
Chemical Gas Sensors for Aerospace Applications
NASA Technical Reports Server (NTRS)
Hunter, Gary W.; Liu, C. C.
1998-01-01
Chemical sensors often need to be specifically designed (or tailored) to operate in a given environment. It is often the case that a chemical sensor that meets the needs of one application will not function adequately in another application. The more demanding the environment and specialized the requirement, the greater the need to adapt existing sensor technologies to meet these requirements or, as necessary, develop new sensor technologies. Aerospace (aeronautic and space) applications are particularly challenging since often these applications have specifications which have not previously been the emphasis of commercial suppliers. Further, the chemical sensing needs of aerospace applications have changed over the years to reflect the changing emphasis of society. Three chemical sensing applications of particular interest to the National Aeronautics and Space Administration (NASA) which illustrate these trends are launch vehicle leak detection, emission monitoring, and fire detection. Each of these applications reflects efforts ongoing throughout NASA. As described in NASA's "Three Pillars for Success", a document which outlines NASA's long term response to achieve the nation's priorities in aerospace transportation, agency-wide objectives include: improving safety and decreasing the cost of space travel, significantly decreasing the amount of emissions produced by aeronautic engines, and improving the safety of commercial airline travel. As will be discussed below, chemical sensing in leak detection, emission monitoring, and fire detection will help enable the agency to meet these objectives. Each application has vastly different problems associated with the measurement of chemical species. Nonetheless, the development of a common base technology can address the measurement needs of a number of applications.
NASA Astrophysics Data System (ADS)
Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui
2016-10-01
Change detection is of high practical value to hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for performing change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering methods to classify the pixels of the DI into changed and unchanged classes. Some useful information may get lost in the DI generation process. This paper proposes an SAR image change detection method based on the neighborhood-based ratio (NR) and the extreme learning machine (ELM). The NR operator is utilized to obtain pixels that have a high probability of being changed or unchanged. Then, image patches centered at these pixels are generated, and an ELM is employed to train a model by using these patches. Finally, pixels in both original SAR images are classified by the pretrained ELM model. The preclassification result and the ELM classification result are combined to form the final change map. The experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and is effective in detecting change information among multitemporal SAR images.
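A much-simplified stand-in for the neighborhood-ratio preclassification can be sketched as follows. The paper's NR operator combines pixel-wise and neighborhood terms; here only ratios of local means are used, and the speckled image pair is synthetic, so this is an illustration of the principle rather than the published operator.

```python
import numpy as np

def neighbourhood_ratio(img1, img2, half=1):
    # Ratio of local mean intensities between the two acquisition dates.
    # Values near 1 suggest "unchanged"; values near 0 suggest "changed".
    pad1 = np.pad(img1.astype(float), half, mode='edge')
    pad2 = np.pad(img2.astype(float), half, mode='edge')
    out = np.empty(img1.shape, dtype=float)
    h, w = img1.shape
    for i in range(h):
        for j in range(w):
            m1 = pad1[i:i + 2 * half + 1, j:j + 2 * half + 1].mean()
            m2 = pad2[i:i + 2 * half + 1, j:j + 2 * half + 1].mean()
            out[i, j] = min(m1, m2) / max(m1, m2)
    return out

rng = np.random.default_rng(3)
before = rng.gamma(4.0, 25.0, size=(20, 20))           # speckled background
after = before * rng.gamma(4.0, 0.25, size=(20, 20))   # same scene, new speckle
after[5:10, 5:10] += 400.0                             # a genuinely changed patch
nr = neighbourhood_ratio(before, after)
```

Because the ratio averages over a neighborhood, independent speckle largely cancels, while the truly changed patch yields a clearly lower ratio; pixels at the extremes of this map are the confident "changed"/"unchanged" samples that can train a classifier such as an ELM.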
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Zhao, Dawei; Fu, Hao; Xiao, Liang; Wu, Tao; Dai, Bin
2018-06-22
Multi-object tracking is a crucial problem for autonomous vehicle. Most state-of-the-art approaches adopt the tracking-by-detection strategy, which is a two-step procedure consisting of the detection module and the tracking module. In this paper, we improve both steps. We improve the detection module by incorporating the temporal information, which is beneficial for detecting small objects. For the tracking module, we propose a novel compressed deep Convolutional Neural Network (CNN) feature based Correlation Filter tracker. By carefully integrating these two modules, the proposed multi-object tracking approach has the ability of re-identification (ReID) once the tracked object gets lost. Extensive experiments were performed on the KITTI and MOT2015 tracking benchmarks. Results indicate that our approach outperforms most state-of-the-art tracking approaches.
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
Lidar-based door and stair detection from a mobile robot
NASA Astrophysics Data System (ADS)
Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet
2010-04-01
We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize the memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.
Salient object detection based on multi-scale contrast.
Wang, Hai; Dai, Lei; Cai, Yingfeng; Sun, Xiaoqiang; Chen, Long
2018-05-01
With the development of deep learning networks, salient object detection based on deep-network feature extraction has made a great breakthrough compared to traditional methods. At present, salient object detection mainly relies on very deep convolutional networks to extract features. In deep learning networks, however, a dramatic increase in network depth may instead cause more training errors. In this paper, we use a residual network to increase network depth while mitigating the errors caused by the increased depth. Inspired by image simplification, we use color and texture features to obtain simplified images at multiple scales by means of region assimilation on the basis of superpixels, in order to reduce the complexity of the images and improve the accuracy of salient target detection. We refine features at the pixel level with a multi-scale feature correction method to avoid the feature errors introduced when the image is simplified at the region level. The final fully connected layer not only integrates multi-scale, multi-level features but also acts as the classifier of salient targets. The experimental results show that the proposed model achieves better results than other salient object detection models based on the original deep learning networks. Copyright © 2018 Elsevier Ltd. All rights reserved.
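The residual idea that lets depth increase without added training error can be shown in a few lines. This is a generic sketch of a residual block, not the paper's network; the shapes and zero initialization are chosen only to expose the identity-shortcut property.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    # y = x + F(x): the block only has to learn the residual mapping F,
    # so stacking many such blocks cannot make the network worse than
    # the identity, which is what mitigates depth-induced training error
    return x + relu(x @ W1) @ W2

rng = np.random.default_rng(6)
x = rng.normal(size=(1, 16))
W1 = np.zeros((16, 16))
W2 = np.zeros((16, 16))
# with the residual branch zeroed out, the block is exactly the identity
y = residual_block(x, W1, W2)
```

A plain (non-residual) layer with zero weights would output zeros instead of `x`, which is why very deep plain networks can degrade while residual ones do not.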
NASA Astrophysics Data System (ADS)
Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.
2018-04-01
This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detecting and recognizing bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on low-power processors.
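The greedy bandit strategy for choosing which cascade to run can be sketched as an epsilon-greedy selector. This is a hedged illustration, not the paper's algorithm: the reward model (1 for a detection, 0 otherwise), epsilon value, and simulated detection rates are all assumptions.

```python
import random

class CascadeSelector:
    # Treat each Viola-Jones cascade (one per object class) as an arm of an
    # N-armed bandit; mostly run the cascade with the best observed reward,
    # occasionally exploring the others.
    def __init__(self, n_cascades, epsilon=0.1):
        self.counts = [0] * n_cascades
        self.values = [0.0] * n_cascades    # running mean detection reward
        self.epsilon = epsilon

    def pick(self):
        if random.random() < self.epsilon:            # explore
            return random.randrange(len(self.counts))
        return max(range(len(self.counts)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

random.seed(4)
sel = CascadeSelector(n_cascades=3)
true_rates = [0.1, 0.8, 0.3]    # cascade 1 matches the logo actually in frame
for _ in range(500):
    arm = sel.pick()
    sel.update(arm, 1.0 if random.random() < true_rates[arm] else 0.0)
```

After a short burn-in the selector concentrates its frames on the cascade that keeps firing, which is what makes the multi-class scheme cheap enough for a live video stream.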
Change detection and classification in brain MR images using change vector analysis.
Simões, Rita; Slump, Cornelis
2011-01-01
The automatic detection of longitudinal changes in brain images is valuable in the assessment of disease evolution and treatment efficacy. Most existing change detection methods that are currently used in clinical research to monitor patients suffering from neurodegenerative diseases--such as Alzheimer's--focus on large-scale brain deformations. However, such patients often have other brain impairments, such as infarcts, white matter lesions and hemorrhages, which are typically overlooked by the deformation-based methods. Other unsupervised change detection algorithms have been proposed to detect tissue intensity changes. The outcome of these methods is typically a binary change map, which identifies changed brain regions. However, understanding what types of changes these regions underwent is likely to provide equally important information about lesion evolution. In this paper, we present an unsupervised 3D change detection method based on Change Vector Analysis. We compute and automatically threshold the Generalized Likelihood Ratio map to obtain a binary change map. Subsequently, we perform histogram-based clustering to classify the change vectors. We obtain a Kappa Index of 0.82 using various types of simulated lesions. The classification error is 2%. Finally, we are able to detect and discriminate both small changes and ventricle expansions in datasets from Mild Cognitive Impairment patients.
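The change-vector idea, magnitude for locating change and direction for classifying it, can be sketched on a single intensity feature. This is a simplified illustration on synthetic 2-D slices: the median-based threshold stands in for the paper's automatically thresholded Generalized Likelihood Ratio, and the sign-based labelling stands in for its histogram clustering.

```python
import numpy as np

# Each voxel's change vector is the difference between its features at
# time 1 and time 2; magnitude locates change, direction suggests its type.
rng = np.random.default_rng(5)
t1 = rng.normal(100.0, 2.0, size=(32, 32))            # baseline scan
t2 = t1 + rng.normal(0.0, 2.0, size=(32, 32))         # follow-up, scan noise
t2[8:16, 8:16] += 30.0     # intensity increase (simulated lesion appearance)
t2[20:28, 4:12] -= 30.0    # intensity decrease (simulated lesion regression)

delta = t2 - t1
magnitude = np.abs(delta)
# robust noise-based threshold (a simple stand-in for GLR thresholding)
threshold = 6.0 * np.median(magnitude)
change_mask = magnitude > threshold
# classify change vectors by direction: +1 = increase, -1 = decrease, 0 = none
change_type = np.sign(delta) * change_mask
```

The binary map answers "where did the brain change", while the signed (direction) map begins to answer "how", separating, for example, a growing hyperintensity from a shrinking one.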
Kesner, Raymond P; Kirk, Ryan A; Yu, Zhenghui; Polansky, Caitlin; Musso, Nick D
2016-03-01
In order to examine the role of the dorsal dentate gyrus (dDG) in slope (vertical space) recognition and possible pattern separation, various slope (vertical space) degrees were used in a novel exploratory paradigm to measure novelty detection for changes in slope (vertical space) recognition memory and slope memory pattern separation in Experiment 1. The results of the experiment indicate that control rats displayed a slope recognition memory function with a pattern separation process for slope memory that is dependent upon the magnitude of change in slope between study and test phases. In contrast, the dDG lesioned rats displayed an impairment in slope recognition memory, though because there was no significant interaction between the two groups and slope memory, a reliable pattern separation impairment for slope could not be firmly established in the DG lesioned rats. In Experiment 2, in order to determine whether the dDG plays a role in shades of grey spatial context recognition and possible pattern separation, shades of grey were used in a novel exploratory paradigm to measure novelty detection for changes in the shades of grey context environment. The results of the experiment indicate that control rats displayed a shades of grey-context pattern separation effect across levels of separation of context (shades of grey). In contrast, the DG lesioned rats displayed a significant interaction between the two groups and levels of shades of grey, suggesting impairment in a pattern separation function for levels of shades of grey. In Experiment 3, in order to determine whether the dorsal CA3 (dCA3) plays a role in object pattern completion, a new task requiring less training was used, with a choice based on selecting the correct set of objects in a two-choice discrimination task. The results indicated that control rats displayed a pattern completion function based on the availability of one, two, three or four cues.
In contrast, the dCA3 lesioned rats displayed a significant interaction between the two groups and the number of available objects, suggesting impairment in a pattern completion function for object cues.
Toward Microsatellite Based Space Situational Awareness
NASA Astrophysics Data System (ADS)
Scott, L.; Wallace, B.; Sale, M.; Thorsteinson, S.
2013-09-01
The NEOSSat microsatellite is a dual mission space telescope which will perform asteroid detection and Space Situational Awareness (SSA) observation experiments on deep space, earth orbiting objects. NEOSSat was launched on 25 February 2013 into an 800 km dawn-dusk sun-synchronous orbit and is currently undergoing satellite commissioning. The microsatellite consists of a small aperture optical telescope, GPS receiver, high performance attitude control system, and stray light rejection baffle designed to reject stray light from the Sun while searching for asteroids at solar elongations of 45 degrees along the ecliptic. The SSA experimental mission, referred to as HEOSS (High Earth Orbit Space Surveillance), will focus on objects in deep space orbits. The HEOSS mission objective is to evaluate the utility of microsatellites to perform catalog maintenance observations of resident space objects in a manner consistent with the needs of the Canadian Forces. The advantages of placing a space surveillance sensor in low Earth orbit are that the observer can conduct observations without the day-night interruption cycle experienced by ground based telescopes, the telescope is insensitive to adverse weather, and the system has visibility to deep space resident space objects which are not normally visible from ground based sensors. Also, from a photometric standpoint, the microsatellite is able to conduct observations on objects from a rapidly changing observer position. Spin axis estimation for geostationary satellites may be possible, and an experiment to characterize the spin axes of distant resident space objects is being planned. Also, HEOSS offers the ability to conduct observations of satellites at high phase angles, which can potentially extend the trackable portion of space in which deep space objects' orbits can be monitored. In this paper we describe the HEOSS SSA experimental data processing system and the preliminary findings of the catalog maintenance experiments.
The placement of a space based space surveillance sensor in low Earth orbit introduces tasking and image processing complexities such as cosmic ray rejection, scattered light from Earth's limb, and unique scheduling limitations due to the observer's rapid positional change, and we describe first-look microsatellite space surveillance lessons from this unique orbital vantage point.
Moving Object Detection Using a Parallax Shift Vector Algorithm
NASA Astrophysics Data System (ADS)
Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.
2018-07-01
There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid searches and the more sensitive matched filtering and synthetic tracking techniques can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness to asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
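The synthetic tracking idea the abstract adapts to parallax motion is shift-and-stack: co-add the image sequence along a hypothesized per-frame shift vector sequence so a moving object's signal accumulates coherently while the background averages down. A minimal sketch with integer shifts (the shift vectors would come from a parallax motion model, which is not reproduced here):

```python
import numpy as np

def shift_and_stack(frames, shift_vectors):
    """Co-add frames along a hypothesized per-frame shift sequence.

    frames: (n, H, W) image stack; shift_vectors: length-n sequence of
    integer (dy, dx) offsets of the object in each frame relative to
    frame 0. The sequence may be highly nonlinear, e.g. generated from a
    parallax model rather than a constant-velocity assumption.
    """
    stack = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shift_vectors):
        # Negated offsets move the object back to its frame-0 position
        # before co-adding (np.roll wraps at the edges; real pipelines
        # would pad or crop instead).
        stack += np.roll(frame, shift=(-dy, -dx), axis=(0, 1))
    return stack / len(frames)
```

A detector then searches the stacked image for peaks; trying many candidate shift sequences is what drives the processing load the paper discusses.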
Seelye, Adriana; Mattek, Nora; Sharma, Nicole; Witter, Phelps; Brenner, Ariella; Wild, Katherine; Dodge, Hiroko; Kaye, Jeffrey
2017-01-01
Background: Driving is a key functional activity for many older adults, and changes in routine driving may be associated with emerging cognitive decline due to early neurodegenerative disease. Current methods for assessing driving such as self-report are inadequate for identifying and monitoring subtle changes in driving patterns that may be the earliest signals of functional change in developing mild cognitive impairment (MCI). Objective: This proof of concept study aimed to establish the feasibility of continuous driving monitoring in a sample of cognitively normal and MCI older adults for an average of 206 days using an unobtrusive driving sensor and demonstrate that derived sensor-based driving metrics could effectively discriminate between MCI and cognitively intact groups. Methods: Novel objective driving measures derived from 6 months of routine driving monitoring were examined in older adults with intact cognition (n = 21) and MCI (n = 7) who were enrolled in the Oregon Center for Aging and Technology (ORCATECH) longitudinal assessment program. Results: Unobtrusive continuous monitoring of older adults’ routine driving using a driving sensor was feasible and well accepted. MCI participants drove fewer miles and spent less time on the highway per day than cognitively intact participants. MCI drivers showed less day-to-day fluctuations in their driving habits than cognitively intact drivers. Conclusion: Sensor-based driving measures are objective, unobtrusive, and can be assessed every time a person drives his or her vehicle to identify clinically meaningful changes in daily driving. This novel methodology has the potential to be useful for the early detection and monitoring of changes in daily functioning within individuals. PMID:28731434
What's the object of object working memory in infancy? Unraveling 'what' and 'how many'.
Kibbe, Melissa M; Leslie, Alan M
2013-06-01
Infants have a bandwidth-limited object working memory (WM) that can both individuate and identify objects in a scene (answering 'how many?' or 'what?', respectively). Studies of infants' WM for objects have typically looked for limits on either 'how many' or 'what', yielding different estimates of infant capacity. Infants can keep track of about three individuals (regardless of identity), but appear to be much more limited in the number of specific identities they can recall. Why are the limits on 'how many' and 'what' different? Are the limits entirely separate, do they interact, or are they simply two different aspects of the same underlying limit? We sought to unravel these limits in a series of experiments which tested 9- and 12-month-olds' WM for object identities under varying degrees of difficulty. In a violation-of-expectation looking-time task, we hid objects one at a time behind separate screens, and then probed infants' WM for the shape identity of the penultimate object in the sequence. We manipulated the difficulty of the task by varying both the number of objects in hiding locations and the number of means by which infants could detect a shape change to the probed object. We found that 9-month-olds' WM for identities was limited by the number of hiding locations: when the probed object was one of two objects hidden (one in each of two locations), 9-month-olds succeeded, and they did so even though they were given only one means to detect the change. However, when the probed object was one of three objects hidden (one in each of three locations), they failed, even when they were given two means to detect the shape change. Twelve-month-olds, by contrast, succeeded at the most difficult task level. Results show that WM for 'how many' and for 'what' are not entirely separate. Individuated objects are tracked relatively cheaply. Maintaining bindings between indexed objects and identifying featural information incurs a greater attentional/memory cost.
This cost reduces with development. We conclude that infant WM supports a small number of featureless object representations that index the current locations of objects. These can have featural information bound to them, but only at substantial cost.
Multi-Objective Community Detection Based on Memetic Algorithm
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels. PMID:25932646
Multi-objective community detection based on memetic algorithm.
Wu, Peng; Pan, Li
2015-01-01
Community detection has drawn a lot of attention as it can provide invaluable help in understanding the function and visualizing the structure of networks. Since single objective optimization methods have intrinsic drawbacks to identifying multiple significant community structures, some methods formulate the community detection as multi-objective problems and adopt population-based evolutionary algorithms to obtain multiple community structures. Evolutionary algorithms have strong global search ability, but have difficulty in locating local optima efficiently. In this study, in order to identify multiple significant community structures more effectively, a multi-objective memetic algorithm for community detection is proposed by combining multi-objective evolutionary algorithm with a local search procedure. The local search procedure is designed by addressing three issues. Firstly, nondominated solutions generated by evolutionary operations and solutions in dominant population are set as initial individuals for local search procedure. Then, a new direction vector named as pseudonormal vector is proposed to integrate two objective functions together to form a fitness function. Finally, a network specific local search strategy based on label propagation rule is expanded to search the local optimal solutions efficiently. The extensive experiments on both artificial and real-world networks evaluate the proposed method from three aspects. Firstly, experiments on influence of local search procedure demonstrate that the local search procedure can speed up the convergence to better partitions and make the algorithm more stable. Secondly, comparisons with a set of classic community detection methods illustrate the proposed method can find single partitions effectively. Finally, the method is applied to identify hierarchical structures of networks which are beneficial for analyzing networks in multi-resolution levels.
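The label propagation rule that this local search strategy expands on is itself compact: every node repeatedly adopts the most frequent label among its neighbors until labels stabilize. The sketch below is a deterministic variant (ties broken by smallest label, fixed node order) chosen for reproducibility; the paper's version is adapted to its pseudonormal-vector fitness function, which is not reproduced here:

```python
from collections import Counter

def label_propagation(adjacency, n_iter=50):
    """Deterministic label-propagation community detection sketch.

    adjacency: dict mapping each node to a list of its neighbors.
    Each node starts in its own community, then repeatedly adopts the
    most frequent label among its neighbors (smallest label on ties)
    until no label changes or n_iter sweeps elapse.
    """
    labels = {node: node for node in adjacency}  # every node starts alone
    for _ in range(n_iter):
        changed = False
        for node in adjacency:
            counts = Counter(labels[nb] for nb in adjacency[node])
            best = max(counts.values())
            choice = min(lab for lab, c in counts.items() if c == best)
            if labels[node] != choice:
                labels[node], changed = choice, True
        if not changed:
            break
    return labels
```

On two disconnected triangles this converges in two sweeps to one label per triangle, which illustrates why the rule makes a fast local refinement step inside an evolutionary loop.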
NASA Astrophysics Data System (ADS)
Chaudhary, A.; Payne, T.; Kinateder, K.; Dao, P.; Beecher, E.; Boone, D.; Elliott, B.
The objective of on-line flagging in this paper is to perform interactive assessment of geosynchronous satellite anomalies such as cross-tagging of satellites in a cluster, solar panel offset changes, etc. This assessment will utilize a Bayesian belief propagation procedure and will include automated update of baseline signature data for the satellite, while accounting for seasonal changes. Its purpose is to enable an ongoing, automated assessment of satellite behavior through its life cycle using the photometry data collected during the synoptic search performed by a ground or space-based sensor as a part of its metrics mission. The change in the satellite features will be reported along with the probabilities of Type I and Type II errors. The objective of adaptive sequential hypothesis testing in this paper is to define future sensor tasking for the purpose of characterization of fine features of the satellite. The tasking will be designed in order to maximize new information with the least number of photometry data points to be collected during the synoptic search by a ground or space-based sensor. Its calculation is based on the utilization of information entropy techniques. The tasking is defined by considering a sequence of hypotheses in regard to the fine features of the satellite. The optimal observation conditions are then ordered to maximize new information about a chosen fine feature. The combined objective of on-line flagging and adaptive sequential hypothesis testing is to progressively discover new information about the features of geosynchronous satellites by leveraging the regular but sparse cadence of data collection during the synoptic search performed by a ground or space-based sensor.
Automated Algorithm to Detect Changes in Geostationary Satellite's Configuration and Cross-Tagging
Dao, Phan, Air Force Research Laboratory/RVB
By characterizing geostationary satellites based on photometry and color photometry, analysts can evaluate satellite operational status and affirm its true identity. The process of ingesting photometry data and deriving satellite physical characteristics can be directed by analysts in a batch mode, meaning using a batch of recent data, or by automated algorithms in an on-line mode in which the assessment is updated with each new data point. Tools used for detecting change to a satellite's status or identity, whether performed with a human in the loop or by automated algorithms, are generally not built to detect with minimum latency and traceable confidence intervals. To alleviate those deficiencies, we investigate the use of Hidden Markov Models (HMM), in a Bayesian Network framework, to infer the hidden state (changed or unchanged) of a three-axis stabilized geostationary satellite using broadband and color photometry. Unlike frequentist statistics, which exploit only the stationary statistics of the observables in the database, HMM exploits the temporal pattern of the observables as well. The algorithm also operates in “learning” mode to gradually evolve the HMM and accommodate natural changes, such as those due to the seasonal dependence of a GEO satellite's light curve. Our technique is designed to operate with missing color data. The version that ingests both panchromatic and color data can accommodate gaps in color photometry data. That attribute is important because while color indices, e.g. Johnson R and B, enhance the belief (probability) of a hidden state, in real world situations flux data is collected sporadically in an untasked collect, and color data is limited and sometimes absent. Fluxes are measured with experimental error whose effect on the algorithm will be studied.
Photometry data in the AFRL's Geo Color Photometry Catalog and Geo Observations with Latitudinal Diversity Simultaneously (GOLDS) data sets are used to simulate a wide variety of operational changes and identity cross tags. The algorithm is tested against simulated sequences of observed magnitudes, mimicking the cadence of untasked SSN and other ground sensors, occasional operational changes, and possible occurrence of cross tags of in-cluster satellites. We would like to show that the on-line algorithm can detect change, sometimes right after the first post-change data point is analyzed, for zero latency. We also want to show the unsupervised “learning” capability that allows the HMM to evolve with time without user assistance. For example, the users are not required to “label” the true state of the data points.
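On-line inference of a changed/unchanged hidden state from a photometry stream can be sketched with a standard two-state HMM forward filter: at each new data point, propagate the belief through the transition model, reweight by the observation likelihood, and flag a change when the posterior for the changed state crosses a threshold. The transition matrix and likelihoods below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def forward_filter(obs_loglik, trans, prior):
    """Two-state HMM forward filter sketch for on-line change flagging.

    obs_loglik: (T, 2) log-likelihood of each observation under the
    'unchanged' (0) and 'changed' (1) states; trans: (2, 2) transition
    matrix; prior: (2,) initial state probabilities. Returns the filtered
    posterior P(state_t | obs_1..t) for each t, so a change can be
    flagged as soon as the posterior of state 1 crosses a threshold.
    """
    T = len(obs_loglik)
    post = np.zeros((T, 2))
    belief = np.asarray(prior, dtype=float)
    for t in range(T):
        belief = belief @ trans                # predict through transitions
        belief = belief * np.exp(obs_loglik[t])  # weight by observation
        belief /= belief.sum()                 # renormalize
        post[t] = belief
    return post
```

Because each update uses only the latest observation, the posterior can react on the first post-change data point, which is the zero-latency behavior the abstract aims to demonstrate.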
A Tactile Sensor Using Piezoresistive Beams for Detection of the Coefficient of Static Friction
Okatani, Taiyu; Takahashi, Hidetoshi; Noda, Kentaro; Takahata, Tomoyuki; Matsumoto, Kiyoshi; Shimoyama, Isao
2016-01-01
This paper reports on a tactile sensor using piezoresistive beams for detection of the coefficient of static friction merely by pressing the sensor against an object. The sensor chip is composed of three pairs of piezoresistive beams arranged in parallel and embedded in an elastomer; this sensor is able to measure the vertical and lateral strains of the elastomer. The coefficient of static friction is estimated from the ratio of the fractional resistance changes corresponding to the sensing elements of vertical and lateral strains when the sensor is in contact with an object surface. We applied a normal force on the sensor surface through objects with coefficients of static friction ranging from 0.2 to 1.1. The fractional resistance changes corresponding to vertical and lateral strains were proportional to the applied force. Furthermore, the relationship between these responses changed according to the coefficients of static friction. The experimental result indicated the proposed sensor could determine the coefficient of static friction before a global slip occurs. PMID:27213374
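The readout idea, estimating the coefficient of static friction from the ratio of the two fractional resistance changes, can be sketched with a simple linear model. The calibration gains below are hypothetical placeholders (they would come from the proportionality constants measured in the paper's loading experiments), and the ratio equals the friction coefficient only at incipient slip:

```python
def estimate_static_friction(dR_lateral, dR_vertical,
                             k_lateral=1.0, k_vertical=1.0):
    """Hypothetical linear readout model for the piezoresistive beam sensor.

    If the fractional resistance changes dR_lateral and dR_vertical are
    proportional to the lateral and normal forces (gains k_lateral,
    k_vertical from calibration), the coefficient of static friction at
    incipient slip is the ratio of the inferred forces.
    """
    f_lateral = dR_lateral / k_lateral   # inferred lateral (friction) force
    f_normal = dR_vertical / k_vertical  # inferred normal force
    return f_lateral / f_normal
```

The value of the approach in the paper is that this ratio changes with the contacted surface while both raw responses scale with the applied force, so the force magnitude cancels out.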
Kim, Nam-Hoon; Hwang, Jin Hwan; Cho, Jaegab; Kim, Jae Seong
2018-06-04
The characteristics of an estuary are determined by various factors, such as tides, waves, and river discharge, which also control the water quality of the estuary. Therefore, detecting changes in these characteristics is critical for managing environmental quality and pollution, and so monitoring locations should be selected carefully. The present study proposes a framework for deploying monitoring systems based on a graphical method of spatial and temporal optimization. With well-validated numerical simulation results, the monitoring locations are determined to capture changes in water quality and pollutants depending on variations in tide, current, and freshwater discharge. The deployment strategy to find the appropriate monitoring locations is designed with a constrained optimization method, which finds solutions by constraining the objective function to the feasible regions. The objective and constraint functions are constructed with an interpolation technique such as objective analysis. Even with a smaller number of monitoring locations, the present method performs equivalently to an arbitrarily and evenly deployed monitoring system.
Microwave imaging of spinning object using orbital angular momentum
NASA Astrophysics Data System (ADS)
Liu, Kang; Li, Xiang; Gao, Yue; Wang, Hongqiang; Cheng, Yongqiang
2017-09-01
The linear Doppler shift used for the detection of a spinning object becomes significantly weakened when the line of sight (LOS) is perpendicular to the object, which will result in the failure of detection. In this paper, a new detection and imaging technique for spinning objects is developed. The rotational Doppler phenomenon is observed by using the microwave carrying orbital angular momentum (OAM). To converge the radiation energy on the area where objects might exist, the generation method of OAM beams is proposed based on the frequency diversity principle, and the imaging model is derived accordingly. The detection method of the rotational Doppler shift and the imaging approach of the azimuthal profiles are proposed, which are verified by proof-of-concept experiments. Simulation and experimental results demonstrate that OAM beams can still be used to obtain the azimuthal profiles of spinning objects even when the LOS is perpendicular to the object. This work remedies the insufficiency in existing microwave sensing technology and offers a new solution to the object identification problem.
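The rotational Doppler phenomenon this imaging technique exploits follows a standard relation from the rotational-Doppler literature: a scatterer spinning at angular rate Omega, observed with an OAM beam of topological charge l, produces a frequency shift of l·Omega/(2π). The paper's multi-mode imaging model builds on shifts of this form; the helper below simply evaluates the relation:

```python
import math

def rotational_doppler_shift(oam_mode, spin_rate_rad_s):
    """Rotational Doppler shift (Hz) for a scatterer spinning at
    spin_rate_rad_s (rad/s) observed with an OAM beam of topological
    charge oam_mode: delta_f = l * Omega / (2 * pi).

    Unlike the linear Doppler shift, this is independent of the angle
    between the line of sight and the spin axis component along it,
    which is why detection survives the perpendicular-LOS geometry.
    """
    return oam_mode * spin_rate_rad_s / (2 * math.pi)
```

For example, an object spinning at 10 revolutions per second observed with an l = 2 beam yields a 20 Hz shift, while the linear Doppler shift along a perpendicular line of sight would be near zero.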
NASA Astrophysics Data System (ADS)
Lin, Duo; Feng, Shangyuan; Pan, Jianji; Chen, Yanping; Lin, Juqiang; Sun, Liqing; Chen, Rong
2011-11-01
Surface-enhanced Raman spectroscopy (SERS) is a vibrational spectroscopic technique that is capable of probing the biomolecular changes associated with diseased transformation. The objective of our study was to explore gold nanoparticle based SERS to obtain blood serum biochemical information for non-invasive colorectal cancer detection. SERS measurements were performed on two groups of blood serum samples: one group from patients (n = 38) with pathologically confirmed colorectal cancer and the other group from healthy volunteers (control subjects, n = 45). Tentative assignments of the Raman bands in the measured SERS spectra suggested interesting cancer specific biomolecular changes, including an increase in the relative amounts of nucleic acid, a decrease in the percentage of saccharide and protein contents in the blood serum of colorectal cancer patients as compared to that of healthy subjects. Principal component analysis (PCA) of the measured SERS spectra separated the spectral features of the two groups into two distinct clusters with little overlap. Linear discriminant analysis (LDA) based on the PCA-generated features differentiated the colorectal cancer SERS spectra from normal SERS spectra with high sensitivity (97.4%) and specificity (100%). The results from this exploratory study demonstrated that gold nanoparticle based SERS serum analysis combined with PCA-LDA has tremendous potential for the non-invasive detection of colorectal cancers.
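The PCA-LDA classification scheme described, project high-dimensional spectra onto a few principal components and then fit a two-class linear discriminant in the reduced space, can be sketched self-containedly with NumPy. This is a minimal illustration of the generic technique, not the authors' exact pipeline (their component count, preprocessing, and validation scheme are not reproduced):

```python
import numpy as np

def pca_lda_train(X, y, n_components=2):
    """Fit PCA followed by a two-class Fisher discriminant.

    X: (n_samples, n_features) spectra; y: 0/1 class labels. Returns the
    data mean, principal axes, LDA direction, and the decision threshold
    (midpoint of the projected class means).
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via SVD: rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = Vt[:n_components].T
    Z = Xc @ pcs
    # Fisher LDA direction: Sw^-1 (m1 - m0), with a small ridge term
    # so the within-class scatter is always invertible.
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(n_components), m1 - m0)
    threshold = 0.5 * (m0 @ w + m1 @ w)
    return mean, pcs, w, threshold

def pca_lda_predict(X, mean, pcs, w, threshold):
    """Label new spectra: 1 if the projection exceeds the threshold."""
    return (((X - mean) @ pcs) @ w > threshold).astype(int)
```

In practice sensitivity/specificity figures like those quoted would come from held-out or cross-validated predictions, not from classifying the training set.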
NASA Astrophysics Data System (ADS)
Lin, Duo; Feng, Shangyuan; Pan, Jianji; Chen, Yanping; Lin, Juqiang; Sun, Liqing; Chen, Rong
2012-03-01
Surface-enhanced Raman spectroscopy (SERS) is a vibrational spectroscopic technique that is capable of probing the biomolecular changes associated with diseased transformation. The objective of our study was to explore gold nanoparticle based SERS to obtain blood serum biochemical information for non-invasive colorectal cancer detection. SERS measurements were performed on two groups of blood serum samples: one group from patients (n = 38) with pathologically confirmed colorectal cancer and the other group from healthy volunteers (control subjects, n = 45). Tentative assignments of the Raman bands in the measured SERS spectra suggested interesting cancer specific biomolecular changes, including an increase in the relative amounts of nucleic acid, a decrease in the percentage of saccharide and protein contents in the blood serum of colorectal cancer patients as compared to that of healthy subjects. Principal component analysis (PCA) of the measured SERS spectra separated the spectral features of the two groups into two distinct clusters with little overlap. Linear discriminant analysis (LDA) based on the PCA-generated features differentiated the colorectal cancer SERS spectra from normal SERS spectra with high sensitivity (97.4%) and specificity (100%). The results from this exploratory study demonstrated that gold nanoparticle based SERS serum analysis combined with PCA-LDA has tremendous potential for the non-invasive detection of colorectal cancers.
Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.
Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung
2018-02-03
A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environmental light changes, reflections on glasses surfaces, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor
Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung
2018-01-01
A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environmental light changes, reflections on glasses surfaces, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods. PMID:29401681
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)
1991-01-01
A method and apparatus for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered. The observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time slower than the formation time are not observable and do not clutter the output image view.
Chapter 17: Bioimage Informatics for Systems Pharmacology
Li, Fuhai; Yin, Zheng; Jin, Guangxu; Zhao, Hong; Wong, Stephen T. C.
2013-01-01
Recent advances in automated high-resolution fluorescence microscopy and robotic handling have made the systematic and cost effective study of diverse morphological changes within a large population of cells possible under a variety of perturbations, e.g., drugs, compounds, metal catalysts, RNA interference (RNAi). Cell population-based studies deviate from conventional microscopy studies on a few cells, and could provide stronger statistical power for drawing experimental observations and conclusions. However, it is challenging to manually extract and quantify phenotypic changes from the large amounts of complex image data generated. Thus, bioimage informatics approaches are needed to rapidly and objectively quantify and analyze the image data. This paper provides an overview of the bioimage informatics challenges and approaches in image-based studies for drug and target discovery. The concepts and capabilities of image-based screening are first illustrated by a few practical examples investigating different kinds of phenotypic changes caused by drugs, compounds, or RNAi. The bioimage analysis approaches, including object detection, segmentation, and tracking, are then described. Subsequently, the quantitative features, phenotype identification, and multidimensional profile analysis for profiling the effects of drugs and targets are summarized. Moreover, a number of publicly available software packages for bioimage informatics are listed for further reference. It is expected that this review will help readers, including those without bioimage informatics expertise, understand the capabilities, approaches, and tools of bioimage informatics and apply them to advance their own studies. PMID:23633943
Deep Space Wide Area Search Strategies
NASA Astrophysics Data System (ADS)
Capps, M.; McCafferty, J.
There is an urgent need to expand the space situational awareness (SSA) mission beyond catalog maintenance to providing near real-time indications and warnings of emerging events. While building and maintaining a catalog of space objects is essential to SSA, this does not address the threat of uncatalogued and uncorrelated deep space objects. The Air Force therefore has an interest in transformative technologies to scan the geostationary (GEO) belt for uncorrelated space objects. Traditional ground-based electro-optical sensors are challenged to simultaneously detect dim objects while covering large areas of the sky using current CCD technology. Time-delayed integration (TDI) scanning has the potential to enable significantly larger coverage rates while maintaining sensitivity for detecting near-GEO objects. This paper investigates strategies for employing TDI sensing technology from a ground-based electro-optical telescope toward providing tactical indications and warnings of deep space threats. We present results of a notional wide area search TDI sensor that scans the GEO belt from three locations: Maui, New Mexico, and Diego Garcia. Deep space objects in the NASA 2030 debris catalog are propagated over multiple nights as an indicative data set to emulate notional uncatalogued near-GEO orbits which may be encountered by the TDI sensor. Multiple scan patterns are designed and simulated to compare and contrast performance based on 1) efficiency in coverage, 2) number of objects detected, and 3) rate at which detections occur, to enable follow-up observations by other space surveillance network (SSN) sensors. A step-stare approach is also modeled using a dedicated, co-located sensor notionally similar to the Ground-Based Electro-Optical Deep Space Surveillance (GEODSS) tower. Equivalent sensitivities are assumed. This analysis quantifies the relative benefit of TDI scanning for the wide area search mission.
Cippitelli, Andrea; Zook, Michelle; Bell, Lauren; Damadzic, Ruslan; Eskay, Robert L.; Schwandt, Melanie; Heilig, Markus
2010-01-01
Excessive alcohol use leads to neurodegeneration in several brain structures including the hippocampal dentate gyrus and the entorhinal cortex. Cognitive deficits that result are among the most insidious and debilitating consequences of alcoholism. The object exploration task (OET) provides a sensitive measurement of spatial memory impairment induced by hippocampal and cortical damage. In this study, we examine whether the observed neurotoxicity produced by a 4-day binge ethanol treatment results in long-term memory impairment by observing the time course of reactions to spatial change (object configuration) and non-spatial change (object recognition). Wistar rats were assessed for their abilities to detect spatial configuration in the OET at 1 week and 10 weeks following the ethanol treatment, in which ethanol groups received 9–15 g/kg/day and achieved blood alcohol levels over 300 mg/dl. At 1 week, results indicated that the binge alcohol treatment produced impairment in both spatial memory and non-spatial object recognition performance. Unlike the controls, ethanol treated rats did not increase the duration or number of contacts with the displaced object in the spatial memory task, nor did they increase the duration of contacts with the novel object in the object recognition task. After 10 weeks, spatial memory remained impaired in the ethanol treated rats but object recognition ability was recovered. Our data suggest that episodes of binge-like alcohol exposure result in long-term and possibly permanent impairments in memory for the configuration of objects during exploration, whereas the ability to detect non-spatial changes is only temporarily affected. PMID:20849966
NASA Astrophysics Data System (ADS)
Chavis, Christopher
Using commercial digital cameras in conjunction with Unmanned Aerial Systems (UAS) to generate 3-D Digital Surface Models (DSMs) and orthomosaics is emerging as a cost-effective alternative to Light Detection and Ranging (LiDAR). Powerful software applications such as Pix4D and APS can automate the generation of DSM and orthomosaic products from a handful of inputs. However, the accuracy of these models is relatively untested. The objectives of this study were to generate multiple DSM and orthomosaic pairs of the same area using Pix4D and APS from flights of imagery collected with a lightweight UAS. The accuracy of each individual DSM was assessed, in addition to the consistency of the method in modeling one location over a period of time. Finally, this study determined whether the DSMs automatically generated using a lightweight UAS and commercial digital cameras could be used for detecting changes in elevation, and at what scale. Accuracy was determined by comparing DSMs to a series of reference points collected with survey-grade GPS. Other GPS points were also used as control points to georeference the products within Pix4D and APS. The effectiveness of the products for change detection was assessed through image differencing and observation of artificially induced, known elevation changes. The vertical accuracy with the optimal data and model is ≈ 25 cm, and the highest consistency over repeat flights is a standard deviation of ≈ 5 cm. Elevation change detection based on such UAS imagery and DSM models should be viable for detecting infrastructure change in urban or suburban environments with little dense canopy vegetation.
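The image-differencing step described in this abstract reduces to subtracting two co-registered DSM rasters and flagging cells whose elevation difference exceeds the combined vertical uncertainty of the two models. A minimal sketch (not the study's implementation; the default sigma reuses the reported ≈ 25 cm vertical accuracy, and the 1.96 factor is an assumed 95% significance level):

```python
import numpy as np

def dsm_change_mask(dsm_before, dsm_after, vertical_sigma=0.25, k=1.96):
    """Difference two co-registered DSMs and flag cells whose change
    exceeds k * sigma * sqrt(2) (two independent surfaces, each with
    vertical uncertainty `vertical_sigma`, in metres)."""
    diff = np.asarray(dsm_after, float) - np.asarray(dsm_before, float)
    threshold = k * vertical_sigma * np.sqrt(2.0)  # ~0.69 m with defaults
    return diff, np.abs(diff) > threshold

# Toy example: a 1 m "structure" appears in the centre of a flat surface.
before = np.zeros((10, 10))
after = before.copy()
after[4:6, 4:6] += 1.0
diff, changed = dsm_change_mask(before, after)
```

With real data the two rasters would first need to share a grid and datum (e.g. via the GPS control points mentioned above); the sketch assumes that co-registration has already been done.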
Cloud Detection of Optical Satellite Images Using Support Vector Machine
NASA Astrophysics Data System (ADS)
Lee, Kuan-Yi; Lin, Chao-Hung
2016-06-01
Cloud cover is generally present in optical remote-sensing images, which limits the usage of acquired images and increases the difficulty of data analysis, such as image compositing, correction of atmospheric effects, calculation of vegetation indices, land cover classification, and land cover change detection. In previous studies, thresholding has been a common and useful method for cloud detection. However, a selected threshold is usually suitable only for certain cases or local study areas, and it may fail in other cases. In other words, thresholding-based methods are data-sensitive. Moreover, there are many exceptions to handle, and the environment changes dynamically, so using the same threshold value on various data is not effective. In this study, a threshold-free method based on a Support Vector Machine (SVM) is proposed, which avoids the abovementioned problems. The main idea of this study is to adopt a statistical model to detect clouds instead of a subjective thresholding-based method. The features used in a classifier are the key to successful classification. The Automatic Cloud Cover Assessment (ACCA) algorithm, which is based on physical characteristics of clouds, is therefore used to distinguish clouds from other objects. Similarly, the Fmask algorithm (Zhu et al., 2012) uses many thresholds and criteria to screen clouds, cloud shadows, and snow. The feature extraction is accordingly based on the ACCA algorithm and Fmask. Spatial and temporal information are also important for satellite images; consequently, the co-occurrence matrix and the temporal variance with uniformity of the major principal axis are used in the proposed method. We aim to classify images into three groups: cloud, non-cloud, and others. In the experiments, images acquired by the Landsat 7 Enhanced Thematic Mapper Plus (ETM+), containing landscapes of agriculture, snow areas, and islands, are tested. Experimental results demonstrate that the detection accuracy of the proposed method is better than that of related methods.
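The core of the threshold-free approach, replacing hand-tuned thresholds with a trained classifier, can be sketched with scikit-learn. The two features below are synthetic stand-ins for the ACCA/Fmask-derived per-pixel features the paper uses (which are not reproduced here), so this is an illustration of the classification scheme, not of the authors' feature set:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic two-feature training data: "cloud" pixels cluster at high
# values, "clear" pixels at low values (stand-ins for ACCA-style features).
cloud = rng.normal([0.8, 0.7], 0.05, size=(200, 2))
clear = rng.normal([0.2, 0.3], 0.05, size=(200, 2))
X = np.vstack([cloud, clear])
y = np.array([1] * 200 + [0] * 200)

# An RBF-kernel SVM learns the decision boundary from labelled samples,
# so no explicit per-scene threshold has to be chosen.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
pred = clf.predict([[0.82, 0.68], [0.18, 0.33]])  # cloud-like, clear-like
```

In practice each image pixel (or segment) would be mapped to its feature vector and classified the same way; the data-sensitivity of a fixed threshold is replaced by the sensitivity of the training set.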
Fast range estimation based on active range-gated imaging for coastal surveillance
NASA Astrophysics Data System (ADS)
Kong, Qingshan; Cao, Yinan; Wang, Xinwei; Tong, Youwan; Zhou, Yan; Liu, Yuliang
2012-11-01
Coastal surveillance is very important because it supports search and rescue, detection of illegal immigration, harbor security, and similar missions. Furthermore, range estimation is critical for precisely detecting the target. A range-gated laser imaging sensor is suitable for high-accuracy ranging, especially at night and without moonlight. Generally, before the target can be detected, the delay time must be varied until the target is captured. The range-gated imaging sensor has two operating modes: a passive imaging mode and a gate-viewing mode. First, the sensor operates in passive mode, only capturing scenes with the ICCD; once an object appears in the monitored area, we obtain the coarse range of the target from the imaging geometry and projective transform. Then, the sensor switches to gate-viewing mode; applying microsecond laser pulses and matched sensor gate widths, we obtain the range of targets from at least two consecutive images with trapezoid-shaped range-intensity profiles. Based on the first step, we can calculate a rough range value and quickly fix the delay time at which the target is detected. This technique overcomes the depth-resolution limitation of 3D active imaging and enables super-resolution depth mapping with reduced image-data processing. With these two steps, we can quickly obtain the distance between the object and the sensor.
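The basic geometry behind gate viewing is simple: a round-trip time of flight t corresponds to range R = c·t/2, so a gate opened at delay d for width w images the depth slice [c·d/2, c·(d+w)/2]. A minimal sketch of that mapping (the pulse width is ignored here for simplicity; the abstract's trapezoidal-profile method refines this coarse window):

```python
C = 299_792_458.0  # speed of light (m/s)

def gate_window(delay_s, gate_width_s):
    """Depth slice visible for a given gate delay and width: round-trip
    time t maps to range R = C * t / 2, so the gate spans
    [C*d/2, C*(d+w)/2]."""
    return C * delay_s / 2.0, C * (delay_s + gate_width_s) / 2.0

# A 10 us delay with a 1 us gate images roughly the 1.50-1.65 km slice.
near, far = gate_window(10e-6, 1e-6)
```

Scanning the delay until a target appears in the gated image, as the abstract describes, amounts to stepping `delay_s` until the target's range falls inside this window.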
NASA Astrophysics Data System (ADS)
Amato, Gabriele; Eisank, Clemens; Albrecht, Florian
2017-04-01
Landslide detection from Earth observation imagery is an important preliminary step for landslide mapping, landslide inventories, and landslide hazard assessment. In this context, the object-based image analysis (OBIA) concept has been increasingly used over the last decade. Within the framework of the Land@Slide project (Earth observation based landslide mapping: from methodological developments to automated web-based information delivery), a simple, unsupervised, semi-automatic and object-based approach for the detection of shallow landslides has been developed and implemented in the InterIMAGE open-source software. The method was applied to an Alpine case study in western Austria, exploiting spectral information from pansharpened 4-band WorldView-2 satellite imagery (0.5 m spatial resolution) in combination with digital elevation models. First, we divided the image into sub-images, i.e. tiles, and then applied the workflow to each of them without changing the parameters. The workflow was implemented as a top-down approach: at the image-tile level, an over-classification of the potential landslide area was produced; the over-estimated area was then re-segmented and re-classified in several processing cycles until most false-positive objects had been eliminated. In every step, a segmentation based on the Baatz algorithm generates candidate landslide polygons. At the same time, the average values of the normalized difference vegetation index (NDVI) and of brightness are calculated for these polygons; these values are then used as thresholds to perform an object selection that improves the quality of the classification results. Empirically determined values of slope and roughness are also used in the selection process. Results for each tile were merged to obtain the landslide map for the test area. For final validation, the landslide map was compared to a geological map and a supervised landslide classification in order to estimate its accuracy.
Results for the test area showed that the proposed method is capable of accurately distinguishing landslides from roofs and trees. Implementation of the workflow into InterIMAGE was straightforward. We conclude that the method is able to extract landslides in forested areas, but that there is still room for improvements concerning the extraction in non-forested high-alpine regions.
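The NDVI/brightness selection step described in the abstract can be sketched as follows. The selection rule (mean NDVI below the scene average, mean brightness above it, suggesting bare, bright surfaces) is an illustrative simplification; the actual workflow also uses slope and roughness thresholds and iterates over several segmentation cycles:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

def candidate_segments(labels, ndvi_img, brightness_img):
    """Keep segment IDs whose mean NDVI is below the image mean and whose
    mean brightness is above it (sparsely vegetated, bright surfaces)."""
    ndvi_thr, bright_thr = ndvi_img.mean(), brightness_img.mean()
    keep = []
    for seg_id in np.unique(labels):
        m = labels == seg_id
        if ndvi_img[m].mean() < ndvi_thr and brightness_img[m].mean() > bright_thr:
            keep.append(int(seg_id))
    return keep

# Toy scene with two segments: segment 0 vegetated, segment 1 bare/bright.
labels = np.array([[0, 0], [1, 1]])
nd = np.array([[0.8, 0.8], [0.1, 0.1]])
br = np.array([[0.2, 0.2], [0.9, 0.9]])
```

Here the thresholds are derived from the image itself, which is what makes the approach transferable across tiles without re-tuning parameters.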
NASA Astrophysics Data System (ADS)
Vu, Tinh Thi; Kiesel, Jens; Guse, Bjoern; Fohrer, Nicola
2017-04-01
The damming of rivers causes one of the most considerable impacts of our society on the riverine environment. More than 50% of the world's streams and rivers are currently impounded by dams before reaching the oceans. The construction of dams is of high importance in developing and emerging countries, i.e. for power generation and water storage. In the Vietnamese Vu Gia - Thu Bon Catchment (10,350 km2), about 23 dams were built during the last decades and store approximately 2,156 billion m3 of water. The water impoundment in 10 dams in upstream regions amounts to 17% of the annual discharge volume. It is expected that impacts from these dams have altered the natural flow regime. However, up to now it has been unclear how the flow regime was altered. For this, it needs to be investigated at what point in time these changes became significant and detectable. Many approaches exist to detect changes in the stationarity or consistency of hydrological records using statistical analysis of time series for the pre- and post-dam periods. The objective of this study is to reliably detect and assess hydrologic shifts occurring in the discharge regime of an anthropogenically influenced river basin, mainly affected by the construction of dams. To achieve this, we applied nine available change-point tests to detect changes in mean, variance, and median on the daily and annual discharge records at two main gauges of the basin. The tests yield conflicting results: the majority of tests found abrupt changes that coincide with the damming period, while others did not. To interpret how significant the changes in the discharge regime are, and to which properties of the time series each test responded, we calculated Indicators of Hydrologic Alteration (IHAs) for the time periods before and after the detected change points.
From the results we can deduce that the change-point tests are influenced to different degrees by different indicator groups (magnitude, duration, frequency, etc.) and that, within the indicator groups, some indicators are more sensitive than others. For instance, extreme low flow, especially the 7- and 30-day minima and the mean minimum low flow, as well as the variability of monthly flow, are highly sensitive to most detected change points. Our study clearly shows that the detected change points depend on which test is chosen. For an objective assessment of change points, it is therefore necessary to explain the change points by calculating differences in IHAs. This analysis can be used to assess which change-point method reacts to which type of hydrological change and, more importantly, to rank the change points according to their overall impact on the discharge regime. This leads to an improved evaluation of hydrologic change points caused by anthropogenic impacts.
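One of the simplest members of the family of change-point tests this study compares is a single mean-shift detector: choose the split index that minimizes the pooled sum of squared deviations of the two segments. This is a generic sketch, not one of the nine specific tests the authors applied; the synthetic series below mimics an annual-discharge record whose mean drops after dam construction:

```python
import numpy as np

def mean_shift_changepoint(x):
    """Return the split index k that minimizes the summed within-segment
    squared deviations of x[:k] and x[k:] (single change in the mean)."""
    x = np.asarray(x, float)
    best_k, best_cost = None, np.inf
    for k in range(1, len(x)):
        cost = ((x[:k] - x[:k].mean()) ** 2).sum() \
             + ((x[k:] - x[k:].mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic record: 30 "pre-dam" years around 100 m3/s, then 25 years
# around 70 m3/s after impoundment.
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(100, 5, 30), rng.normal(70, 5, 25)])
k = mean_shift_changepoint(series)
```

Different tests respond to different properties (mean, variance, median), which is exactly why, as the abstract notes, they can disagree on the same record and why IHAs are needed to interpret the detected change points.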
Levin, Daniel T; Drivdahl, Sarah B; Momen, Nausheen; Beck, Melissa R
2002-12-01
Recently, a number of experiments have emphasized the degree to which subjects fail to detect large changes in visual scenes. This finding, referred to as "change blindness," is often considered surprising because many people have the intuition that such changes should be easy to detect. Prior work documented this intuition by showing that the majority of subjects believe they would notice changes that are actually very rarely detected. Thus subjects exhibit a metacognitive error we refer to as "change blindness blindness" (CBB). Here, we test whether CBB is caused by a misestimation of the perceptual experience associated with visual changes and show that it persists even when the pre- and postchange views are separated by long delays. In addition, subjects overestimate their change detection ability both when the relevant changes are illustrated by still pictures and when they are illustrated using videos showing the changes occurring in real time. We conclude that CBB is a robust phenomenon that cannot be accounted for by failure to understand the specific perceptual experience associated with a change. Copyright 2002 Elsevier Science (USA)
Exploration of Objective Functions for Optimal Placement of Weather Stations
NASA Astrophysics Data System (ADS)
Snyder, A.; Dietterich, T.; Selker, J. S.
2016-12-01
Many regions of Earth lack ground-based sensing of weather variables. For example, most countries in Sub-Saharan Africa do not have reliable weather station networks. This absence of sensor data has many consequences ranging from public safety (poor prediction and detection of severe weather events), to agriculture (lack of crop insurance), to science (reduced quality of world-wide weather forecasts, climate change measurement, etc.). The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to locate each weather station. We can formulate this as the following optimization problem: Determine a set of N sites that jointly optimize the value of an objective function. The purpose of this poster is to propose and assess several objective functions. In addition to standard objectives (e.g., minimizing the summed squared error of interpolated values over the entire region), we consider objectives that minimize the maximum error over the region and objectives that optimize the detection of extreme events. An additional issue is that each station measures more than 10 variables—how should we balance the accuracy of our interpolated maps for each variable? Weather sensors inevitably drift out of calibration or fail altogether. How can we incorporate robustness to failed sensors into our network design? Another important requirement is that the network should make it possible to detect failed sensors by comparing their readings with those of other stations. How can this requirement be met? Finally, we provide an initial assessment of the computational cost of optimizing these various objective functions. 
We invite everyone to join the discussion at our poster by proposing additional objectives, identifying additional issues to consider, and expanding our bibliography of relevant papers. A prize (derived from grapes grown in Oregon) will be awarded for the most insightful contribution to the discussion!
Infants' Detection of Correlated Features among Social Stimuli: A Precursor to Stereotyping?
ERIC Educational Resources Information Center
Levy, Gary D.; And Others
This study examined the abilities of 10-month-old infants to detect correlations between objects and persons based on the characteristic of gender. A total of 32 infants were habituated to six stimuli in which a picture of a male or female face was paired with one of six objects such as a football or frying pan. Three objects were associated with…
NASA Astrophysics Data System (ADS)
Martinis, Sandro; Clandillon, Stephen; Twele, André; Huber, Claire; Plank, Simon; Maxant, Jérôme; Cao, Wenxi; Caspard, Mathilde; May, Stéphane
2016-04-01
Optical and radar satellite remote sensing have proven to provide essential crisis information in case of natural disasters, humanitarian relief activities and civil security issues in a growing number of cases through mechanisms such as the Copernicus Emergency Management Service (EMS) of the European Commission or the International Charter 'Space and Major Disasters'. The aforementioned programs and initiatives make use of satellite-based rapid mapping services aimed at delivering reliable and accurate crisis information after natural hazards. Although these services are increasingly operational, they need to be continuously updated and improved through research and development (R&D) activities. The principal objective of ASAPTERRA (Advancing SAR and Optical Methods for Rapid Mapping), the ESA-funded R&D project being described here, is to improve, automate and, hence, speed-up geo-information extraction procedures in the context of natural hazards response. This is performed through the development, implementation, testing and validation of novel image processing methods using optical and Synthetic Aperture Radar (SAR) data. The methods are mainly developed based on data of the German radar satellites TerraSAR-X and TanDEM-X, the French satellite missions Pléiades-1A/1B as well as the ESA missions Sentinel-1/2 with the aim to better characterize the potential and limitations of these sensors and their synergy. The resulting algorithms and techniques are evaluated in real case applications during rapid mapping activities. The project is focussed on three types of natural hazards: floods, landslides and fires. Within this presentation an overview of the main methodological developments in each topic is given and demonstrated in selected test areas. 
The following developments are presented in the context of flood mapping: a fully automated Sentinel-1 based processing chain for detecting open flood surfaces; a method for the improved detection of flooded vegetation in Sentinel-1 data using Entropy/Alpha decomposition, unsupervised Wishart classification, and object-based post-classification; as well as semi-automatic approaches for extracting inundated areas and flood traces in rural and urban areas from VHR and HR optical imagery using machine learning techniques. Methodological developments related to fires are the implementation of fast and robust methods for mapping burnt scars through change detection procedures using SAR (Sentinel-1, TerraSAR-X) and HR optical (e.g. SPOT, Sentinel-2) data, as well as the extraction of 3D surface and volume change information from Pléiades stereo-pairs. In the context of landslides, fast and transferable change detection procedures based on SAR (TerraSAR-X) and optical (SPOT) data, as well as methods for extracting the extent of landslides based only on polarimetric VHR SAR (TerraSAR-X) data, are presented.
Changes in heat waves indices in Romania over the period 1961-2015
NASA Astrophysics Data System (ADS)
Croitoru, Adina-Eliza; Piticar, Adrian; Ciupertea, Antoniu-Flavius; Roşca, Cristina Florina
2016-11-01
In the last two decades many climate change studies have focused on extreme temperatures, as they have a significant impact on environment and society. Among the weather events generated by extreme temperatures, heat waves are some of the most harmful. The main objective of this study was to detect and analyze changes in heat waves in Romania based on daily observation data (maximum and minimum temperature) over the extended summer period (May-September) using a set of 10 indices, and to explore the spatial patterns of those changes. Heat wave data series were derived from daily maximum and minimum temperature data sets recorded at 29 weather stations across Romania over a 55-year period (1961-2015). The threshold chosen was the 90th percentile calculated over a 15-day window centered on each calendar day, for three baseline periods (1961-1990, 1971-2000, and 1981-2010). Two heat wave definitions were considered: at least three consecutive days on which maximum temperature exceeds the 90th percentile, and at least three consecutive days on which minimum temperature exceeds the 90th percentile. For each of them, five variables were calculated: amplitude, magnitude, number of events, duration, and frequency. Finally, 10 indices resulted for further analysis. The main results are: most of the indices have statistically significant increasing trends; only one index at one weather station indicated a statistically significant decreasing trend; the changes are more intense for heat waves detected from maximum temperature than for those identified from minimum temperature; and the western and central regions of Romania are the most exposed to increasing heat waves.
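The heat-wave definition used here (at least three consecutive days above a 90th-percentile threshold) reduces to run-length detection over a boolean series. A minimal sketch with a fixed threshold standing in for the calendar-day percentile the study actually computes from a 15-day window and a baseline period:

```python
import numpy as np

def heat_wave_events(temps, threshold, min_len=3):
    """Return (start_index, length) for each run of at least `min_len`
    consecutive days with temperature above `threshold`."""
    hot = np.asarray(temps) > threshold
    events, start = [], None
    for i, h in enumerate(hot):
        if h and start is None:
            start = i                      # a hot spell begins
        elif not h and start is not None:
            if i - start >= min_len:       # spell long enough to count
                events.append((start, i - start))
            start = None
    if start is not None and len(hot) - start >= min_len:
        events.append((start, len(hot) - start))   # spell runs to the end
    return events

# Daily maxima (degC); with a 32 degC stand-in threshold this contains a
# 3-day wave, a 2-day spell (too short to qualify), and a 4-day wave.
temps = [28, 33, 34, 35, 29, 36, 37, 30, 31, 38, 39, 40, 41]
events = heat_wave_events(temps, 32)
```

From such event lists, the study's five variables (amplitude, magnitude, number of events, duration, frequency) can be aggregated per station and season.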
EEG signatures accompanying auditory figure-ground segregation
Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István
2017-01-01
In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185
Streak detection and analysis pipeline for space-debris optical images
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim
2016-04-01
We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. 
For long streaks (length > 100 pixels), the primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for both scenarios for bright streaks (SNR > 1), while in the low-SNR regime the sensitivity is still 50% at SNR = 0.5.
Nonstationary EO/IR Clutter Suppression and Dim Object Tracking
2010-01-01
Brown, A., and Brown, J., Enhanced Algorithms for EO/IR Electronic Stabilization, Clutter Suppression, and Track-Before-Detect for Multiple Low... estimation-suppression and nonlinear filtering-based multiple-object track-before-detect. These algorithms are suitable for integration into... In such cases, it is imperative to develop efficient real or near-real-time tracking-before-detection methods. This paper continues the work started
Computer vision, camouflage breaking and countershading
Tankus, Ariel; Yeshurun, Yehezkel
2008-01-01
Camouflage is frequently used in the animal kingdom in order to conceal oneself from visual detection or surveillance. Many camouflage techniques are based on masking the familiar contours and texture of the subject by superposition of multiple edges on top of it. This work presents an operator, Darg, for the detection of three-dimensional smooth convex (or, equivalently, concave) objects. It can be used to detect curved objects on a relatively flat background, regardless of image edges, contours and texture. We show that a typical camouflage found in some animal species seems to be a ‘countermeasure’ taken against detection that might be based on our method. Detection by Darg is shown to be very robust, from both theoretical considerations and practical examples of real-life images. PMID:18990669
Cox, Jolene A; Beanland, Vanessa; Filtness, Ashleigh J
2017-10-03
The ability to detect changing visual information is a vital component of safe driving. In addition to detecting changing visual information, drivers must also interpret its relevance to safety. Environmental changes considered to have high safety relevance will likely demand greater attention and more timely responses than those considered to have lower safety relevance. The aim of this study was to explore factors that are likely to influence perceptions of risk and safety regarding changing visual information in the driving environment. Factors explored were the environment in which the change occurs (i.e., urban vs. rural), the type of object that changes, and the driver's age, experience, and risk sensitivity. Sixty-three licensed drivers aged 18-70 years completed a hazard rating task, which required them to rate the perceived hazardousness of changing specific elements within urban and rural driving environments. Three attributes of potential hazards were systematically manipulated: the environment (urban, rural); the type of object changed (road sign, car, motorcycle, pedestrian, traffic light, animal, tree); and its inherent safety risk (low risk, high risk). Inherent safety risk was manipulated by either varying the object's placement, on/near or away from the road, or altering an infrastructure element that would require a change to driver behavior. Participants also completed two driving-related risk perception tasks, rating their relative crash risk and perceived risk of aberrant driving behaviors. Driver age was not significantly associated with hazard ratings, but individual differences in perceived risk of aberrant driving behaviors predicted hazard ratings, suggesting that general driving-related risk sensitivity plays a strong role in safety perception. 
In both urban and rural scenes, there were significant associations between hazard ratings and inherent safety risk, with low-risk changes perceived as consistently less hazardous than high-risk changes; however, the effect was larger for urban environments. There were also effects of object type, with certain objects rated as consistently more safety-relevant. In urban scenes, changes involving pedestrians were rated significantly more hazardous than all other objects, and in rural scenes, changes involving animals were rated as significantly more hazardous. Notably, hazard ratings were found to be higher in urban compared with rural driving environments, even when changes were matched between environments. This study demonstrates that drivers perceive rural roads as less risky than urban roads, even when similar scenarios occur in both environments. Age did not affect hazard ratings. Instead, the findings suggest that the assessment of risk posed by hazards is influenced more by individual differences in risk sensitivity. This highlights the need for driver education to account for appraisal of hazards' risk and relevance, in addition to hazard detection, when considering factors that promote road safety.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. Compared with previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found efficient under severe noise conditions.
Yang, Cheng-Ta
2011-12-01
Change detection requires perceptual comparison and decision processes on different features of multiattribute objects. How relative salience between two feature-changes influences the processes has not been addressed. This study used the systems factorial technology to investigate the processes when detecting changes in a Gabor patch with visual inputs from orientation and spatial frequency channels. Two feature-changes were equally salient in Experiment 1, but a frequency-change was more salient than an orientation-change in Experiment 2. Results showed that all four observers adopted parallel self-terminating processing with limited- to unlimited-capacity processing in Experiment 1. In Experiment 2, one observer used parallel self-terminating processing with unlimited-capacity processing, and the others adopted serial self-terminating processing with limited- to unlimited-capacity processing to detect changes. Postexperimental interview revealed that subjective utility of feature information underlay the adoption of a decision strategy. These results highlight that observers alter decision strategies in change detection depending on the relative saliency in change signals, with relative saliency being determined by both physical salience and subjective weight of feature information. When relative salience exists, individual differences in the process characteristics emerge.
Balhara, Yatan Pal Singh; Jain, Raka
2013-01-01
Tobacco use has been associated with various carcinomas including lung, esophagus, larynx, mouth, throat, kidney, bladder, pancreas, stomach, and cervix. Biomarkers such as concentration of cotinine in the blood, urine, or saliva have been used as objective measures to distinguish nonusers and users of tobacco products. A change in the cut-off value of urinary cotinine to detect active tobacco use is associated with a change in sensitivity and specificity of detection. The current study aimed at assessing the impact of using different cut-off thresholds of urinary cotinine on sensitivity and specificity of detection of smoking and smokeless tobacco product use among psychiatric patients. All the male subjects attending the psychiatry out-patient department of the tertiary care multispecialty teaching hospital constituted the sample frame for this cross-sectional study. Quantitative urinary cotinine assay was done by using ELISA kits of Calbiotech Inc., USA. We used the receiver operating characteristic (ROC) curve to assess the sensitivity and specificity of various cut-off values of urinary cotinine to identify active smokers and users of smokeless tobacco products. ROC analysis of urinary cotinine levels in detection of self-reported smoking provided an area under curve (AUC) of 0.434. Similarly, the ROC analysis of urinary cotinine levels in detection of self-reported smokeless tobacco use revealed an AUC of 0.44. The highest sensitivity and specificity of 100% for smoking were detected at the urinary cut-off value greater than or equal to 2.47 ng/ml. The choice of cut-off value of urinary cotinine used to distinguish nonusers from active users of tobacco products impacts the sensitivity as well as the specificity of detection.
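The tradeoff described above can be made concrete: sliding the cut-off value up or down moves detections between true positives and false negatives (sensitivity) and between true negatives and false positives (specificity). The sketch below uses made-up cotinine levels, not the study's data; only the qualitative behavior is intended.

```python
def sensitivity_specificity(values, labels, cutoff):
    """Classify values >= cutoff as positive; labels are True for actual users."""
    tp = sum(1 for v, l in zip(values, labels) if v >= cutoff and l)
    fn = sum(1 for v, l in zip(values, labels) if v < cutoff and l)
    tn = sum(1 for v, l in zip(values, labels) if v < cutoff and not l)
    fp = sum(1 for v, l in zip(values, labels) if v >= cutoff and not l)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical urinary cotinine levels (ng/ml); True marks a self-reported user.
levels = [0.5, 1.0, 2.0, 2.3, 2.5, 3.0, 5.0, 8.0]
users  = [False, False, False, False, True, True, True, True]

for cutoff in (1.5, 2.47, 4.0):
    sens, spec = sensitivity_specificity(levels, users, cutoff)
    print(cutoff, sens, spec)
```

With these toy numbers the cut-off of 2.47 ng/ml separates the groups perfectly (sensitivity and specificity both 1.0), while a lower cut-off sacrifices specificity and a higher one sacrifices sensitivity.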
NASA Astrophysics Data System (ADS)
Zhu, Zhe
2017-08-01
The free and open access to all archived Landsat images in 2008 has completely changed the way of using Landsat data. Many novel change detection algorithms based on Landsat time series have been developed. We present a comprehensive review of four important aspects of change detection studies based on Landsat time series, including frequencies, preprocessing, algorithms, and applications. We observed the trend that the more recent the study, the higher the frequency of Landsat time series used. We reviewed a series of image preprocessing steps, including atmospheric correction, cloud and cloud shadow detection, and composite/fusion/metrics techniques. We divided all change detection algorithms into six categories, including thresholding, differencing, segmentation, trajectory classification, statistical boundary, and regression. Within each category, six major characteristics of different algorithms, such as frequency, change index, univariate/multivariate, online/offline, abrupt/gradual change, and sub-pixel/pixel/spatial were analyzed. Moreover, some of the widely-used change detection algorithms were also discussed. Finally, we reviewed different change detection applications by dividing these applications into two categories, change target and change agent detection.
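The two simplest of the six algorithm categories named above, differencing and thresholding, can be combined in a few lines. This is an illustrative sketch on toy single-band "images" (flat lists of pixel values), not a substitute for the calibrated, atmospherically corrected rasters real Landsat studies require.

```python
def change_map(image_t1, image_t2, threshold):
    """Flag a pixel as changed when the absolute band difference exceeds threshold."""
    return [abs(a - b) > threshold for a, b in zip(image_t1, image_t2)]

# Hypothetical reflectance values at two dates; pixels 2 and 4 change.
t1 = [0.20, 0.21, 0.55, 0.52, 0.19]
t2 = [0.21, 0.20, 0.31, 0.50, 0.62]

print(change_map(t1, t2, threshold=0.1))
```

The threshold choice drives the whole result, which is why the review distinguishes thresholding as its own category: trajectory-based and statistical-boundary methods exist largely to avoid picking such a constant by hand.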
Brady, Timothy F; Konkle, Talia; Oliva, Aude; Alvarez, George A
2009-01-01
A large body of literature has shown that observers often fail to notice significant changes in visual scenes, even when these changes happen right in front of their eyes. For instance, people often fail to notice if their conversation partner is switched to another person, or if large background objects suddenly disappear.1,2 These 'change blindness' studies have led to the inference that the amount of information we remember about each item in a visual scene may be quite low.1 However, in recent work we have demonstrated that long-term memory is capable of storing a massive number of visual objects with significant detail about each item.3 In the present paper we attempt to reconcile these findings by demonstrating that observers do not experience 'change blindness' with the real world objects used in our previous experiment if they are given sufficient time to encode each item. The results reported here suggest that one of the major causes of change blindness for real-world objects is a lack of encoding time or attention to each object (see also refs. 4 and 5).
NASA Astrophysics Data System (ADS)
Keyport, Ren N.; Oommen, Thomas; Martha, Tapas R.; Sajinkumar, K. S.; Gierke, John S.
2018-02-01
A comparative analysis of landslides detected by pixel-based and object-oriented analysis (OOA) methods was performed using very high-resolution (VHR) remotely sensed aerial images for San Juan La Laguna, Guatemala, which witnessed widespread devastation during the 2005 Hurricane Stan. A 3-band orthophoto of 0.5 m spatial resolution together with a 115 field-based landslide inventory were used for the analysis. A binary reference was assigned with a zero value for landslide and unity for non-landslide pixels. The pixel-based analysis was performed using unsupervised classification, which resulted in 11 different trial classes. Detection of landslides using OOA includes 2-step K-means clustering to eliminate regions based on brightness, and elimination of false positives using object properties such as rectangular fit, compactness, length/width ratio, mean difference of objects, and slope angle. Both overall accuracy and F-score for OOA methods outperformed pixel-based unsupervised classification methods in both landslide and non-landslide classes. The overall accuracy for OOA and pixel-based unsupervised classification was 96.5% and 94.3%, respectively, whereas the best F-scores for landslide identification for OOA and pixel-based unsupervised methods were 84.3% and 77.9%, respectively. Results indicate that OOA is able to identify the majority of landslides with few false positives when compared to pixel-based unsupervised classification.
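For readers comparing the accuracy figures above, it may help to recall how an F-score combines precision and recall from a confusion matrix. The counts below are illustrative, not the study's actual confusion matrix.

```python
def f_score(tp, fp, fn):
    """Harmonic mean of precision and recall for the positive (landslide) class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 8 correctly detected landslides, 2 false alarms, 2 misses.
print(f_score(tp=8, fp=2, fn=2))
```

Because the F-score ignores true negatives, it is a stricter measure than overall accuracy for a rare class like landslide pixels, which is why the OOA/pixel-based gap (84.3% vs. 77.9%) is wider than the accuracy gap (96.5% vs. 94.3%).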
Autonomic and Coevolutionary Sensor Networking
NASA Astrophysics Data System (ADS)
Boonma, Pruet; Suzuki, Junichi
Wireless sensor network (WSN) applications are often required to balance the tradeoffs among conflicting operational objectives (e.g., latency and power consumption) and operate at an optimal tradeoff. This chapter proposes and evaluates an architecture, called BiSNET/e, which allows WSN applications to overcome this issue. BiSNET/e is designed to support three major types of WSN applications: data collection, event detection, and hybrid applications. Each application is implemented as a decentralized group of agents, which is analogous to a bee colony (application) consisting of bees (agents). Agents collect sensor data or detect an event (a significant change in sensor reading) on individual nodes, and carry sensor data to base stations. They perform these data collection and event detection functionalities by sensing their surrounding network conditions and adaptively invoking behaviors such as pheromone emission, reproduction, migration, swarming and death. Each agent has its own behavior policy, as a set of genes, which defines how to invoke its behaviors. BiSNET/e allows agents to evolve their behavior policies (genes) across generations and autonomously adapt their performance to given objectives. Simulation results demonstrate that, in all three types of applications, agents evolve to find optimal tradeoffs among conflicting objectives and adapt to dynamic network conditions such as traffic fluctuations and node failures/additions. Simulation results also illustrate that, in hybrid applications, data collection agents and event detection agents coevolve to augment their adaptability and performance.
Liu, Jia; Gong, Maoguo; Qin, Kai; Zhang, Puzhao
2018-03-01
We propose an unsupervised deep convolutional coupling network for change detection based on two heterogeneous images acquired by optical sensors and radars on different dates. Most existing change detection methods are based on homogeneous images. Due to the complementary properties of optical and radar sensors, there is an increasing interest in change detection based on heterogeneous images. The proposed network is symmetric with each side consisting of one convolutional layer and several coupling layers. The two input images connected with the two sides of the network, respectively, are transformed into a feature space where their feature representations become more consistent. In this feature space, the difference map is calculated, which then leads to the ultimate detection map by applying a thresholding algorithm. The network parameters are learned by optimizing a coupling function. The learning process is unsupervised, which is different from most existing change detection methods based on heterogeneous images. Experimental results on both homogenous and heterogeneous images demonstrate the promising performance of the proposed network compared with several existing approaches.
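The final step described above, thresholding a pixel-wise difference map into a binary change map, can be sketched without the coupling network itself. Here the "feature representations" are plain numbers and the threshold is a simple mean-plus-k-standard-deviations rule; the paper's actual thresholding algorithm may differ.

```python
import statistics

def threshold_difference_map(feat_a, feat_b, k=1.0):
    """Binary change map from two aligned feature vectors: a pixel is 'changed'
    when its absolute difference exceeds mean(diff) + k * std(diff)."""
    diff = [abs(a - b) for a, b in zip(feat_a, feat_b)]
    cut = statistics.mean(diff) + k * statistics.pstdev(diff)
    return [d > cut for d in diff]

# Toy per-pixel features from the two transformed images; pixel 2 changed.
feat_a = [0.10, 0.10, 0.90, 0.10]
feat_b = [0.12, 0.09, 0.10, 0.11]
print(threshold_difference_map(feat_a, feat_b))
```

The point of the coupling network is precisely to make the two feature vectors comparable so that such a simple per-pixel rule becomes meaningful across heterogeneous sensors.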
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. In practice the tracked object does not always appear clearly, which makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step is super-resolution imaging applied to the frame sequence, done by cropping several frames or all of the frames. The second step is tracking over the super-resolved images. Super-resolution is a technique to obtain high-resolution images from low-resolution images. In this research a single-frame super-resolution technique, which has the advantage of fast computation time, is proposed for the tracking approach. The method used for tracking is Camshift, whose advantage is a simple histogram-based calculation in HSV color space that remains usable when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track objects precisely across various backgrounds, shape changes of the object, and good lighting conditions.
NASA Astrophysics Data System (ADS)
Grilli, Stéphan T.; Guérin, Charles-Antoine; Shelby, Michael; Grilli, Annette R.; Moran, Patrick; Grosdidier, Samuel; Insua, Tania L.
2017-08-01
In past work, tsunami detection algorithms (TDAs) have been proposed, and successfully applied to offline tsunami detection, based on analyzing tsunami currents inverted from high-frequency (HF) radar Doppler spectra. With this method, however, the detection of small and short-lived tsunami currents in the most distant radar ranges is challenging due to conflicting requirements on the Doppler spectra integration time and resolution. To circumvent this issue, in Part I of this work, we proposed an alternative TDA, referred to as time correlation (TC) TDA, that does not require inverting currents, but instead detects changes in patterns of correlations of radar signal time series measured in pairs of cells located along the main directions of tsunami propagation (predicted by geometric optics theory); such correlations can be maximized when one signal is time-shifted by the pre-computed long wave propagation time. We initially validated the TC-TDA based on numerical simulations of idealized tsunamis in a simplified geometry. Here, we further develop, extend, and apply the TC algorithm to more realistic tsunami case studies. These are performed in the area west of Vancouver Island, BC, where Ocean Networks Canada recently deployed a HF radar (in Tofino, BC), to detect tsunamis from far- and near-field sources, up to a 110 km range. Two case studies are considered, both simulated using long wave models: (1) a far-field seismic tsunami and (2) a near-field landslide tsunami. Pending the availability of radar data, a radar signal simulator is parameterized for the Tofino HF radar characteristics, in particular its signal-to-noise ratio with range, and combined with the simulated tsunami currents to produce realistic time series of backscattered radar signal from a dense grid of cells.
Numerical experiments show that the arrival of a tsunami causes a clear change in radar signal correlation patterns, even at the most distant ranges beyond the continental shelf, thus making an early tsunami detection possible with the TC-TDA. Based on these results, we discuss how the new algorithm could be combined with standard methods proposed earlier, based on a Doppler analysis, to develop a new tsunami detection system based on HF radar data, that could increase warning time. This will be the object of future work, which will be based on actual, rather than simulated, radar data.
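The core of the TC idea described above is that the signal in a far cell should correlate best with the near-cell signal shifted by the long-wave propagation time between them. The sketch below illustrates this with synthetic sinusoidal signals and a brute-force lag search; real radar time series, noise handling, and the geometric-optics propagation paths are of course far richer.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(near, far, max_lag):
    """Lag (in samples) that maximizes correlation of near vs. the shifted far signal."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(far) - lag
        scores[lag] = pearson(near[:n], far[lag:lag + n])
    return max(scores, key=scores.get)

# A wave that reaches the far cell 3 samples after the near cell:
near = [math.sin(0.4 * t) for t in range(50)]
far = [math.sin(0.4 * (t - 3)) for t in range(50)]
print(best_lag(near, far, max_lag=6))   # recovers the 3-sample propagation delay
```

In the TC-TDA, it is a change in this correlation pattern, the lag structure departing from the quiescent sea state, that signals a tsunami arrival, rather than the lag value itself.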
Algorithms for Autonomous Plume Detection on Outer Planet Satellites
NASA Astrophysics Data System (ADS)
Lin, Y.; Bunte, M. K.; Saripalli, S.; Greeley, R.
2011-12-01
We investigate techniques for automated detection of geophysical events (i.e., volcanic plumes) from spacecraft images. The algorithms presented here have not been previously applied to detection of transient events on outer planet satellites. We apply Scale Invariant Feature Transform (SIFT) to raw images of Io and Enceladus from the Voyager, Galileo, Cassini, and New Horizons missions. SIFT produces distinct interest points in every image; feature descriptors are reasonably invariant to changes in illumination, image noise, rotation, scaling, and small changes in viewpoint. We classified these descriptors as plumes using the k-nearest neighbor (KNN) algorithm. In KNN, an object is classified by its similarity to examples in a training set of images based on user defined thresholds. Using the complete database of Io images and a selection of Enceladus images where 1-3 plumes were manually detected in each image, we successfully detected 74% of plumes in Galileo and New Horizons images, 95% in Voyager images, and 93% in Cassini images. Preliminary tests yielded some false positive detections; further iterations will improve performance. In images where detections fail, plumes are less than 9 pixels in size or are lost in image glare. We compared the appearance of plumes and illuminated mountain slopes to determine the potential for feature classification. We successfully differentiated features. An advantage over other methods is the ability to detect plumes in non-limb views where they appear in the shadowed part of the surface; improvements will enable detection against the illuminated background surface where gradient changes would otherwise preclude detection. This detection method has potential applications to future outer planet missions for sustained plume monitoring campaigns and onboard automated prioritization of all spacecraft data. 
The complementary nature of this method is such that it could be used in conjunction with edge detection algorithms to increase effectiveness. We have demonstrated an ability to detect transient events above the planetary limb and on the surface and to distinguish feature classes in spacecraft images.
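The KNN classification step described above can be sketched compactly: a feature descriptor is labeled "plume" when most of its k closest training descriptors are plumes. The descriptors here are short toy vectors, not real 128-dimensional SIFT output, and the labels and values are invented for illustration.

```python
import math
from collections import Counter

def knn_classify(query, training, k=3):
    """training: list of (vector, label) pairs; returns majority label of the k nearest."""
    nearest = sorted(training, key=lambda item: math.dist(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D descriptors for two feature classes the abstract distinguishes.
training = [
    ([0.90, 0.80], "plume"), ([0.85, 0.90], "plume"), ([0.80, 0.75], "plume"),
    ([0.10, 0.20], "slope"), ([0.15, 0.10], "slope"), ([0.20, 0.25], "slope"),
]
print(knn_classify([0.82, 0.80], training))
```

The user-defined thresholds mentioned in the abstract would sit on top of this majority vote, e.g. requiring a minimum vote margin or a maximum neighbor distance before accepting a plume detection.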
NASA Astrophysics Data System (ADS)
Liu, Shaoying; King, Michael A.; Brill, Aaron B.; Stabin, Michael G.; Farncombe, Troy H.
2008-02-01
Monte Carlo (MC) is a well-utilized tool for simulating photon transport in single photon emission computed tomography (SPECT) due to its ability to accurately model physical processes of photon transport. As a consequence of this accuracy, it suffers from a relatively low detection efficiency and long computation time. One technique used to improve the speed of MC modeling is the effective and well-established variance reduction technique (VRT) known as forced detection (FD). With this method, photons are followed as they traverse the object under study but are then forced to travel in the direction of the detector surface, whereby they are detected at a single detector location. Another method, called convolution-based forced detection (CFD), is based on the fundamental idea of FD with the exception that detected photons are detected at multiple detector locations and determined with a distance-dependent blurring kernel. In order to further increase the speed of MC, a method named multiple projection convolution-based forced detection (MP-CFD) is presented. Rather than forcing photons to hit a single detector, the MP-CFD method follows the photon transport through the object but then, at each scatter site, forces the photon to interact with a number of detectors at a variety of angles surrounding the object. This way, it is possible to simulate all the projection images of a SPECT simulation in parallel, rather than as independent projections. The result of this is vastly improved simulation time as much of the computation load of simulating photon transport through the object is done only once for all projection angles. The results of the proposed MP-CFD method agree well with the experimental data in measurements of point spread function (PSF), producing a correlation coefficient (r2) of 0.99 compared to experimental data. The speed of MP-CFD is shown to be about 60 times faster than a regular forced detection MC program with similar results.
INFRARED- BASED BLINK DETECTING GLASSES FOR FACIAL PACING: TOWARDS A BIONIC BLINK
Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T
2015-01-01
IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step towards reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN Standard safety glasses were equipped with an infrared (IR) emitter/detector pair oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed. SETTING Tertiary care Facial Nerve Center. PARTICIPANTS 24 healthy volunteers. MAIN OUTCOME MEASURE Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted gaze from central to far peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze, but generated false-detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related lid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6.3% of the time during lateral eye movements, 10.4% during upward movements, 46.5% during downward movements, and 5.6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions disrupted sensor output if they caused substantial squinting or shifted the glasses. 
CONCLUSION AND RELEVANCE Our blink detection system provides a reliable, non-invasive indication of eyelid closure using an invisible light beam passing in front of the eye. Future versions will aim to mitigate detection errors by using multiple IR emitter/detector pairs mounted on the glasses, and alternative frame designs may reduce shifting of the sensors relative to the eye during facial movements. PMID:24699708
Ye, Tao; Zhou, Fuqiang
2015-04-10
When imaged by detectors, space targets (including satellites and debris) and background stars have similar point-spread functions, and both objects appear to change as detectors track targets. Therefore, traditional tracking methods cannot separate targets from stars and cannot directly recognize targets in 2D images. Consequently, we propose an autonomous space target recognition and tracking approach using a star sensor technique and a Kalman filter (KF). A two-step method for subpixel-scale detection of star objects (including stars and targets) is developed, and the combination of the star sensor technique and a KF is used to track targets. The experimental results show that the proposed method is adequate for autonomously recognizing and tracking space targets.
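The Kalman filter component of the tracking approach above can be illustrated with a one-dimensional constant-velocity filter. This is a deliberately simplified scalar-covariance sketch with toy values, not the paper's star-sensor pipeline, which operates on subpixel detections in 2D images.

```python
class Kalman1D:
    """Minimal 1-D constant-velocity Kalman filter with a scalar error covariance."""

    def __init__(self, pos, vel, p=1.0, q=0.01, r=0.25):
        self.x = [pos, vel]      # state: position, velocity
        self.p = p               # scalar error covariance (simplified)
        self.q, self.r = q, r    # process / measurement noise variances

    def step(self, z, dt=1.0):
        # Predict: propagate position by the current velocity estimate.
        pred = self.x[0] + self.x[1] * dt
        self.p += self.q
        # Update: blend prediction with measurement z via the Kalman gain.
        k = self.p / (self.p + self.r)
        self.x[0] = pred + k * (z - pred)
        self.x[1] += k * (z - pred) / dt
        self.p *= (1 - k)
        return self.x[0]

# Track a target drifting one unit per frame with noiseless measurements.
kf = Kalman1D(pos=0.0, vel=1.0)
estimates = [kf.step(float(z)) for z in range(1, 11)]
print(estimates[-1])
```

A full tracker would run one such filter per target (in two dimensions), using the predicted position to associate each new star-sensor detection with the correct track before updating.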
NASA Astrophysics Data System (ADS)
Nampak, Haleh; Pradhan, Biswajeet
2016-07-01
Soil erosion is a common land degradation problem worldwide because of its economic and environmental impacts. Therefore, land-use change detection has become one of the major concerns of geomorphologists, environmentalists, and land use planners due to its impact on natural ecosystems. The objective of this paper is to evaluate the relationship between land use/cover changes and land degradation in the Cameron Highlands (Malaysia) through multi-temporal remotely sensed satellite images and ancillary data. Land clearing in the study area has resulted in increased soil erosion due to rainfall events. Unsustainable development and agriculture, mismanagement, and a lack of policies also contribute to increasing soil erosion rates. The LULC distribution of the study area was mapped for 2005, 2010, and 2015 through SPOT-5 satellite imagery data which were classified based on object-based classification. A soil erosion model was also used within a GIS in order to study the susceptibility of the areas affected by changes to overland flow and rain splash erosion. The model consists of four parameters, namely soil erodibility, slope, vegetation cover and overland flow. The results of this research will be used in the selection of the areas that require mitigation processes which will reduce their degrading potential. Key words: Land degradation, Geospatial, LULC change, Soil erosion modelling, Cameron Highlands.
NASA Technical Reports Server (NTRS)
Smith, R. F.; Stanton, K.; Stoop, D.; Brown, D.; Janusz, W.; King, P.
1977-01-01
The objectives of Skylab Experiment M093 were to measure electrocardiographic signals during space flight, to elucidate the electrophysiological basis for the changes observed, and to assess the effect of the change on the human cardiovascular system. Vectorcardiographic methods were used to quantitate changes, standardize data collection, and to facilitate reduction and statistical analysis of data. Since the Skylab missions provided a unique opportunity to study the effects of prolonged weightlessness on human subjects, an effort was made to construct a data base that contained measurements taken with precision and in adequate number to enable conclusions to be made with a high degree of confidence. Standardized exercise loads were incorporated into the experiment protocol to increase the sensitivity of the electrocardiogram for effects of deconditioning and to detect susceptibility to arrhythmias.
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in infrared (IR) is challenging because a rough sea is seen as a dynamic background of moving objects with size, shape, and temperature similar to those of the floating mine. In this paper we have applied a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, with only few prior assumptions about the physical properties of the scene.
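The simplest background-subtraction baseline against which methods like ViBe are compared is a per-pixel running average: the background adapts slowly by exponential averaging, and pixels that deviate from it are flagged as foreground. The sketch below runs on toy one-row "frames"; ViBe and behaviour subtraction are far more sophisticated, this only illustrates the principle.

```python
def detect_foreground(frames, alpha=0.5, threshold=0.3):
    """Return one boolean foreground mask per frame (after the first), with the
    background model updated by exponential averaging after each comparison."""
    background = list(frames[0])
    masks = []
    for frame in frames[1:]:
        mask = [abs(p - b) > threshold for p, b in zip(frame, background)]
        background = [alpha * p + (1 - alpha) * b for p, b in zip(frame, background)]
        masks.append(mask)
    return masks

# Toy IR pixel stream: a calm sea, then an object appears at pixel 2.
sea = [[0.50, 0.50, 0.50], [0.52, 0.50, 0.50], [0.50, 0.51, 0.95]]
print(detect_foreground(sea))
```

The weakness this baseline exposes is exactly the paper's point: wave crests also deviate from a slowly adapting average, so methods that model the temporal behaviour of each pixel, rather than a single mean, are needed to separate mines from sea clutter.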
Detection technique for artificially illuminated objects in the outer solar system and beyond.
Loeb, Abraham; Turner, Edwin L
2012-04-01
Existing and planned optical telescopes and surveys can detect artificially illuminated objects, comparable in total brightness to a major terrestrial city, at the outskirts of the Solar System. Orbital parameters of Kuiper belt objects (KBOs) are routinely measured to exquisite precisions of <10^-3. Here, we propose to measure the variation of the observed flux F from such objects as a function of their changing orbital distances D. Sunlight-illuminated objects will show a logarithmic slope α ≡ (d log F/d log D) = -4, whereas artificially illuminated objects should exhibit α = -2. The proposed Large Synoptic Survey Telescope (LSST) and other planned surveys will provide superb data and allow measurement of α for thousands of KBOs. If objects with α = -2 are found, follow-up observations could measure their spectra to determine whether they are illuminated by artificial lighting. The search can be extended beyond the Solar System with future generations of telescopes on the ground and in space that would have the capacity to detect phase modulation due to very strong artificial illumination on the nightside of planets as they orbit their parent stars.
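The proposed discriminant is just a slope fit in log-log space: reflected sunlight falls as D^-4 (the D^-2 dilution of sunlight reaching the object times the D^-2 dilution of the light it reflects back to us), while a self-luminous source falls only as D^-2. The distances and fluxes below are synthetic, generated from those two power laws.

```python
import math

def log_log_slope(distances, fluxes):
    """Least-squares slope of log F against log D, i.e. an estimate of alpha."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(f) for f in fluxes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

d = [30.0, 35.0, 40.0, 45.0]               # hypothetical KBO distances (AU)
sunlit = [d_i ** -4 for d_i in d]          # reflected sunlight: alpha = -4
artificial = [d_i ** -2 for d_i in d]      # intrinsic source: alpha = -2

print(round(log_log_slope(d, sunlit)))     # -4
print(round(log_log_slope(d, artificial))) # -2
```

With real survey photometry the fit would also need to account for phase-angle effects and measurement noise, but the α = -4 versus α = -2 separation is large enough that a simple regression over an orbit's range of distances suffices in principle.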
Modeling Patterns of Activities using Activity Curves
Dawadi, Prafulla N.; Cook, Diane J.; Schmitter-Edgecombe, Maureen
2016-01-01
Pervasive computing offers an unprecedented opportunity to unobtrusively monitor behavior and use the large amount of collected data to perform analysis of activity-based behavioral patterns. In this paper, we introduce the notion of an activity curve, which represents an abstraction of an individual’s normal daily routine based on automatically-recognized activities. We propose methods to detect changes in behavioral routines by comparing activity curves and use these changes to analyze the possibility of changes in cognitive or physical health. We demonstrate our model and evaluate our change detection approach using a longitudinal smart home sensor dataset collected from 18 smart homes with older adult residents. Finally, we demonstrate how big data-based pervasive analytics such as activity curve-based change detection can be used to perform functional health assessment. Our evaluation indicates that correlations do exist between behavior and health changes and that these changes can be automatically detected using smart homes, machine learning, and big data-based pervasive analytics. PMID:27346990
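A minimal way to make the activity-curve idea concrete is to represent a period as a distribution of time spent per recognized activity and score change between two periods with a simple distance. The paper's actual curve model and change test are richer; the activity names and minutes below are illustrative only.

```python
def activity_curve(minutes_per_activity):
    """Normalize raw minutes into a probability distribution over activities."""
    total = sum(minutes_per_activity.values())
    return {a: m / total for a, m in minutes_per_activity.items()}

def curve_distance(curve_a, curve_b):
    """Total-variation distance between two activity distributions (0 = identical)."""
    acts = set(curve_a) | set(curve_b)
    return 0.5 * sum(abs(curve_a.get(a, 0) - curve_b.get(a, 0)) for a in acts)

baseline = activity_curve({"sleep": 480, "cook": 60, "walk": 60})
recent = activity_curve({"sleep": 540, "cook": 30, "walk": 30})

# A growing distance between baseline and recent curves would prompt
# a closer look at possible cognitive or physical health changes.
print(curve_distance(baseline, recent) > 0.05)
```

In the smart-home setting, such distances would be computed per time-of-day segment and tracked longitudinally, so that sustained drift, rather than a single unusual day, triggers a health-assessment flag.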
Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera
NASA Astrophysics Data System (ADS)
Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.
2017-09-01
Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm for video sequences, which helps establish the advantages of the method in use. The proposed framework compares the percentages of correct and incorrect detections produced by the algorithm. The method was evaluated on data collected in the field of urban transport, comprising cars and pedestrians in a fixed-camera setting. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
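As a sketch of the kind of evaluation described (correct versus wrong detection percentages), the hypothetical helper below tallies detector outcomes against ground truth. The pair-based input format is an assumption made for illustration only.

```python
def detection_rates(results):
    """results: list of (detected: bool, is_true_object: bool) pairs.
    Returns (correct detection %, wrong detection %)."""
    tp = sum(1 for d, t in results if d and t)       # true positives
    fp = sum(1 for d, t in results if d and not t)   # false alarms
    fn = sum(1 for d, t in results if not d and t)   # misses
    correct_pct = 100.0 * tp / (tp + fn) if tp + fn else 0.0
    wrong_pct = 100.0 * fp / (tp + fp) if tp + fp else 0.0
    return correct_pct, wrong_pct
```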
Mackenzie River Delta morphological change based on Landsat time series
NASA Astrophysics Data System (ADS)
Vesakoski, Jenni-Mari; Alho, Petteri; Gustafsson, David; Arheimer, Berit; Isberg, Kristina
2015-04-01
Arctic rivers are sensitive and still largely unexplored river systems on which climate change will have an impact. Research has not focused in detail on the fluvial geomorphology of Arctic rivers, mainly due to the remoteness and extent of the watersheds, problems with data availability, and difficult accessibility. Nowadays, wide collaborative spatial databases in hydrology as well as extensive remote sensing datasets over the Arctic are available, and they enable improved investigation of Arctic watersheds. It is therefore also important to develop and improve methods for detecting fluvio-morphological processes from the available data. Furthermore, it is essential to reconstruct and improve understanding of past fluvial processes in order to better understand prevailing and future fluvial processes. In this study we summarize the fluvial geomorphological change in the Mackenzie River Delta during the last ~30 years. The Mackenzie River Delta (~13,000 km2) is situated in the Northwest Territories, Canada, where the Mackenzie River enters the Beaufort Sea (Arctic Ocean) near the town of Inuvik. The Mackenzie River Delta is a lake-rich, productive ecosystem and an ecologically sensitive environment. The research objective is achieved through two sub-objectives: 1) interpretation of deltaic river channel planform change by applying a Landsat time series; 2) identification of the variables that have most influenced the detected changes, by applying statistics and long hydrological time series derived from the Arctic-HYPE model (Hydrological Predictions for the Environment) developed by the Swedish Meteorological and Hydrological Institute. According to our satellite interpretation, field observations, and statistical analyses, notable spatio-temporal changes have occurred in the morphology of the river channel and delta during the past 30 years. For example, the channels have increased in braiding and sinuosity.
In addition, various linkages were found between the studied explanatory variables, such as land cover, precipitation, evaporation, discharge, snow mass, and temperature. The significance of this research is underscored by the growing population, increasing tourism, and expanding economic activity in the Arctic, driven largely by ongoing climate change and technological development.
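One common way to interpret deltaic channel planform change from Landsat imagery, sketched here under the assumption that a water mask is derived by thresholding NDWI on the green and near-infrared bands, is to difference water masks between two acquisition dates. This is an illustrative approach, not necessarily the workflow used in the study above.

```python
import numpy as np

def water_mask(green, nir, thresh=0.0):
    """NDWI water mask: (G - NIR) / (G + NIR) > thresh."""
    g = np.asarray(green, float)
    n = np.asarray(nir, float)
    ndwi = (g - n) / np.clip(g + n, 1e-9, None)
    return ndwi > thresh

def planform_change(mask_t0, mask_t1):
    """Fractions of the scene where water channels appeared or
    disappeared between two dates."""
    appeared = np.mean(~mask_t0 & mask_t1)
    disappeared = np.mean(mask_t0 & ~mask_t1)
    return float(appeared), float(disappeared)
```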
Hematology and immunology studies
NASA Technical Reports Server (NTRS)
Kimzey, S. L.; Fischer, C. L.; Johnson, P. C.; Ritzmann, S. E.; Mengel, C. E.
1975-01-01
The hematology and immunology program conducted in support of the Apollo missions was designed to acquire specific laboratory data relative to the assessment of the health status of the astronauts prior to their commitment to space flight. A second objective was to detect and identify any alterations in the normal functions of the immunohematologic systems which could be attributed to space flight exposure, and to evaluate the significance of these changes relative to man's continuing participation in space flight missions. Specific changes observed during the Gemini Program formed the basis for the major portion of the hematology-immunology test schedule. Additional measurements were included when their contribution to the overall interpretation of the flight data base became apparent.
Optical Observation, Image-processing, and Detection of Space Debris in Geosynchronous Earth Orbit
NASA Astrophysics Data System (ADS)
Oda, H.; Yanagisawa, T.; Kurosaki, H.; Tagawa, M.
2014-09-01
We report on optical observations and an efficient detection method for space debris in geosynchronous Earth orbit (GEO). We operate our new Australia Remote Observatory (ARO), where an 18 cm optical telescope with a charge-coupled device (CCD) camera covering a 3.14-degree field of view is used for GEO debris survey, and analyse datasets of successive CCD images using the line detection method (Yanagisawa and Nakajima 2005). In our operation, the exposure time of each CCD image is set to 3 seconds (or 5 seconds), and the time interval between CCD shutter openings is about 4.7 seconds (or 6.7 seconds). In the line detection method, a sufficient number of sample objects are taken from each image based on their shape and intensity, which includes not only faint signals but also background noise (we take 500 sample objects from each image in this paper). We then search for a sequence of sample objects aligned in a straight line across the successive images to exclude the noise samples. We succeed in detecting faint signals (down to about 1.8 sigma of the background noise) by applying the line detection method to 18 CCD images. As a result, we detected about 300 GEO objects down to magnitude 15.5 across 5 nights of data. We also calculate the orbits of detected objects using the Simplified General Perturbations Satellite Orbit Model 4 (SGP4), and identify the objects listed in the two-line-element (TLE) catalogue publicly provided by the U.S. Strategic Command (USSTRATCOM). We found that a substantial fraction of our detections are new objects not contained in the catalogue. We conclude that our ARO and detection method achieve highly efficient detection of GEO objects despite the use of a comparatively inexpensive observation and analysis system. We also describe in this paper the image processing specialized for the detection of GEO objects (rather than typical astronomical objects such as stars).
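The line detection idea, searching for sample objects that align in a straight line across successive CCD images, can be sketched as follows. This is a simplified re-implementation from the abstract's description (brute force over endpoint pairs, assuming constant drift rate), not the published algorithm.

```python
from itertools import product

def find_linear_tracks(frames, tol=1.5, min_hits=None):
    """frames: list of lists of (x, y) candidate detections per CCD image.
    A real object drifting at a constant rate appears near a straight
    line through the image sequence; random noise does not."""
    n = len(frames)
    min_hits = n if min_hits is None else min_hits
    tracks = []
    # Try every pairing of a first-frame and last-frame candidate
    for p0, p1 in product(frames[0], frames[-1]):
        hits = []
        for k, pts in enumerate(frames):
            t = k / (n - 1)
            px = p0[0] + t * (p1[0] - p0[0])   # predicted position
            py = p0[1] + t * (p1[1] - p0[1])
            near = [q for q in pts
                    if abs(q[0] - px) <= tol and abs(q[1] - py) <= tol]
            if near:
                hits.append(near[0])
        if len(hits) >= min_hits:
            tracks.append(hits)
    return tracks
```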
Knowledge-Based Object Detection in Laser Scanning Point Clouds
NASA Astrophysics Data System (ADS)
Boochs, F.; Karmacharya, A.; Marbs, A.
2012-07-01
Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings human expert knowledge about the scene, the objects inside it, their representation in the data, and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene captured in the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take these into account within the processing chain. This not only assists researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from advances in knowledge technologies within the Semantic Web framework, which have provided a strong base for knowledge-management applications. In this article we present and describe the knowledge technologies used in our approach, such as the Web Ontology Language (OWL), used to formulate the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topological built-ins, aiming to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.
Haptic Edge Detection Through Shear
NASA Astrophysics Data System (ADS)
Platkiewicz, Jonathan; Lipson, Hod; Hayward, Vincent
2016-03-01
Most tactile sensors are based on the assumption that touch depends on measuring pressure. However, the pressure distribution at the surface of a tactile sensor cannot be acquired directly and must be inferred from the deformation field induced by the touched object in the sensor medium. Currently, there is no consensus as to which components of strain are most informative for tactile sensing. Here, we propose that shape-related tactile information is more suitably recovered from shear strain than normal strain. Based on a contact mechanics analysis, we demonstrate that the elastic behavior of a haptic probe provides a robust edge detection mechanism when shear strain is sensed. We used a jamming-based robot gripper as a tactile sensor to empirically validate that shear strain processing gives accurate edge information that is invariant to changes in pressure, as predicted by the contact mechanics study. This result has implications for the design of effective tactile sensors as well as for the understanding of the early somatosensory processing in mammals.
A biological hierarchical model based underwater moving object detection.
Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen
2014-01-01
Underwater moving object detection is key for many underwater computer vision tasks, such as object recognition, locating, and tracking. Considering their superior visual sensing abilities in underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adoption in underwater applications. To address the problems arising from inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks. Intensity information is extracted to establish a background model that can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is then analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance: compared to the traditional Gaussian background model, the completeness of object detection is 97.92%, with only 0.94% of the background region included in the detection results.
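A minimal sketch of the block-wise intensity stage of such a hierarchical background model is given below. The block size, the deviation test, and the function name are illustrative assumptions, and the texture-refinement stage is omitted.

```python
import numpy as np

def block_background_mask(frame, background, block=8, k=2.5):
    """Flag blocks whose mean intensity deviates from the per-block
    background mean by more than k background standard deviations."""
    h, w = frame.shape
    mask = np.zeros_like(frame, dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = frame[i:i+block, j:j+block]
            bg = background[i:i+block, j:j+block]
            # Rough object/background split on intensity alone
            if abs(patch.mean() - bg.mean()) > k * (bg.std() + 1e-6):
                mask[i:i+block, j:j+block] = True
    return mask
```

In the full model, pixels inside the flagged blocks would then be refined with per-pixel texture features to recover a precise contour.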
Design and Evaluation of Perceptual-based Object Group Selection Techniques
NASA Astrophysics Data System (ADS)
Dehmeshki, Hoda
Selecting groups of objects is a frequent task in graphical user interfaces. It is required prior to many standard operations such as deletion, movement, or modification. Conventional selection techniques are lasso, rectangle selection, and the selection and de-selection of items through the use of modifier keys. These techniques may become time-consuming and error-prone when target objects are densely distributed or when the distances between target objects are large. Perceptual-based selection techniques can considerably improve selection tasks when targets have a perceptual structure, for example when arranged along a line. Current methods to detect such groups use ad hoc grouping algorithms that are not based on results from perception science. Moreover, these techniques do not allow selecting groups with arbitrary arrangements or permit modifying a selection. This dissertation presents two domain-independent perceptual-based systems that address these issues. Based on established group detection models from perception research, the proposed systems detect perceptual groups formed by the Gestalt principles of good continuation and proximity. The new systems provide gesture-based or click-based interaction techniques for selecting groups with curvilinear or arbitrary structures as well as clusters. Moreover, the gesture-based system is adapted for the graph domain to facilitate path selection. This dissertation includes several user studies that show the proposed systems outperform conventional selection techniques when targets form salient perceptual groups and are still competitive when targets are semi-structured.
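Gestalt proximity grouping of the kind used by such perceptual-based selection techniques can be approximated by chaining points that lie within a distance threshold of one another, as in this sketch. The threshold-based transitive rule is a simplification of the perceptual grouping models referenced in the dissertation.

```python
def proximity_groups(points, radius):
    """Group points whose chains of pairwise distances stay within
    `radius` (a crude stand-in for the Gestalt proximity principle)."""
    groups, unassigned = [], list(points)
    while unassigned:
        seed = unassigned.pop()
        group = [seed]
        frontier = [seed]
        while frontier:
            p = frontier.pop()
            near = [q for q in unassigned
                    if (p[0]-q[0])**2 + (p[1]-q[1])**2 <= radius**2]
            for q in near:
                unassigned.remove(q)
                group.append(q)
                frontier.append(q)
        groups.append(group)
    return groups
```

A click on any member of a detected group could then select the whole group, rather than requiring a lasso around every target.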
Object detection system based on multimodel saliency maps
NASA Astrophysics Data System (ADS)
Guo, Ya'nan; Luo, Chongfan; Ma, Yide
2017-03-01
Detection of visually salient image regions is extensively applied in computer vision and computer graphics, for tasks such as object detection, adaptive compression, and object recognition, but any single model has limitations across diverse images. In this work, we establish a method based on multimodel saliency maps to detect objects, intelligently absorbing the merits of various individual saliency detection models to achieve promising results. The method can be roughly divided into three steps: first, we propose a decision-making system that evaluates saliency maps obtained by seven competitive methods and selects only the three most valuable ones; second, we introduce a heterogeneous PCNN algorithm to obtain three prime foregrounds, and a self-designed nonlinear fusion method then merges these saliency maps; finally, an adaptive improved and simplified PCNN (SPCNN) model is used to detect the object. Our proposed method constitutes an object detection system for different occasions that requires no training, is simple, and is highly efficient. The proposed saliency fusion technique shows better performance over a broad range of images and broadens applicability by fusing different individual saliency models; this proposed system is strong enough to be called a robust model. Moreover, the proposed adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to the image information.
We extensively evaluate our algorithm on a classical salient object detection database, and the experimental results demonstrate that the aggregation of saliency maps outperforms the best individual saliency model in all cases, yielding the highest precision of 89.90%, better recall of 98.20%, the greatest F-measure of 91.20%, and the lowest mean absolute error of 0.057; the proposed saliency evaluation measure EHA reaches 215.287. We believe our method can be applied to diverse applications in the future.
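A hedged sketch of saliency-map fusion in this spirit follows: each selected map is normalized to [0, 1] and combined with a power-weighted average. The power weighting stands in for the paper's self-designed nonlinear fusion, whose exact form is not given in the abstract.

```python
import numpy as np

def fuse_saliency(maps, gamma=2.0):
    """Normalize each saliency map to [0, 1], then combine with a
    nonlinear (power-weighted) average favoring confident pixels."""
    fused = np.zeros_like(np.asarray(maps[0], float))
    for m in maps:
        m = np.asarray(m, float)
        rng = m.max() - m.min()
        m = (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
        fused += m ** gamma
    fused /= len(maps)
    return fused
```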
Understanding of Object Detection Based on CNN Family and YOLO
NASA Astrophysics Data System (ADS)
Du, Juan
2018-04-01
As a key application of image processing, object detection has boomed along with the unprecedented advancement of the Convolutional Neural Network (CNN) and its variants since 2012. As the CNN series developed into Faster Region-based CNN (Faster R-CNN), the mean Average Precision (mAP) reached 76.4, whereas the frames per second (FPS) of Faster R-CNN remained 5 to 18, far too slow for real-time use. Thus, the most urgent requirement for improving object detection is to accelerate its speed. After a general introduction to the background and the core CNN solution, this paper examines one of the best CNN representatives, You Only Look Once (YOLO), which breaks with the CNN family's tradition and introduces a completely new way of solving object detection simply and efficiently. Its fastest configuration achieves an unparalleled 155 FPS, and its mAP reaches 78.6, both surpassing the performance of Faster R-CNN by a wide margin. Additionally, compared with the latest state-of-the-art solutions, YOLOv2 achieves an excellent tradeoff between speed and accuracy, as well as strong generalization ability as an object detector across whole images.
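Detectors in the YOLO and Faster R-CNN families typically post-process their box proposals with greedy non-maximum suppression (NMS) driven by Intersection over Union. A compact reference version is sketched below; it is standard practice, not code from either paper.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression over detector proposals:
    keep the highest-scoring box, drop overlapping rivals, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```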
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability to distinguish whether an intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost-effective, these systems suffer from high rates of false alarm, especially when monitoring open environments: any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make a real-time threat assessment. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points that connect short, straight line segments, preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects across different viewing angles and distances.
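Tangent-space (turning-function) contour matching of the kind described can be sketched as follows. This toy version assumes closed polygonal contours with equal vertex counts and uses a Euclidean distance between cumulative turn angles rather than the paper's power cepstrum matching.

```python
import math

def turning_function(contour):
    """Tangent-space signature: cumulative turn angle at each vertex
    of a closed polygonal contour (scale- and translation-invariant)."""
    angles, total = [], 0.0
    n = len(contour)
    for i in range(n):
        ax, ay = contour[i]
        bx, by = contour[(i + 1) % n]
        cx, cy = contour[(i + 2) % n]
        h1 = math.atan2(by - ay, bx - ax)
        h2 = math.atan2(cy - by, cx - bx)
        # Wrap the turn into (-pi, pi]
        turn = (h2 - h1 + math.pi) % (2 * math.pi) - math.pi
        total += turn
        angles.append(total)
    return angles

def shape_distance(c1, c2):
    t1, t2 = turning_function(c1), turning_function(c2)
    if len(t1) != len(t2):
        return float("inf")  # sketch: real matching resamples first
    return sum((a - b) ** 2 for a, b in zip(t1, t2)) ** 0.5
```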
Chemical detection system and related methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caffrey, Augustine J.; Chichester, David L.; Egger, Ann E.
2017-06-27
A chemical detection system includes a frame, an emitter coupled to the frame, and a detector coupled to the frame proximate the emitter. The system also includes a shielding system coupled to the frame and positioned at least partially between the emitter and the detector, wherein the frame positions a sensing surface of the detector in a direction substantially parallel to a plane extending along a front portion of the frame. A method of analyzing the composition of a suspect object includes directing neutrons at the object, detecting gamma rays emitted from the object, and communicating spectrometer information regarding the gamma rays. The method also includes presenting a GUI to a user with a dynamic status of an ongoing neutron spectroscopy process. The dynamic status includes a present confidence for a plurality of compounds being present in the suspect object responsive to changes in the spectrometer information during the ongoing process.
McAnally, Ken I.; Morris, Adam P.; Best, Christopher
2017-01-01
Metacognitive monitoring and control of situation awareness (SA) are important for a range of safety-critical roles (e.g., air traffic control, military command and control). We examined the factors affecting these processes using a visual change detection task that included representative tactical displays. SA was assessed by asking novice observers to detect changes to a tactical display. Metacognitive monitoring was assessed by asking observers to estimate the probability that they would correctly detect a change, either after study of the display and before the change (judgement of learning; JOL) or after the change and detection response (judgement of performance; JOP). In Experiment 1, observers failed to detect some changes to the display, indicating imperfect SA, but JOPs were reasonably well calibrated to objective performance. Experiment 2 examined JOLs and JOPs in two task contexts: with study-time limits imposed by the task or with self-pacing to meet specified performance targets. JOPs were well calibrated in both conditions as were JOLs for high performance targets. In summary, observers had limited SA, but good insight about their performance and learning for high performance targets and allocated study time appropriately. PMID:28915244
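Calibration of judgements of performance against objective accuracy, central to the study above, can be illustrated with a toy measure: the gap between mean stated confidence and mean detection accuracy. This is a simplification of standard calibration analyses, not the authors' exact statistic.

```python
def calibration_error(judgements, outcomes):
    """Gap between mean predicted probability of detecting a change
    (JOP/JOL, in [0, 1]) and the observed detection accuracy."""
    assert len(judgements) == len(outcomes)
    mean_conf = sum(judgements) / len(judgements)
    accuracy = sum(outcomes) / len(outcomes)
    return abs(mean_conf - accuracy)
```

An error near zero indicates well-calibrated observers; a positive gap with high mean confidence indicates overconfidence in one's situation awareness.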
Enhanced detection and visualization of anomalies in spectral imagery
NASA Astrophysics Data System (ADS)
Basener, William F.; Messinger, David W.
2009-05-01
Anomaly detection algorithms applied to hyperspectral imagery are able to reliably identify man-made objects in a natural environment based on statistical/geometric likelihood. The process is more robust than target identification, which requires precise prior knowledge of the object of interest, but it has an inherently higher false alarm rate. Standard anomaly detection algorithms measure the deviation of pixel spectra from a parametric model (either statistical or linear mixing) estimating the image background. The topological anomaly detector (TAD) creates a fully non-parametric, graph-theory-based topological model of the image background and measures deviation from this background using codensity. In this paper we present a large-scale comparative test of TAD against 80+ targets in four full HYDICE images, using the entire canonical target set for generation of ROC curves. TAD is compared against several statistics-based detectors, including local RX and subspace RX. Even a perfect anomaly detection algorithm would have a high practical false alarm rate in most scenes, simply because the user/analyst is not interested in every anomalous object. To assist the analyst in identifying and sorting objects of interest, we investigate coloring the anomalies with principal components projections using statistics computed from the anomalies. This gives a very useful colorization of anomalies in which objects of similar material tend to have the same color, enabling an analyst to quickly sort and identify the anomalies of highest interest.
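For contrast with TAD, the statistics-based RX detector mentioned above scores each pixel by its squared Mahalanobis distance from the scene mean. A global (non-local) sketch is given below; local RX would instead estimate the statistics in a sliding window around each pixel.

```python
import numpy as np

def rx_scores(pixels):
    """Global RX anomaly score: squared Mahalanobis distance of each
    pixel spectrum from the scene mean under the scene covariance."""
    X = np.asarray(pixels, float)          # (n_pixels, n_bands)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(np.atleast_2d(cov))
    d = X - mu
    # Per-pixel quadratic form d^T Sigma^{-1} d
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)
```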
NASA Astrophysics Data System (ADS)
Griffiths, D.; Boehm, J.
2018-05-01
With deep learning approaches now out-performing traditional image processing techniques for image understanding, this paper assesses the potential of rapid generation of Convolutional Neural Networks (CNNs) for applied engineering purposes. Three CNNs are trained on 275 UAS-derived and freely available online images for object detection of 3 m2 segments of railway track. These include two models based on the Faster R-CNN object detection algorithm (Resnet and Inception-Resnet) as well as the novel one-stage Focal Loss network architecture (Retinanet). Model performance was assessed with respect to three accuracy metrics. The first two consisted of Intersection over Union (IoU) with thresholds of 0.5 and 0.1. The last assesses accuracy based on the proportion of track covered by object detection proposals against total track length. In under six hours of training (and two hours of manual labelling), the models detected 91.3%, 83.1%, and 75.6% of track in the 500 test images acquired from the UAS survey for Retinanet, Resnet, and Inception-Resnet, respectively. We then discuss the potential applications of such systems within the engineering field for a range of scenarios.
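The paper's third metric, the proportion of track covered by detection proposals against total track length, can be sketched as a one-dimensional interval-coverage computation. Treating track and proposals as intervals along the line is an illustrative simplification of coverage over 2D track segments.

```python
def track_coverage(track_segments, proposals):
    """Fraction of total track length (1D intervals along the line)
    covered by at least one detection proposal."""
    covered = 0.0
    total = sum(b - a for a, b in track_segments)
    for a, b in track_segments:
        # Clip proposals to this segment and sweep left to right
        hits = sorted((max(a, p), min(b, q)) for p, q in proposals
                      if min(b, q) > max(a, p))
        end = a
        for p, q in hits:
            if q > end:
                covered += q - max(p, end)
                end = q
    return covered / total if total else 0.0
```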
Transistor-based particle detection systems and methods
Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful
2015-06-09
Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.
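The readout idea, detecting a particle via a stiffness-induced change in transistor current, can be caricatured as change detection on a sampled current trace. The windowed baseline comparison below is purely illustrative and is not part of the patented method.

```python
def detect_current_step(samples, window=5, threshold=0.5):
    """Return the first index where the mean current over the last
    `window` samples departs from the initial baseline by more than
    `threshold`; None if no step is found."""
    baseline = sum(samples[:window]) / window
    for i in range(window, len(samples) - window + 1):
        recent = sum(samples[i:i+window]) / window
        if abs(recent - baseline) > threshold:
            return i
    return None
```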