A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of those that are remain limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical flow-based methods. The work is based on an evaluation of these four methods using two different sets of cameras and two different scenes. The methods have been implemented in MATLAB, and the results are compared based on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
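The background subtraction technique compared in this paper can be illustrated with a minimal running-average sketch (an illustrative toy, not the implementation evaluated by the authors; the threshold and learning rate are assumptions):

```python
import numpy as np

def background_subtraction(frames, alpha=0.05, thresh=30):
    """Detect moving pixels by subtracting a running-average background.

    frames: iterable of 2-D uint8 grayscale arrays.
    alpha:  background learning rate (illustrative choice).
    thresh: absolute-difference threshold for declaring motion.
    Returns a list of boolean motion masks, one per frame.
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            background = f.copy()          # bootstrap from the first frame
        mask = np.abs(f - background) > thresh
        # update the background only where no motion was detected
        background = np.where(mask, background,
                              (1 - alpha) * background + alpha * f)
        masks.append(mask)
    return masks
```

Such a model is cheap enough for desktop real-time use, which is the setting this comparison targets.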
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, in the complex scenes of the real world, false detections, missed detections, and deficiencies resulting from cavities inside the body still exist. In order to solve the problem of incomplete detection of moving objects, this paper proposes a new moving object detection method combining an improved frame-difference method with Gaussian mixture background subtraction. To make the detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
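One common way to combine frame differencing with background subtraction is to AND the two masks, so that ghosts left behind by the frame-difference step are suppressed. The sketch below illustrates only that combination idea; the paper's improved frame difference, Gaussian mixture model, image repair and morphological steps are omitted:

```python
import numpy as np

def combined_detection(prev_frame, cur_frame, background, t_diff=25, t_bg=25):
    """Combine frame differencing with background subtraction.

    A pixel is kept only if it changed between consecutive frames
    (frame difference) AND differs from the background model
    (background subtraction).  All inputs are 2-D grayscale arrays of
    the same shape; thresholds are illustrative.
    """
    prev_f = prev_frame.astype(np.int16)
    cur_f = cur_frame.astype(np.int16)
    bg_f = background.astype(np.int16)
    diff_mask = np.abs(cur_f - prev_f) > t_diff   # inter-frame motion
    bg_mask = np.abs(cur_f - bg_f) > t_bg         # deviation from background
    return diff_mask & bg_mask
```

A region an object just vacated triggers the frame-difference mask but matches the background, so the AND removes the ghost.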
A-Track: Detecting Moving Objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2017-04-01
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems of aircraft, including small and unmanned aircraft.
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion analysis based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either frame difference or segmentation approaches separately. This research has two main purposes: first, to develop a new motion model called DMM (dynamic motion model), and second, to apply the proposed segmentation approach SUED (segmentation using edge-based dilation) with frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area for the moving object rather than searching the whole area of the frame. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of Things. The detection of a moving target's shadow is also an important step in moving object detection, and the accuracy of shadow detection directly affects the object detection results. Surveying the variety of shadow detection methods, we find that using only one feature cannot produce accurate detection results. We therefore present a new method for shadow detection which combines colour information, optical invariance, and texture features. Through comprehensive analysis of the detection results from the three kinds of information, shadows are effectively identified. By combining the advantages of the various methods, the approach achieves good results in experiments.
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Research on moving object detection based on frog's eyes
NASA Astrophysics Data System (ADS)
Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan
2008-12-01
On the basis of the information processing mechanism of the frog's eye, this paper discusses a bionic detection technology suitable for object information processing based on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism including pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described which detects moving objects of a specific colour and shape; experiments indicate that such objects can be detected even against certain interfering backgrounds. An electronic model for moving object detection imitating biological vision based on the frog's eye is established: the analogue video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing, the video information can be captured, processed, and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can watch a bigger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that the system can detect the edges of moving objects in real time. The feasibility of the bionic model is fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. When backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This produces mixed motion in the scene and makes it difficult to distinguish between the target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detections. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after the detection. We also describe an FPGA implementation of the algorithm. The target application of this work is a road vehicle's rear-view camera system. PMID:26712761
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is therefore an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectories, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling aside, from normal activities. PMID:22368486
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach, so it does not detect moving objects individually. It identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments, and can also automatically detect and localize abnormally moving objects in real-life video. Performance tests were conducted on several real-life databases, and the results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The algorithm can be applied to any situation in which abnormal directions or abnormal speeds are to be detected.
Optimizing a neural network for detection of moving vehicles in video
NASA Astrophysics Data System (ADS)
Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri
2017-10-01
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan
2017-03-01
This paper presents a low-power wireless image sensor node with noise-robust moving object detection and a region-of-interest based rate controller.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, in this paper we build a vision system to detect unknown fast moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points in images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for the detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the sparse case, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
Moving object detection and tracking in videos through turbulent medium
NASA Astrophysics Data System (ADS)
Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.
2016-06-01
This paper addresses the problem of identifying and tracking moving objects in a video sequence with a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one, because turbulence causes blurring and spatiotemporal movement of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm detects real motions by separating out turbulence-induced motions using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects through the frames of the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparisons with an earlier method confirm that the proposed approach provides more effective tracking of the targets.
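The two-level thresholding idea, separating genuine object motion from low-amplitude turbulence-induced fluctuations, can be sketched as a hysteresis on the difference image. This is an illustrative toy, not the paper's rule; threshold values and the neighbour-growing scheme are assumptions:

```python
import numpy as np

def two_level_motion_mask(diff, t_low=10, t_high=40):
    """Two-level (hysteresis) thresholding of an absolute difference image.

    Pixels above t_high are treated as seeds of genuine object motion;
    pixels above t_low are kept only if they connect to a seed, so
    isolated low-amplitude turbulence flicker is discarded.
    """
    strong = diff > t_high
    weak = diff > t_low
    # grow the strong seeds into the weak mask (4-neighbour dilation loop)
    mask = strong.copy()
    changed = True
    while changed:
        grown = mask.copy()
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        grown &= weak
        changed = not np.array_equal(grown, mask)
        mask = grown
    return mask
```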
Searching for moving objects in HSC-SSP: Pipeline and preliminary results
NASA Astrophysics Data System (ADS)
Chen, Ying-Tung; Lin, Hsing-Wen; Alexandersen, Mike; Lehner, Matthew J.; Wang, Shiang-Yu; Wang, Jen-Hung; Yoshida, Fumi; Komiyama, Yutaka; Miyazaki, Satoshi
2018-01-01
The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is currently the deepest wide-field survey in progress. The 8.2 m aperture of the Subaru telescope is very powerful for detecting faint/small moving objects, including near-Earth objects, asteroids, centaurs and Trans-Neptunian objects (TNOs). However, the cadence and dithering pattern of the HSC-SSP are not designed for detecting moving objects, making it difficult to do so systematically. In this paper, we introduce a new pipeline for detecting moving objects (specifically TNOs) in a non-dedicated survey. The HSC-SSP catalogs are sliced into HEALPix partitions. Then, the stationary detections and false positives are removed with a machine-learning algorithm to produce a list of moving object candidates. An orbit linking algorithm and visual inspections are executed to generate the final list of detected TNOs. Preliminary results of a search for TNOs using this new pipeline on data from the first HSC-SSP data release (2014 March to 2015 November) yield 231 TNO/Centaur candidates. The bright candidates with Hr < 7.7 and i > 5 show that the best-fitting slope of a single power law to the absolute magnitude distribution is 0.77. The g - r color distribution of hot HSC-SSP TNOs indicates a bluer peak at g - r = 0.9, which is consistent with the bluer peak of the bimodal color distribution in the literature.
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment according to tactical and technical requirements. Current optical-band camouflage effectiveness measurement is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines dynamic object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. The adaptive background update algorithm of Surendra is improved, and a method of optical camouflage effect detection using the Lab colour space in moving-object detection is presented. The binary image of the moving object is extracted by this measurement technology and, in the sequence diagram, characteristic parameters such as the degree of dispersion, eccentricity, complexity and moment invariants are used to construct the feature vector space. The Euclidean distance of the moving target with digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, which indicates that the degree of dispersion, eccentricity, complexity and moment invariants of the digital camouflage graphics differ greatly from those of the moving target without digital camouflage. The measurement results showed that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some ups and downs, reflecting the adaptability of target and background under dynamic conditions. In view of existing infrared camouflage technology, our next step is to develop camouflage effect measurement technology for moving targets in the infrared band.
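The comparison in this abstract rests on the Euclidean distance between feature vectors built from dispersion, eccentricity, complexity and moment invariants. A minimal sketch of that distance computation (the feature values below are hypothetical, not from the paper):

```python
import numpy as np

def camouflage_distance(features_plain, features_camouflaged):
    """Euclidean distance between shape-feature vectors of a target
    without and with camouflage; a larger distance suggests the
    camouflage changes the target's apparent shape features more.

    Feature order (dispersion, eccentricity, complexity, moment
    invariants) follows the abstract; the values are illustrative.
    """
    a = np.asarray(features_plain, dtype=float)
    b = np.asarray(features_camouflaged, dtype=float)
    return float(np.linalg.norm(a - b))
```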
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving, and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter, and the intrinsically low false alarm rate of detecting signature candidates in 3-D; it is based on an iterative method called "RANdom SAmple Consensus" (RANSAC) and can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling then rejects candidates that conform to the predictable motion of the stars.
Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in the stare mode, contained the signature of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
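The core RANSAC step, fitting a constant-velocity (linear) track to candidate detections contaminated by outliers, can be sketched as follows. This is a generic RANSAC line fit, not the RANSAC-MT implementation; tolerances and iteration counts are illustrative:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=1.0, seed=0):
    """Fit a 2-D line (constant-velocity track) to points with RANSAC.

    points: (N, 2) array of (t, x) samples, most on a line, some
    outliers (e.g. residual star clutter).
    Returns (slope, intercept, inlier_mask).
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (t1, x1), (t2, x2) = pts[i], pts[j]
        if t1 == t2:
            continue                      # degenerate sample
        slope = (x2 - x1) / (t2 - t1)
        intercept = x1 - slope * t1
        resid = np.abs(pts[:, 1] - (slope * pts[:, 0] + intercept))
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine with a least-squares fit on the best inlier set
    t, x = pts[best_inliers, 0], pts[best_inliers, 1]
    slope, intercept = np.polyfit(t, x, 1)
    return slope, intercept, best_inliers
```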
Camouflage, detection and identification of moving targets
Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.
2013-01-01
Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD-lines in the focal plane. Exploiting this special configuration allows the detection of objects that move during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since moving objects are mapped to different positions only in different spectral bands, the change in spectral properties also has to be taken into account. In the case where the main offset in the focal plane is between the multispectral and the panchromatic CCD-lines, as for Pléiades, an approach based on weighted integration to obtain largely identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality - how many moving objects are detected and how many are missed - and accuracy - how accurate the derived speed and size of the objects are. Finally the results are discussed and an outlook on possible improvements towards operational processing is presented.
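The speed estimate in such band-offset methods follows from a simple relation between pixel displacement, ground sampling distance and the band acquisition delay. A minimal sketch; the function name and the example numbers are hypothetical, not taken from the paper:

```python
# Hedged sketch: converting a band-to-band pixel offset into ground speed.
# The GSD and time-offset values below are invented for illustration.
def ground_speed(offset_px, gsd_m, band_dt_s):
    """Speed in m/s of an object displaced by offset_px pixels between two
    spectral bands acquired band_dt_s seconds apart, at gsd_m metres/pixel."""
    if band_dt_s <= 0:
        raise ValueError("band time offset must be positive")
    return offset_px * gsd_m / band_dt_s

# e.g. a 3-pixel shift at 0.5 m GSD over a 0.2 s band delay:
print(ground_speed(3, 0.5, 0.2))  # 7.5 (m/s), i.e. 27 km/h
```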
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method from a moving vehicle using optical flows and a histogram of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flows after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell in the current frame is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the corresponding cells in the consecutive images, so that conforming optical flows are extracted. The regions of a moving object are detected as transformed objects, which differ from the previously registered background. Morphological processing is applied to obtain the candidate human regions. In order to recognize the object, HOG features are extracted on the candidate region and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input to the linear SVM to classify the given input as pedestrian/nonpedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement compared with the original HOG using the ETHZ pedestrian dataset.
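The ego-motion compensation step described above can be sketched in NumPy: fit an affine transform to at least three cell correspondences, then flag cells whose actual displacement deviates from the affine prediction as moving. This is a hedged toy, not the paper's pipeline; the KLT-style cell tracking is replaced by given correspondences, and the tolerance and data are invented:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform M (3x2) mapping [x y 1] -> [x' y']."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows are [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~ dst
    return M

def moving_mask(src, dst, M, tol=2.0):
    """True for cells whose motion deviates from the affine (camera) motion."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    pred = np.hstack([src, np.ones((len(src), 1))]) @ M
    return np.linalg.norm(dst - pred, axis=1) > tol

# Background cells follow the camera motion (a pure translation here);
# the last cell moves independently and should be flagged.
src = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
dst = [(1, 2), (11, 2), (1, 12), (11, 12), (20, 20)]
M = fit_affine(src[:4], dst[:4])
print(moving_mask(src, dst, M))   # only the last cell is flagged as moving
```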
Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M.
Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions and then extract a subset of motion regions, based on an innovative measure, to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams. The software can directly process video streamed over the internet or coming directly from a hardware device (camera).
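A toy sketch of grouping partial feature-point trajectories into coherent motion regions. The ORNL algorithm scores candidate regions in the space-time volume; this greedy grouping by mean velocity is a much-simplified, hypothetical stand-in:

```python
# Hedged sketch: cluster partial trajectories whose mean velocities agree.
def mean_velocity(traj):
    (x0, y0), (x1, y1) = traj[0], traj[-1]
    n = len(traj) - 1
    return ((x1 - x0) / n, (y1 - y0) / n)

def coherent_groups(trajs, tol=1.0):
    groups = []   # each entry: (representative velocity, member indices)
    for i, t in enumerate(trajs):
        vx, vy = mean_velocity(t)
        for gv, members in groups:
            if abs(vx - gv[0]) <= tol and abs(vy - gv[1]) <= tol:
                members.append(i)
                break
        else:
            groups.append(((vx, vy), [i]))
    return [m for _, m in groups]

trajs = [
    [(0, 0), (2, 0), (4, 0)],   # moving right at 2 px/frame
    [(5, 5), (7, 5), (9, 5)],   # same motion -> same region
    [(0, 9), (0, 7), (0, 5)],   # moving up -> separate region
]
print(coherent_groups(trajs))   # [[0, 1], [2]]
```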
2009-12-01
facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect... Based on the above two...
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real-time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such a potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and self-deletion of the targeted objects; the registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and the tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed, to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of position and velocity of the objects. In addition, for each frame, the detection and tracking map has been generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real-time.
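The prediction step of such a tracking stage can be illustrated with a constant-velocity model: given the last two registered positions of an object, predict where to look for it in the next frame. This is a generic sketch, not the paper's estimator:

```python
# Hedged sketch: constant-velocity prediction of an object's next position.
def predict_next(p_prev, p_curr, dt_prev=1.0, dt_next=1.0):
    """Velocity from the last two positions, then linear extrapolation."""
    vx = (p_curr[0] - p_prev[0]) / dt_prev
    vy = (p_curr[1] - p_prev[1]) / dt_prev
    return (p_curr[0] + vx * dt_next, p_curr[1] + vy * dt_next), (vx, vy)

pred, vel = predict_next((10.0, 20.0), (12.0, 23.0))
print(pred, vel)   # (14.0, 26.0) (2.0, 3.0)
```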
What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.
van Buren, Benjamin; Gao, Tao; Scholl, Brian J
2017-10-01
One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question-for the first time, to our knowledge-in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.
NASA Astrophysics Data System (ADS)
Black, Christopher; McMichael, Ian; Riggs, Lloyd
2005-06-01
Electromagnetic induction (EMI) sensors and magnetometers have successfully detected surface laid, buried, and visually obscured metallic objects. Potential military activities could require detection of these objects at some distance from a moving vehicle in the presence of metallic clutter. Results show that existing EMI sensors have limited range capabilities and suffer from false alarms due to clutter. This paper presents results of an investigation of an EMI sensor designed for detecting large metallic objects on a moving platform in a high clutter environment. The sensor was developed by the U.S. Army RDECOM CERDEC NVESD in conjunction with the Johns Hopkins University Applied Physics Laboratory.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in the high resolution long distance tracking, and in the automatic collection of biometric data, such as a face clip of a person for recognition purposes.
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for the detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering to the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
NASA Astrophysics Data System (ADS)
Gohatre, Umakant Bhaskar; Patil, Venkat P.
2018-04-01
In computer vision, real-time detection and tracking of multiple objects is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object through the video, and the representation of the object is the basis for tracking. Identifying multiple objects from a video sequence is a challenging task. Image registration has long been used as a basis for the detection of multiple moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformation. However, image registration is not well suited to handling events that can result in missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; correspondence between graph sequences is then established by multi-graph matching, and matched regions are labeled by a proposed graph coloring algorithm which assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and shows significant improvement over existing work related to real-time detection of multiple moving objects.
Modeling peripheral vision for moving target search and detection.
Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre
2012-06-01
Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. There were 23 subjects who participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In an urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In a rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 POs among 50 POs in the urban scenario and 5.39 POs in the rural scenario. Both saccade reaction time and button reaction time can be predicted by peripheral angle and entrance speed of POs. Fast moving objects were detected faster than slower objects and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.
Moving object detection in top-view aerial videos improved by image stacking
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen
2017-08-01
Image stacking is a well-known method used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution, as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
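The noise-suppression property that image stacking relies on is easy to demonstrate: averaging N registered frames suppresses zero-mean noise by roughly sqrt(N). A NumPy sketch with perfectly registered synthetic frames; a real stack would need the registration and warping step first:

```python
import numpy as np

# Hedged sketch: stacking (averaging) 16 noisy copies of a static scene.
rng = np.random.default_rng(0)
clean = np.full((32, 32), 100.0)                 # stationary background
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(16)]

stack = np.mean(frames, axis=0)                  # the "image stack" result

noise_single = np.std(frames[0] - clean)         # ~10
noise_stacked = np.std(stack - clean)            # ~10 / sqrt(16) = 2.5
print(noise_stacked < noise_single / 3)          # True
```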
Modeling and query the uncertainty of network constrained moving objects based on RFID data
NASA Astrophysics Data System (ADS)
Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie
2007-06-01
The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly, requires frequent updates and raises privacy concerns. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information: they are cheaper, require fewer updates, and interfere less with privacy. They detect the id of the object and the time when the moving object passes a node of the network; they do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data thus becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filtering, spatial refinement, temporal filtering and probability calculation. Finally, experiments are carried out on simulated data, in which the performance of the index is studied, the precision and recall of the result set are defined, and the effect of the query arguments on precision and recall is discussed.
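One plausible way to formalize the uncertainty inside an edge is as an interval of feasible positions under a maximum-speed bound, with a uniform distribution assumed over it. This sketch illustrates that idea only; the function names, the speed bound and the uniform assumption are ours, not the paper's model:

```python
# Hedged sketch: readers at the edge's end nodes record pass times t0 and t1;
# the movement inside the edge is unobserved. Given a speed bound vmax, the
# feasible positions at time t form an interval along the edge.
def feasible_interval(length, t0, t1, t, vmax):
    lo = max(0.0, length - vmax * (t1 - t))   # must still reach the far node
    hi = min(length, vmax * (t - t0))         # cannot have outrun vmax
    return lo, hi

def range_query_prob(length, t0, t1, t, vmax, a, b):
    """P(object in [a, b] of the edge at time t), uniform over the interval."""
    lo, hi = feasible_interval(length, t0, t1, t, vmax)
    if hi <= lo:
        return 0.0
    overlap = max(0.0, min(b, hi) - max(a, lo))
    return overlap / (hi - lo)

# 100 m edge, entered at t=0, exited at t=10, vmax = 20 m/s; query at t=5:
print(feasible_interval(100.0, 0, 10, 5, 20))        # (0.0, 100.0)
print(range_query_prob(100.0, 0, 10, 5, 20, 0, 50))  # 0.5
```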
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Bahrick, Lorraine E.
1998-01-01
Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
Space moving target detection using time domain feature
NASA Astrophysics Data System (ADS)
Wang, Min; Chen, Jin-yong; Gao, Feng; Zhao, Jin-yu
2018-01-01
Traditional space target detection methods mainly use the spatial characteristics of the star map to detect targets, and cannot make full use of time-domain information. This paper presents a new space moving target detection method based on time-domain features. We first construct the time-spectral data of the star map, then analyze the time-domain features of the main objects (target, stars and background) in star maps, and finally detect the moving targets using the single-pulse feature of the time-domain signal. Experimental results on real star-map target detection show that the proposed method can effectively detect the trajectory of moving targets in the star-map sequence, and the detection probability reaches 99% at a false alarm rate of about 8×10^-5, which outperforms the compared algorithms.
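The single-pulse criterion can be illustrated on per-pixel time series: in a registered star-map sequence, a star lights the same pixel in nearly every frame, the background in none, while a moving target crosses a pixel in one brief pulse. A toy sketch with invented threshold, pulse width and data:

```python
# Hedged sketch: flag pixels whose time series is a brief, isolated pulse.
def is_single_pulse(series, thresh, max_width=2):
    on = [v > thresh for v in series]
    count = sum(on)
    if not (0 < count <= max_width):
        return False                       # never on, or on too often (star)
    first = on.index(True)
    last = len(on) - 1 - on[::-1].index(True)
    return last - first + 1 == count       # the "on" frames are contiguous

star   = [9, 9, 9, 9, 9, 9, 9, 9]         # persistent -> not a target
target = [1, 1, 8, 1, 1, 1, 1, 1]         # one-frame pulse -> target
bg     = [1, 1, 1, 1, 1, 1, 1, 1]         # never above threshold
print([is_single_pulse(s, 5) for s in (star, target, bg)])  # [False, True, False]
```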
Distribution majorization of corner points by reinforcement learning for moving object detection
NASA Astrophysics Data System (ADS)
Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang
2018-04-01
Corner points play an important role in moving object detection, especially in the case of a free-moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works only use intensity information to locate corner points; however, the information provided by the preceding and following frames can also be used. We utilize this information to focus on the more valuable areas and ignore the less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be processed is regarded as the environment, the selections of blocks for one corner point are regarded as actions, and the performance of detection is regarded as the state. Corner points are assigned to blocks which are separated from the original whole image. Experimentally, we select a conventional method which uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework and utilize our algorithm to improve the result. The comparison between the conventional method and the same one with our algorithm shows that our algorithm reduces false detections by 70%.
A biological hierarchical model based underwater moving object detection.
Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen
2014-01-01
Underwater moving object detection is the key for many underwater computer vision tasks, such as object recognition, locating, and tracking. Considering the superior visual sensing ability of underwater habitats, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models which are more adaptive to underwater environments. However, the low accuracy rate and the absence of prior knowledge learning limit their adaptation in underwater applications. Aiming to solve the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. Firstly, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model which can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is then further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance: compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
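The coarse-to-fine structure of such a hierarchical background model can be sketched as follows: a block-wise intensity background model first marks candidate object blocks, and a finer per-pixel test then refines the contour inside them. In this hedged NumPy toy, the paper's texture-based refinement is replaced by a simple per-pixel intensity test, and all sizes and thresholds are invented:

```python
import numpy as np

def block_means(img, bs):
    """Mean intensity of non-overlapping bs x bs blocks."""
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def detect(frame, background, bs=4, block_tol=10.0, pix_tol=20.0):
    # Coarse level: blocks whose mean deviates from the background model.
    diff = np.abs(block_means(frame, bs) - block_means(background, bs))
    candidate = np.kron(diff > block_tol, np.ones((bs, bs), bool))
    # Fine level: per-pixel test inside the candidate blocks only.
    return candidate & (np.abs(frame - background) > pix_tol)

bg = np.full((8, 8), 50.0)
frame = bg.copy()
frame[2:4, 2:4] = 120.0           # a small bright object
mask = detect(frame, bg)
print(int(mask.sum()))            # 4 object pixels detected
```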
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng
2016-01-01
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on sequences from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
Finding Kuiper Belt Objects Below the Detection Limit
NASA Astrophysics Data System (ADS)
Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew
2018-01-01
We demonstrate a novel approach for uncovering the signatures of moving objects (e.g. Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we employ a matched filter moving at the specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however, it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitude over CPU implementations.
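The orbit-hypothesis search can be illustrated by accumulating per-frame values along a proposed linear motion: the correct velocity accrues support while wrong hypotheses average down to the noise floor. A hedged NumPy toy; the actual method integrates detection probabilities rather than raw pixel values, and runs on GPUs:

```python
import numpy as np

def trajectory_score(frames, x0, y0, vx, vy):
    """Mean pixel value along the proposed motion (x0 + vx*t, y0 + vy*t)."""
    score = 0.0
    for t, frame in enumerate(frames):
        x, y = x0 + vx * t, y0 + vy * t
        score += frame[int(round(y)), int(round(x))]
    return score / len(frames)

rng = np.random.default_rng(1)
frames = []
for t in range(10):
    f = rng.normal(0, 1, (20, 20))     # sub-threshold object in unit noise
    f[5, 2 + t] += 3.0                 # moving 1 px/frame in x
    frames.append(f)

right = trajectory_score(frames, 2, 5, 1, 0)   # matched velocity hypothesis
wrong = trajectory_score(frames, 2, 5, 0, 0)   # stationary hypothesis
print(right > wrong)                            # matched orbit accrues support
```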
Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos
NASA Astrophysics Data System (ADS)
Juneja, Medha; Grover, Priyanka
2013-12-01
Occlusion in image processing refers to the concealment of any part of an object, or of the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often contain overlapping and, hence, occluded vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object, which makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting the vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for the detection of moving objects. Further, this paper implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise and morphological operations.
Method and apparatus for non-contact charge measurement
NASA Technical Reports Server (NTRS)
Wang, Taylor G. (Inventor); Lin, Kuan-Chan (Inventor); Hightower, James C. (Inventor)
1994-01-01
A method and apparatus for the accurate non-contact detection and measurement of static electric charge on an object using a reciprocating sensing probe that moves relative to the object. A monitor measures the signal generated as a result of this cyclical movement so as to detect the electrostatic charge on the object.
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2016-10-01
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the l1-penalty in RPCA doesn't work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the l1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the l1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
Parallel Flux Tensor Analysis for Efficient Moving Object Detection
2011-07-01
computing as well as parallelization to enable real time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision... We use the trace of the flux tensor matrix, referred to as Tr J_F, defined as Tr J_F = ∫_Ω W(x − y) (I_xt^2(y) + I_yt^2(y) + I_tt^2(y)) dy
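The flux tensor trace can be approximated on three consecutive grayscale frames with central finite differences; the sketch below is a plain, unparallelized illustration (the report's contribution is precisely the parallel implementation), with an assumed square averaging window of half-width `w` and unit weights in place of W.

```python
def _dx(f, r, c):
    # central spatial derivative in x
    return (f[r][c + 1] - f[r][c - 1]) / 2.0

def _dy(f, r, c):
    # central spatial derivative in y
    return (f[r + 1][c] - f[r - 1][c]) / 2.0

def flux_trace(f0, f1, f2, r, c, w=1):
    """Flux tensor trace at pixel (r, c): windowed sum of
    I_xt^2 + I_yt^2 + I_tt^2, from frames at t-1, t, t+1."""
    total = 0.0
    for i in range(r - w, r + w + 1):
        for j in range(c - w, c + w + 1):
            ixt = (_dx(f2, i, j) - _dx(f0, i, j)) / 2.0  # d/dt of I_x
            iyt = (_dy(f2, i, j) - _dy(f0, i, j)) / 2.0  # d/dt of I_y
            itt = f2[i][j] - 2.0 * f1[i][j] + f0[i][j]   # second temporal derivative
            total += ixt * ixt + iyt * iyt + itt * itt
    return total
```

On a static scene all temporal derivatives vanish and the trace is zero, so thresholding it yields a motion mask that, unlike a plain difference image, also responds to moving texture.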
Visual Sensitivities and Discriminations and Their Roles in Aviation.
1986-03-01
D. Low contrast letter charts in early diabetic retinopathy, ocular hypertension, glaucoma and Parkinson's disease. Br J Ophthalmol, 1984, 68, 885...to detect a camouflaged object that was visible only when moving, and compared these data with similar measurements for conventional objects that were...3) Compare visual detection (i.e. visual acquisition) of camouflaged objects whose edges are defined by velocity differences with visual detection
The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection
NASA Astrophysics Data System (ADS)
Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration
2001-12-01
We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and can thus adapt sensitively to illumination changes. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model can detect moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
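A minimal sketch of the on-line Gaussian motion model described above, assuming a single diagonal-covariance Gaussian over 2-D motion vectors and a fixed learning rate `rho`; the paper's on-line EM over a full model is more involved, and the gate value here is illustrative.

```python
class OnlineGaussian:
    """Diagonal-covariance Gaussian over 2-D motion vectors, updated online."""

    def __init__(self, mean=(0.0, 0.0), var=(1.0, 1.0), rho=0.05):
        self.mean, self.var, self.rho = list(mean), list(var), rho

    def mahalanobis2(self, v):
        # squared Mahalanobis distance of motion vector v to the model
        return sum((vi - m) ** 2 / s for vi, m, s in zip(v, self.mean, self.var))

    def update(self, v):
        # one stochastic EM-style update of mean and variance
        for k in range(2):
            d = v[k] - self.mean[k]
            self.mean[k] += self.rho * d
            self.var[k] = (1 - self.rho) * self.var[k] + self.rho * d * d

    def is_vehicle(self, v, gate=9.0):
        # vectors inside the 3-sigma gate follow the dominant (vehicle) motion;
        # outliers are attributed to waving trees, camera vibration, etc.
        return self.mahalanobis2(v) < gate
```

After the model has adapted to the dominant flow, motion vectors from waving trees fall outside the gate and are rejected.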
Detecting method of subjects' 3D positions and experimental advanced camera control system
NASA Astrophysics Data System (ADS)
Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi
1997-04-01
Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures. To develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real-time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a method for detecting subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) The system is capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
NASA Astrophysics Data System (ADS)
Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang
2018-01-01
Detection and tracking of objects in the side-near-field has attracted much attention in the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two types of tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to have two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both EKF and UKF achieved more precise tracking positions and smaller RMSE (root mean square error) than a traditional triangular positioning method. These results also encourage the application of cost-effective ultrasonic sensors for near-field environment perception in autonomous driving systems.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each a measure of the maximum distance between a pair of feature point tracks.
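The trajectory similarity factor described above (a maximum distance between a pair of feature point tracks) can be sketched directly; `eps` is an illustrative grouping threshold, not a value from the patent.

```python
import math

def trajectory_similarity(track_a, track_b):
    """Maximum pointwise distance between two time-aligned feature point
    tracks, each a list of (x, y) positions."""
    return max(math.dist(p, q) for p, q in zip(track_a, track_b))

def coherent(track_a, track_b, eps=2.0):
    """Two tracks may belong to the same coherent motion region if they
    never drift farther apart than eps over their common lifetime."""
    return trajectory_similarity(track_a, track_b) <= eps
```

Because the measure is a maximum over the whole track, two points that move together briefly but later diverge are kept in separate regions.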
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process for moving target detection and tracking in video surveillance. Obtaining a high-quality background is the key to differential target detection in video surveillance. The paper uses a block-segmentation method to build a clean background and the background-difference method to detect moving targets; after a series of processing steps, a more complete object can be extracted from the original image and located with its smallest bounding rectangle. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman-filter model based on template matching is proposed: using the predictive and estimation capacity of the Kalman filter, the center of the smallest bounding rectangle is taken as the predicted value of the position where the target may appear at the next moment. Template matching is then performed in the region centered on this position, and by calculating the cross-correlation similarity between the current image and the reference image, the best matching center can be determined. Narrowing the search scope reduces the search time, achieving fast tracking.
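The predict-then-match loop described above can be sketched with a steady-state (constant-gain, alpha-beta) simplification of the Kalman filter for one coordinate of the bounding-rectangle center; the full filter also propagates covariance, and the gains here are illustrative, not the paper's values.

```python
class AlphaBeta:
    """Constant-gain Kalman-style predictor for one coordinate of the
    bounding-rectangle center. predict() gives the search center for
    template matching; update() corrects with the matched position."""

    def __init__(self, x0, alpha=0.5, beta=0.1, dt=1.0):
        self.x, self.v = float(x0), 0.0      # position and velocity estimates
        self.a, self.b, self.dt = alpha, beta, dt

    def predict(self):
        # constant-velocity prediction of where the target appears next
        return self.x + self.v * self.dt

    def update(self, z):
        pred = self.predict()
        r = z - pred                          # innovation (matched - predicted)
        self.x = pred + self.a * r
        self.v = self.v + self.b * r / self.dt
        return self.x
```

One such filter per axis suffices for a center point; template matching then only searches a small window around `predict()`, which is what makes the tracking fast.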
Zhou, Fuqiang; Su, Zhen; Chai, Xinghua; Chen, Lipeng
2014-01-01
This paper proposes a new method to detect and identify foreign matter mixed in a plastic bottle filled with transfusion solution. A spin-stop mechanism and mixed illumination style are applied to obtain high contrast images between moving foreign matter and a static transfusion background. The Gaussian mixture model is used to model the complex background of the transfusion image and to extract moving objects. A set of features of moving objects are extracted and selected by the ReliefF algorithm, and optimal feature vectors are fed into the back propagation (BP) neural network to distinguish between foreign matter and bubbles. The mind evolutionary algorithm (MEA) is applied to optimize the connection weights and thresholds of the BP neural network to obtain a higher classification accuracy and faster convergence rate. Experimental results show that the proposed method can effectively detect visible foreign matter in 250-mL transfusion bottles. The misdetection rate and false alarm rate are low, and the detection accuracy and detection speed are satisfactory. PMID:25347581
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure an improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor to degrade the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors but with a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
Moving Object Detection Using Scanning Camera on a High-Precision Intelligent Holder.
Chen, Shuoyang; Xu, Tingfa; Li, Daqun; Zhang, Jizhou; Jiang, Shenwang
2016-10-21
During the process of moving object detection in an intelligent visual surveillance system, scenarios with complex backgrounds are sure to appear. The traditional methods, such as "frame difference" and "optical flow", may not be able to deal with the problem very well. In such scenarios, we use a modified algorithm to do the background modeling work. In this paper, we use edge detection to obtain an edge difference image in order to improve robustness to illumination variation. Then we use a "multi-block temporal-analyzing LBP (Local Binary Pattern)" algorithm to do the segmentation. In the end, connected component analysis is used to locate the object. We also built a hardware platform, the core of which consists of DSP (Digital Signal Processor) and FPGA (Field Programmable Gate Array) platforms, and the high-precision intelligent holder. PMID:27775671
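The basic LBP operator underlying the multi-block temporal-analyzing variant compares each of a pixel's 8 neighbours against the center and packs the results into a byte; a minimal single-block, single-frame sketch:

```python
# clockwise 8-neighbourhood offsets, starting at the top-left
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
           (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code of pixel (r, c):
    each neighbour brighter than the center contributes one bit."""
    center = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] > center:
            code |= 1 << bit
    return code
```

Because the code depends only on intensity ordering, not absolute levels, histograms of these codes per block change little under global illumination shifts, which is why LBP-based segmentation tolerates lighting variation.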
A-Track: A New Approach for Detection of Moving Objects in FITS Images
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Karapınar, Nurdan; Atay, Tolga; Kaplan, Murat
2016-07-01
Small planet and asteroid observations are important for understanding the origin and evolution of the Solar System. In this work, we have developed a fast and robust pipeline, called A-Track, for detecting asteroids and comets in sequential telescope images. The moving objects are detected using a modified line detection algorithm, called ILDA. We have coded the pipeline in Python 3, where we have made use of various scientific modules in Python to process the FITS images. We tested the code on photometrical data taken by an SI-1100 CCD with a 1-meter telescope at TUBITAK National Observatory, Antalya. The pipeline can be used to analyze large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
An Automatic Technique for Finding Faint Moving Objects in Wide Field CCD Images
NASA Astrophysics Data System (ADS)
Hainaut, O. R.; Meech, K. J.
1996-09-01
The traditional method used to find moving objects in astronomical images is to blink pairs or series of frames after registering them to align the background objects. While this technique is extremely efficient in terms of the low signal-to-noise ratio that human sight can detect, it proved to be extremely time-, brain- and eyesight-consuming. The wide-field images provided by the large CCD mosaic recently built at IfA cover a field of view of 20 to 30' over 8192(2) pixels. Blinking such images is an enormous task, comparable to that of blinking large photographic plates. However, as the data are available digitally (each image occupying 260Mb of disk space), we are developing a set of computer codes to perform the moving object identification in sets of frames. This poster will describe the techniques we use in order to reach a detection efficiency as good as that of a human blinker; the main steps are to find all the objects in each frame (for which we rely on ``S-Extractor'' (Bertin & Arnouts (1996), A&AS 117, 393)), then identify all the background objects, and finally to search the non-background objects for sources moving in a coherent fashion. We will also describe the results of this method applied to actual data from the 8k CCD mosaic. {This work is being supported, in part, by NSF grant AST 92-21318.}
Command Wire Sensor Measurements
2012-09-01
coupled with the extreme harsh terrain has meant that few of these techniques have proved robust enough when moved from the laboratory to the field...to image stationary objects and does not accurately image moving targets. Moving targets can be seriously distorted and displaced from their true...battlefield and for imaging of fixed targets. Moving targets can be detected with a SAR if they have a Doppler frequency shift greater than the
Real-time Human Activity Recognition
NASA Astrophysics Data System (ADS)
Albukhary, N.; Mustafah, Y. M.
2017-11-01
The traditional Closed-circuit Television (CCTV) system requires humans to monitor the CCTV feed 24/7, which is inefficient and costly. Therefore, there is a need for a system which can recognize human activity effectively in real-time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated by using feature detection. Geometrical attributes of the tracked object, namely its centroid and aspect ratio, are manipulated so that simple activities can be detected.
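A toy rule on the two geometric attributes mentioned above (aspect ratio taken as height/width of the bounding box and centroid speed in pixels per frame) might look as follows; the thresholds and the mapping to activities are illustrative assumptions, not the paper's values, and cover only a subset of the listed activities.

```python
def classify(aspect_ratio, centroid_speed,
             upright=1.5, walk=2.0, run=6.0):
    """Toy rule-based activity classifier from bounding-box aspect ratio
    (height/width) and centroid speed (pixels/frame).
    All thresholds are hypothetical."""
    if aspect_ratio < upright:
        return "sitting"        # wider than tall suggests a low posture
    if centroid_speed < walk:
        return "standing"       # upright and nearly stationary
    return "running" if centroid_speed >= run else "walking"
```

In practice the attributes would be smoothed over several frames before classification, since single-frame centroids and boxes are noisy.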
Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2017-05-01
The detection and pose estimation of vehicles plays an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: the vehicle detection and the modelling step. For the detection, we make use of the 3D stereo information and incorporate geometric assumptions on vehicle inherent properties in a firstly applied generic 3D object detection. By combining our generic detection approach with a state of the art vehicle detector, we are able to achieve satisfying detection results, with values for completeness and correctness above 86%. By fitting an object specific vehicle model into the vehicle detections, we are able to reconstruct the vehicles in 3D and to derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, we make use of a deformable 3D active shape model learned from 3D CAD vehicle data in our model fitting approach. While we achieve encouraging values up to 67.2% for correct position estimates, we face larger problems concerning the orientation estimation. The evaluation is done by using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).
Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew; Rasmussen, Ian P.
2010-01-01
The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…
Come together, right now: dynamic overwriting of an object's history through common fate.
Luria, Roy; Vogel, Edward K
2014-08-01
The objects around us constantly move and interact, and the perceptual system needs to monitor on-line these interactions and to update the object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects "met" and remained stationary on the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object initial representation plays an important role that can override even powerful grouping cues.
Inattentional blindness is influenced by exposure time not motion speed.
Kreitz, Carina; Furley, Philip; Memmert, Daniel
2016-01-01
Inattentional blindness is a striking phenomenon in which a salient object within the visual field goes unnoticed because it is unexpected, and attention is focused elsewhere. Several attributes of the unexpected object, such as size and animacy, have been shown to influence the probability of inattentional blindness. At present it is unclear whether or how the speed of a moving unexpected object influences inattentional blindness. We demonstrated that inattentional blindness rates are considerably lower if the unexpected object moves more slowly, suggesting that it is the mere exposure time of the object rather than a higher saliency potentially induced by higher speed that determines the likelihood of its detection. Alternative explanations could be ruled out: The effect is not based on a pop-out effect arising from different motion speeds in relation to the primary-task stimuli (Experiment 2), nor is it based on a higher saliency of slow-moving unexpected objects (Experiment 3).
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming the effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
Two applications of time reversal mirrors: seismic radio and seismic radar.
Hanafy, Sherif M; Schuster, Gerard T
2011-10-01
Two seismic applications of time reversal mirrors (TRMs) are introduced and tested with field experiments. The first one is sending, receiving, and decoding coded messages, similar to a radio except that seismic waves are used. The second one is, similar to radar surveillance, detecting and tracking moving objects in a remote area, including the determination of the object's speed of movement. Both applications require the prior recording of calibration Green's functions in the area of interest. This reference Green's function will be used as a codebook to decrypt the coded message in the first application and as a moving sensor for the second application. Field tests show that seismic radar can detect the moving coordinates (x(t), y(t), z(t)) of a person running through a calibration site. This information also allows for a calculation of his velocity as a function of location. Results with the seismic radio are successful in seismically detecting and decoding coded pulses produced by a hammer. Both seismic radio and radar are highly robust to signals in high noise environments due to the super-stacking property of TRMs. © 2011 Acoustical Society of America
ERIC Educational Resources Information Center
Flombaum, Jonathan I.; Scholl, Brian J.
2006-01-01
Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers…
Object tracking via background subtraction for monitoring illegal activity in crossroad
NASA Astrophysics Data System (ADS)
Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan
2016-07-01
In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity in zebra crossings. In a zebra crossing, according to the traffic light status, a driver or pedestrian should be warned early if they make any illegal moves, so as to fully avoid a collision. In this research, we first detect the traffic light status for pedestrians and monitor the crossroad for vehicle and pedestrian movements. Background subtraction based object detection and tracking is performed to detect pedestrians and vehicles in crossroads. Shadow removal, blob segmentation, trajectory analysis, etc. are used to improve the object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, sunny and rainy conditions. Our experimental results show that such a simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents in zebra crossings.
Image analysis of multiple moving wood pieces in real time
NASA Astrophysics Data System (ADS)
Wang, Weixing
2006-02-01
This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for auto-detection of wood piece materials on a moving conveyor belt or a truck. When wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly-developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
Multiple targets detection method in detection of UWB through-wall radar
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong
2017-11-01
In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and the experimental environment and the penetrating radar system are established. An adaptive threshold method based on local area statistics is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between targets, a target matching algorithm is proposed to improve detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiments.
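A local-area adaptive threshold of the kind described, close in spirit to cell-averaging CFAR detection in radar, can be sketched as follows; the window size and scale factor k are illustrative values, not taken from the paper:

```python
import numpy as np

def local_adaptive_threshold(x, win=5, k=3.0):
    """Local-area adaptive threshold for clutter suppression.

    x: 1-D array of range-profile magnitudes from one radar scan.
    A sample is declared a detection only if it exceeds the mean of its
    local neighborhood by k local standard deviations; win and k are
    illustrative values.
    """
    n = len(x)
    keep = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        local = np.r_[x[lo:i], x[i + 1:hi]]    # neighbors, excluding x[i]
        if local.size and x[i] > local.mean() + k * local.std():
            keep[i] = True
    return keep
```

Because the threshold adapts to each cell's neighborhood, an isolated strong return stands out while spatially extended clutter raises its own local threshold and is suppressed.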
An analog retina model for detecting dim moving objects against a bright moving background
NASA Technical Reports Server (NTRS)
Searfus, R. M.; Colvin, M. E.; Eeckman, F. H.; Teeters, J. L.; Axelrod, T. S.
1991-01-01
We are interested in applications that require the ability to track a dim target against a bright, moving background. Since the target signal will be less than or comparable to the variations in the background signal intensity, sophisticated techniques must be employed to detect the target. We present an analog retina model that adapts to the motion of the background in order to enhance targets that have a velocity difference with respect to the background. Computer simulation results and our preliminary concept of an analog 'Z' focal plane implementation are also presented.
Integration across Time Determines Path Deviation Discrimination for Moving Objects
Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.
2008-01-01
Background Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects or else avoid them if they pose a threat. Methodology/Principal Findings Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653
Using advanced computer vision algorithms on small mobile robots
NASA Astrophysics Data System (ADS)
Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.
2006-05-01
The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted Cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real-time. While working towards a solution to increase the robustness of this system to perform generic object recognition, this paper demonstrates an extension to this application by detecting soda cans in a cluttered indoor environment. The human presence detection from a moving platform system uses a data fusion algorithm which combines results from a scanning laser and a thermal imager. The system is able to detect the presence of humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for use in real-time. Test results are shown for a variety of environments.
Radar Detection of Marine Mammals
2010-09-30
associative tracker using the Munkres algorithm was used. This was then expanded to include a track-before-detect algorithm, the Bayesian Field...small, slow-moving objects (i.e. whales). In order to address the third concern (M2 mode), we have tested using a track-before-detect tracker termed
Come Together, Right Now: Dynamic Overwriting of an Object’s History through Common Fate
Luria, Roy; Vogel, Edward K.
2015-01-01
The objects around us constantly move and interact, and the perceptual system needs to monitor on-line these interactions and to update the object’s status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects’ representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects “met” and remained stationary on the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects’ initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object initial representation plays an important role that can override even powerful grouping cues. PMID:24564468
Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data
NASA Astrophysics Data System (ADS)
Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas
2016-06-01
Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. Pedestrian detection and tracking then amount to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, trajectory number, pedestrian shape and motion. A low-energy trajectory will explain the point observations well and have a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover pedestrians' trajectories with accurate positions and few false detections and mismatches.
Reference Directions and Reference Objects in Spatial Memory of a Briefly Viewed Layout
ERIC Educational Resources Information Center
Mou, Weimin; Xiao, Chengli; McNamara, Timothy P.
2008-01-01
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary…
Distributed proximity sensor system having embedded light emitters and detectors
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1990-01-01
A distributed proximity sensor system is provided with multiple photosensitive devices and light emitters embedded on the surface of a robot hand or other moving member in a geometric pattern. By distributing sensors and emitters capable of detecting distances and angles to points on the surface of an object from known points in the geometric pattern, information is obtained for achieving noncontacting shape and distance perception, i.e., for automatic determination of the object's shape, direction and distance, as well as the orientation of the object relative to the robot hand or other moving member.
Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera
NASA Astrophysics Data System (ADS)
Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.
2017-09-01
Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm in video sequences, which reveals the advantages of the method in use. The proposed framework measures the algorithm's correct and false detection percentages. The method was evaluated on data collected in the field of urban transport, which include cars and pedestrians in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing
NASA Astrophysics Data System (ADS)
Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.
2009-05-01
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a highfidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.
Detection of Moving Targets Using Soliton Resonance Effect
NASA Technical Reports Server (NTRS)
Kulikov, Igor K.; Zak, Michail
2013-01-01
The objective of this research was to develop a fundamentally new method for detecting hidden moving targets within noisy and cluttered data-streams using a novel "soliton resonance" effect in nonlinear dynamical systems. The technique uses an inhomogeneous Korteweg-de Vries (KdV) equation containing moving-target information. The solution of the KdV equation describes a soliton propagating with the same kinematic characteristics as the target. The approach uses the time-dependent data stream obtained with a sensor in the form of a "forcing function," which is incorporated into an inhomogeneous KdV equation. When a hidden moving target (which in many ways resembles a soliton) encounters the natural "probe" soliton solution of the KdV equation, a strong resonance phenomenon results that makes the location and motion of the target apparent. The soliton resonance method amplifies the moving-target signal while suppressing the noise, making it a very effective tool for locating and identifying diverse, highly dynamic targets with ill-defined characteristics in a noisy environment. The soliton resonance method for the detection of moving targets was developed in one and two dimensions. Computer simulations proved that the method could be used for detection of single point-like targets moving with constant velocities and accelerations in 1D and along straight lines or curved trajectories in 2D. The method also allows estimation of the kinematic characteristics of moving targets, and reconstruction of target trajectories in 2D. The method could be very effective for target detection in the presence of clutter and for the case of target obscurations.
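One standard form of the forced (inhomogeneous) KdV equation, given here as an illustration since the abstract does not quote the paper's exact formulation, is

$$ u_t + 6\,u\,u_x + u_{xxx} = F(x,t), $$

where $F(x,t)$ is the forcing function built from the sensor data stream. The homogeneous equation ($F = 0$) admits the single-soliton solution

$$ u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\left[\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\right], $$

which travels with velocity $c$; resonance arises when a target signature embedded in $F$ moves with kinematics matching such a probe soliton.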
Robust skin color-based moving object detection for video surveillance
NASA Astrophysics Data System (ADS)
Kaliraj, Kalirajan; Manimaran, Sudha
2016-07-01
Robust skin color-based moving object detection for video surveillance is proposed. The objective of the proposed algorithm is to detect and track the target under complex situations. The proposed framework comprises four stages: preprocessing, skin color-based feature detection, feature classification, and target localization and tracking. In the preprocessing stage, the input image frame is smoothed using an averaging filter and transformed into the YCrCb color space. In skin color detection, skin color regions are detected using Otsu's method of global thresholding. In feature classification, histograms of both skin and nonskin regions are constructed, and the features are classified into foregrounds and backgrounds by a Bayesian skin color classifier. The foreground skin regions are localized by a connected component labeling process. Finally, the localized foreground skin regions are confirmed as a target by verifying the region properties, and nontarget regions are rejected using the Euler method. At last, the target is tracked by enclosing a bounding box around the target region in all video frames. The experiment was conducted on various publicly available data sets and the performance was evaluated against baseline methods. The results show that the proposed algorithm works well against slowly varying illumination, target rotations, scaling, and fast, abrupt motion changes.
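The first two stages, YCrCb conversion and Otsu global thresholding, can be sketched as below. This is a simplification that thresholds only the Cr channel and omits the Bayesian classification, connected-component labeling, and Euler-method verification steps:

```python
import numpy as np

def rgb_to_ycrcb(img):
    """ITU-R BT.601 RGB -> YCrCb (float); img is HxWx3 with values 0-255."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return np.stack([y, cr, cb], axis=-1)

def otsu_threshold(channel):
    """Otsu's global threshold on an integer-valued 0-255 channel."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    total = hist.sum()
    mean_all = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum, cum_mean = 0, 0.0
    for t in range(256):
        cum += hist[t]
        if cum == 0 or cum == total:
            continue
        cum_mean += t * hist[t]
        w0 = cum / total                       # class-0 weight
        m0 = cum_mean / cum                    # class-0 mean
        m1 = (mean_all * total - cum_mean) / (total - cum)
        var = w0 * (1 - w0) * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def skin_mask(img):
    """Rough skin mask: Otsu threshold on the Cr channel (skin pixels tend
    to have high Cr). A simplification of the paper's full pipeline."""
    cr = rgb_to_ycrcb(img.astype(np.float32))[..., 1].astype(np.uint8)
    return cr > otsu_threshold(cr)
```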
Feasibility of Flaw Detection in Railroad Wheels Using Acoustic Signatures
DOT National Transportation Integrated Search
1976-10-01
The feasibility study on the use of acoustic signatures for detection of flaws in railway wheels was conducted with the ultimate objective of developing an in-track device for moving cars. Determinations of the natural modes of vibrating wheels un...
Perceiving environmental structure from optical motion
NASA Technical Reports Server (NTRS)
Lappin, Joseph S.
1991-01-01
Generally speaking, one of the most important sources of optical information about environmental structure is the deforming optical patterns produced by the movements of the observer (pilot) or of environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.
A system for learning statistical motion patterns.
Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve
2006-09-01
Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.
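The foreground-pixel clustering step is based on fuzzy K-means (fuzzy c-means). A minimal generic implementation, not the paper's accelerated variant, might look like this:

```python
import numpy as np

def fuzzy_kmeans(points, k, m=2.0, iters=50, seed=0):
    """Minimal fuzzy K-means (fuzzy c-means); a generic sketch.

    points: (n, d) array. Returns (centroids, membership), where
    membership[i, j] is the degree to which point i belongs to cluster j.
    m is the fuzziness exponent (m=2 is the common default).
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    u = rng.random((n, k))
    u /= u.sum(axis=1, keepdims=True)          # membership rows sum to 1
    for _ in range(iters):
        w = u ** m
        centroids = (w.T @ points) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(points[:, None, :] - centroids[None], axis=2)
        d = np.maximum(d, 1e-12)               # avoid division by zero
        inv = d ** (-2.0 / (m - 1))            # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return centroids, u
```

In the system described, each cluster centroid of foreground pixels is then grown and predicted over time so that it stays associated with one moving object.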
Security Event Recognition for Visual Surveillance
NASA Astrophysics Data System (ADS)
Liao, W.; Yang, C.; Yang, M. Ying; Rosenhahn, B.
2017-05-01
With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by different practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time as well. If anyone moves any object, the system verifies whether that person is its owner. If not, the event is further analyzed and distinguished between two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases related to the task of abandoned luggage detection. The experimental results show that the proposed approach outperforms the state-of-the-art methods and is effective in recognizing complex security events.
Effects of radial direction and eccentricity on acceleration perception.
Mueller, Alexandra S; Timney, Brian
2014-01-01
Radial optic flow can elicit impressions of self-motion (vection) or of objects moving relative to the observer, but there is disagreement as to whether humans have greater sensitivity to expanding or to contracting optic flow. Although most studies agree there is an anisotropy in sensitivity to radial optic flow, it is unclear whether this asymmetry is a function of eccentricity. The issue is further complicated by the fact that few studies have examined how acceleration sensitivity is affected, even though observers and objects in the environment seldom move at a constant speed. To address these issues, we investigated the effects of direction and eccentricity on the ability to detect acceleration in radial optic flow. Our results indicate that observers are better at detecting acceleration when viewing contraction compared with expansion and that eccentricity has no effect on the ability to detect accelerating radial optic flow. Ecological interpretations are discussed.
Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes
NASA Astrophysics Data System (ADS)
Denasi, Sandra; Quaglia, Giorgio
1993-08-01
Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that assists people in achieving safer driving. Car detection is one of the topics faced by the program. Our contribution develops this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.
Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.
Lee, Donghwa; Myung, Hyun
2014-07-11
In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in low dynamic environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environment then cause groups of false loop closings when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, which represent robot poses, are grouped according to grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
Interplanetary Dust Observations by the Juno MAG Investigation
NASA Astrophysics Data System (ADS)
Jørgensen, John; Benn, Mathias; Denver, Troelz; Connerney, Jack; Jørgensen, Peter; Bolton, Scott; Brauer, Peter; Levin, Steven; Oliversen, Ronald
2017-04-01
The spin-stabilized and solar powered Juno spacecraft recently concluded a 5-year voyage through the solar system en route to Jupiter, arriving on July 4th, 2016. During the cruise phase from Earth to the Jovian system, the Magnetometer investigation (MAG) operated two magnetic field sensors and four co-located imaging systems designed to provide accurate attitude knowledge for the MAG sensors. One of these four imaging sensors - camera "D" of the Advanced Stellar Compass (ASC) - was operated in a mode designed to detect all luminous objects in its field of view, recording and characterizing those not found in the on-board star catalog. The capability to detect and track such objects ("non-stellar objects", or NSOs) provides a unique opportunity to sense and characterize interplanetary dust particles. The camera's detection threshold was set to MV9 to minimize false detections and discourage tracking of known objects. On-board filtering algorithms selected only those objects tracked through more than 5 consecutive images and moving with an apparent angular rate between 15"/s and 10,000"/s. The coordinates (RA, DEC), intensity, and apparent velocity of such objects were stored for eventual downlink. Direct detection of proximate dust particles is precluded by their large (10-30 km/s) relative velocity and extreme angular rates, but their presence may be inferred using the collecting area of Juno's large (~55 m2) solar arrays. Dust particles impact the spacecraft at high velocity, creating an expanding plasma cloud and ejecta with modest (few m/s) velocities. These excavated particles are revealed in reflected sunlight and tracked moving away from the spacecraft from the point of impact. Application of this novel detection method during Juno's traversal of the solar system provides new information on the distribution of interplanetary (µm-sized) dust.
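The on-board filtering rule described above (tracked through more than 5 consecutive images, apparent angular rate within fixed bounds) is simple to express in code; the track record format used here is hypothetical:

```python
def select_nso(tracks, min_frames=5, min_rate=15.0, max_rate=10000.0):
    """Filter candidate non-stellar objects (NSOs) as the abstract describes:
    keep tracks seen in more than min_frames consecutive images whose
    apparent angular rate (arcsec/s) lies in [min_rate, max_rate].

    Each track is assumed to be a dict with 'n_frames' and 'rate' keys;
    that representation is an illustrative assumption.
    """
    return [t for t in tracks
            if t["n_frames"] > min_frames and min_rate <= t["rate"] <= max_rate]
```

The lower rate bound rejects catalog stars (which barely move in the frame), while the upper bound rejects streaks too fast to track reliably.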
Figure-ground segregation by motion contrast and by luminance contrast.
Regan, D; Beverley, K I
1984-05-01
Some naturally camouflaged objects are invisible unless they move; their boundaries are then defined by motion contrast between object and background. We compared the visual detection of such camouflaged objects with the detection of objects whose boundaries were defined by luminance contrast. The summation field area is 0.16 deg2 , and the summation time constant is 750 msec for parafoveally viewed objects whose boundaries are defined by motion contrast; these values are, respectively, about 5 and 12 times larger than the corresponding values for objects defined by luminance contrast. The log detection threshold is proportional to the eccentricity for a camouflaged object of constant area. The effect of eccentricity on threshold is less for large objects than for small objects. The log summation field diameter for detecting camouflaged objects is roughly proportional to the eccentricity, increasing to about 20 deg at 32-deg eccentricity. In contrast to the 100:1 increase of summation area for detecting camouflaged objects, the temporal summation time constant changes by only 40% between eccentricities of 0 and 16 deg.
Zielinski, Ingar Marie; Steenbergen, Bert; Schmidt, Anna; Klingels, Katrijn; Simon Martinez, Cristina; de Water, Pascal; Hoare, Brian
2018-03-23
To introduce the Windmill-task, a new objective assessment tool to quantify the presence of mirror movements (MMs) in children with unilateral cerebral palsy (UCP), which are typically assessed with the observation-based Woods and Teuber scale (W&T). Prospective, observational, cohort pilot study. Children's hospital. Prospective cohort of children (N=23) with UCP (age range, 6-15y, mean age, 10.5±2.7y). Not applicable. The concurrent validity of the Windmill-task is assessed, and the sensitivity and specificity for MM detection are compared between both assessments. To assess the concurrent validity, Windmill-task data are compared with W&T data using Spearman rank correlations (ρ) for 2 conditions: affected hand moving vs less affected hand moving. Sensitivity and specificity are compared by measuring the mean percentage of children being assessed inconsistently across both assessments. Outcomes of both assessments correlated significantly (affected hand moving: ρ=.520; P=.005; less affected hand moving: ρ=.488; P=.009). However, many children displayed MMs on the Windmill-task, but not on the W&T (sensitivity: affected hand moving: 27.5%; less affected hand moving: 40.6%). Only 2 children displayed MMs on the W&T, but not on the Windmill-task (specificity: affected hand moving: 2.9%; less affected hand moving: 1.4%). The Windmill-task seems to be a valid tool to assess MMs in children with UCP and has an additional advantage of sensitivity to detect MMs. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
ALLFlight: detection of moving objects in IR and ladar images
NASA Astrophysics Data System (ADS)
Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven
2013-05-01
Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) for gathering different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information to get one single comprehensive description of the outside situation. While both TV and IR cameras deliver images with frame rates of 25 Hz or 30 Hz, Ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or Ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate a re-planning of the helicopter's mission in a timely manner. Applying feature extraction algorithms to IR images, in combination with algorithms that fuse the extracted features with Ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and Ladar data.
Beanland, Vanessa; Filtness, Ashleigh J; Jeans, Rhiannon
2017-03-01
The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully-licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to "looked-but-failed-to-see" errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes the effect of safety relevance has marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians). Copyright © 2017 Elsevier Ltd. All rights reserved.
Observations of interplanetary dust by the Juno magnetometer investigation
NASA Astrophysics Data System (ADS)
Benn, M.; Jorgensen, J. L.; Denver, T.; Brauer, P.; Jorgensen, P. S.; Andersen, A. C.; Connerney, J. E. P.; Oliversen, R.; Bolton, S. J.; Levin, S.
2017-05-01
One of the Juno magnetometer investigation's star cameras was configured to search for unidentified objects during Juno's transit en route to Jupiter. This camera detects and registers luminous objects to magnitude 8. Objects persisting in more than five consecutive images and moving with an apparent angular rate of between 2 and 18,000 arcsec/s were recorded. Among the objects detected were a small group of objects tracked briefly in close proximity to the spacecraft. The trajectory of these objects demonstrates that they originated on the Juno spacecraft, evidently excavated by micrometeoroid impacts on the solar arrays. The majority of detections occurred just prior to and shortly after Juno's transit of the asteroid belt. This rather novel detection technique utilizes the Juno spacecraft's prodigious 60 m2 of solar array as a dust detector and provides valuable information on the distribution and motion of interplanetary (>μm sized) dust.
Detection and imaging of moving objects with SAR by a joint space-time-frequency processing
NASA Astrophysics Data System (ADS)
Barbarossa, Sergio; Farina, Alfonso
This paper proposes a joint space-time-frequency processing scheme for the detection and imaging of moving targets by synthetic aperture radar (SAR). The method is based on the availability of an array antenna. The signals received by the array elements are combined, in a space-time processor, to cancel the clutter. They are then analyzed in the time-frequency domain, by computing their Wigner-Ville distribution (WVD), in order to estimate the instantaneous frequency, which is used for the subsequent phase compensation necessary to produce a high-resolution image.
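The WVD ridge-tracking idea above can be sketched numerically. The following is a minimal illustration, not the authors' SAR processor: a discrete pseudo Wigner-Ville distribution of a synthetic linear chirp, whose per-time spectral peak (the ridge) recovers the instantaneous frequency used for phase compensation. All signal parameters here are invented for the example.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of an analytic signal.
    Row n holds the spectrum at time n; bin k maps to frequency k/(2N)
    in cycles/sample (the WVD frequency axis is half the FFT axis)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        M = min(n, N - 1 - n)          # largest symmetric lag available
        r = np.zeros(N, dtype=complex)
        for m in range(-M, M + 1):
            r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real      # real by conjugate symmetry of r
    return W

# Analytic linear chirp sweeping from 0.05 to 0.20 cycles/sample.
N = 128
t = np.arange(N)
f0, f1 = 0.05, 0.20
x = np.exp(2j * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * N)))

W = wigner_ville(x)
# Ridge of the WVD = instantaneous-frequency estimate at each time step,
# the quantity needed for the subsequent phase compensation.
f_est = np.argmax(W[:, :N // 2], axis=1) / (2 * N)
err_mid = abs(f_est[N // 2] - (f0 + (f1 - f0) * 0.5))  # truth at midpoint
```

For a single linear chirp the WVD concentrates exactly on the instantaneous-frequency line, which is why the simple argmax works; multi-component SAR returns would additionally require cross-term suppression.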
Binary Detection using Multi-Hypothesis Log-Likelihood, Image Processing
2014-03-27
geosynchronous orbit and other scenarios important to the USAF. 1.3 Research objectives: The question posed in this thesis is how well, if at all, can a...is important to compare them to another modern technique. The third objective is to compare results from another image detection method, specifically...Although adaptive optics is an important technique in moving closer to diffraction limited imaging, it is not currently a practical solution for all
Using a CO2 laser for PIR-detector spoofing
NASA Astrophysics Data System (ADS)
Schleijpen, Ric H. M. A.; van Putten, Frank J. M.
2016-10-01
This paper presents experimental work on the use of a CO2 laser for triggering PIR sensors. Pyro-electric InfraRed (PIR) sensors are often used as motion detectors for detection of moving persons or objects that are warmer than their environment. Apart from uses in the civilian domain, applications in improvised weapons have also been encountered. In such applications the PIR sensor triggers a weapon when moving persons or vehicles are detected. A CO2 laser can be used to project a moving heat spot in front of the PIR, generating the same triggering effect as a real moving object. The goal of the research was to provide a basis for assessing the feasibility of using a CO2 laser as a countermeasure against PIR sensors. After a general introduction of the PIR sensing principle, a theoretical and experimental analysis of the required power levels is presented. Based on this quantitative analysis, a setup for indoor experiments to trigger the PIR devices remotely with a CO2 laser was prepared. Finally, some selected results of the experiments are presented, and implications for the use as a countermeasure are discussed.
Lidar-based door and stair detection from a mobile robot
NASA Astrophysics Data System (ADS)
Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet
2010-04-01
We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, minimizing memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.
CCD Detects Two Images In Quick Succession
NASA Technical Reports Server (NTRS)
Janesick, James R.; Collins, Andy
1996-01-01
Prototype special-purpose charge-coupled device (CCD) designed to detect two 1,024 x 1,024-pixel images in rapid succession. Readout performed slowly to minimize noise. CCD operated in synchronism with pulsed laser, stroboscope, or other pulsed source of light to form pairs of images of rapidly moving objects.
Spatial Updating According to a Fixed Reference Direction of a Briefly Viewed Layout
ERIC Educational Resources Information Center
Zhang, Hui; Mou, Weimin; McNamara, Timothy P.
2011-01-01
Three experiments examined the role of reference directions in spatial updating. Participants briefly viewed an array of five objects. A non-egocentric reference direction was primed by placing a stick under two objects in the array at the time of learning. After a short interval, participants detected which object had been moved at a novel view…
A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field
Gao, Xiang; Yan, Shenggang; Li, Bin
2017-01-01
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the localization of moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing moving objects with an alternating magnetic field is transformed into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
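As a rough sketch of the static-field localization step, the following fits a dipole position with a hand-rolled Levenberg-Marquardt loop, mirroring the paper's use of L-M. The sensor layout, dipole moment, and unit scaling are hypothetical, and the constant μ0/4π is folded out for a well-scaled toy problem.

```python
import numpy as np

def dipole_field(p, m, s):
    """Dipole field at sensor position s from moment m at position p.
    The constant mu0/(4*pi) is folded out (arbitrary field units)."""
    r = s - p
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * rhat * np.dot(m, rhat) - m) / d**3

def residuals(p, m, sensors, B_meas):
    return np.concatenate([dipole_field(p, m, s) - b
                           for s, b in zip(sensors, B_meas)])

def levenberg_marquardt(p0, m, sensors, B_meas, iters=100, lam=1e-3):
    """Minimal L-M loop with a forward-difference Jacobian over position."""
    p = np.asarray(p0, dtype=float)
    r = residuals(p, m, sensors, B_meas)
    for _ in range(iters):
        J = np.zeros((r.size, 3))
        h = 1e-6
        for j in range(3):
            dp = np.zeros(3)
            dp[j] = h
            J[:, j] = (residuals(p + dp, m, sensors, B_meas) - r) / h
        step = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        r_new = residuals(p + step, m, sensors, B_meas)
        if r_new @ r_new < r @ r:
            p, r, lam = p + step, r_new, lam * 0.5  # accept: relax damping
        else:
            lam *= 10.0                             # reject: damp harder
    return p

# Hypothetical setup: dipole at p_true observed from 5 known positions.
p_true = np.array([2.0, -1.0, 0.5])
m = np.array([0.0, 0.0, 5.0])          # dipole moment assumed known
sensors = [np.array([x, y, 0.0]) for x, y in
           [(0, 0), (1, 0), (0, 1), (-1, 0), (0, -1)]]
B_meas = [dipole_field(p_true, m, s) for s in sensors]

p_hat = levenberg_marquardt([1.0, 1.0, 1.0], m, sensors, B_meas)
err = np.linalg.norm(p_hat - p_true)
```

The paper's coherent demodulation would first reduce the alternating-field measurements to an equivalent static-field problem; this sketch starts from that reduced problem.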
NASA Astrophysics Data System (ADS)
Puckett, Andrew W.
2007-08-01
I have compiled the Slow-Moving Object Catalog of Known minor planets and comets ("the SMOCK") by comparing the predicted positions of known bodies with those of sources detected by the Sloan Digital Sky Survey (SDSS) that lack positional counterparts at other survey epochs. For the ~50% of the SDSS footprint that has been imaged only once, I have used the Astrophysical Research Consortium's 3.5-meter telescope to obtain reference images for confirmation of Solar System membership. The SMOCK search effort includes all known objects with orbital semimajor axes a > 4.7 AU, as well as a comparison sample of inherently bright Main Belt asteroids. In fact, objects of all proper motions are included, resulting in substantial overlap with the SDSS Moving Object Catalog (MOC) and providing an important check on the inclusion criteria of both catalogs. The MOC does not contain any correctly-identified known objects with a > 12 AU, and also excludes a number of detections of Main Belt and Trojan asteroids that happen to be moving slowly as they enter or leave retrograde motion. The SMOCK catalog is a publicly-available product of this investigation. Having created this new database, I demonstrate some of its applications. The broad dispersion of color indices for transneptunian objects (TNOs) and Centaurs is confirmed, and their tight correlation in ( g - r ) vs ( r - i ) is explored. Repeat observations for more than 30 of these objects allow me to reject the collisional resurfacing scenario as the primary explanation for this broad variety of colors. Trojans with large orbital inclinations are found to have systematically redder colors than their low-inclination counterparts, but an excess of reddish low-inclination objects at L5 is identified. Next, I confirm that non-Plutino TNOs are redder with increasing perihelion distance, and that this effect is even more pronounced among the Classical TNOs. 
Finally, I take advantage of the byproducts of my search technique and attempt to recover objects with poorly-known orbits. I have drastically improved the current and future ephemeris uncertainties of 3 Trojan asteroids, and have increased by 20%-450% the observed arcs of 10 additional bodies.
Detecting Lateral Motion using Light's Orbital Angular Momentum.
Cvijetic, Neda; Milione, Giovanni; Ip, Ezra; Wang, Ting
2015-10-23
Interrogating an object with a light beam and analyzing the scattered light can reveal kinematic information about the object, which is vital for applications ranging from autonomous vehicles to gesture recognition and virtual reality. We show that by analyzing the change in the orbital angular momentum (OAM) of a tilted light beam eclipsed by a moving object, lateral motion of the object can be detected in an arbitrary direction using a single light beam and without object image reconstruction. We observe OAM spectral asymmetry that corresponds to the lateral motion direction along an arbitrary axis perpendicular to the plane containing the light beam and OAM measurement axes. These findings extend OAM-based remote sensing to detection of non-rotational qualities of objects and may also have extensions to other electromagnetic wave regimes, including radio and sound.
A sparse representation-based approach for copy-move image forgery detection in smooth regions
NASA Astrophysics Data System (ADS)
Abdessamad, Jalila; ElAdel, Asma; Zaied, Mourad
2017-03-01
Copy-move image forgery is the act of cloning a restricted region in an image and pasting it once or multiple times within that same image. This procedure intends to cover a certain feature, probably a person or an object, in the processed image, or to emphasize it through duplication. Consequences of this malicious operation can be unexpectedly harmful. Hence, the present paper proposes a new approach that automatically detects copy-move forgery (CMF). In particular, this work addresses a common open issue in the CMF research literature: detecting CMF within smooth areas. Indeed, the proposed approach represents image blocks as a sparse linear combination of pre-learned bases (a mixture of texture- and color-wise small patches), which allows a robust description of smooth patches. The reported experimental results demonstrate the effectiveness of the proposed approach in identifying the forged regions in CM attacks.
NASA Astrophysics Data System (ADS)
Barbarossa, S.; Farina, A.
A novel scheme for detecting moving targets with synthetic aperture radar (SAR) is presented. The proposed approach is based on the use of the Wigner-Ville distribution (WVD) for simultaneously detecting moving targets and estimating their motion kinematic parameters. The estimation plays a key role for focusing the target and correctly locating it with respect to the stationary background. The method has a number of advantages: (i) the detection is efficiently performed on the samples in the time-frequency domain provided by the WVD, without resorting to a bank of filters, each one matched to possible values of the unknown target motion parameters; (ii) the estimation of the target motion parameters can be done in the same time-frequency domain by locating the line where the maximum energy of the WVD is concentrated. A validation of the approach is given by both analytical and simulation means. In addition, the estimation of the target kinematic parameters and the corresponding image focusing are also demonstrated.
Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.
Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N
2016-01-01
Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
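The AOI gap tolerance (AGT) idea above can be illustrated with a minimal sketch (rectangular AOIs only; not the authors' implementation, and all coordinates, ids, and the AGT value are hypothetical):

```python
def rect_aoi(elements, agt):
    """Bounding box over a multielement object's screen points, inflated
    by the AOI gap tolerance (AGT) to absorb eye-tracker angle error."""
    xs, ys = zip(*elements)
    return (min(xs) - agt, min(ys) - agt, max(xs) + agt, max(ys) + agt)

def map_fixation(fix, aois):
    """Ids of every AOI containing the fixation; more than one id means
    the inflated AOIs overlap and the fixation is ambiguous."""
    x, y = fix
    return [oid for oid, (x0, y0, x1, y1) in aois.items()
            if x0 <= x <= x1 and y0 <= y <= y1]

# Hypothetical radar frame: two aircraft, each a symbol plus a data block.
frame = {"AC1": [(100, 100), (120, 112)],
         "AC2": [(160, 100), (180, 112)]}
agt = 25  # large AGT: better error tolerance, but AOIs start to overlap
aois = {oid: rect_aoi(els, agt) for oid, els in frame.items()}

hits_clear = map_fixation((110, 105), aois)    # well inside AC1
hits_overlap = map_fixation((140, 106), aois)  # between the two aircraft
```

Recomputing the AOIs from each frame's element positions makes them dynamic, and sweeping `agt` over a range is one way to search for the near-optimal value the abstract mentions.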
Tracking moving targets behind a scattering medium via speckle correlation.
Guo, Chengfei; Liu, Jietao; Wu, Tengfei; Zhu, Lei; Shao, Xiaopeng
2018-02-01
Tracking moving targets behind a scattering medium is a challenge, and it has many important applications in various fields. Owing to multiple scattering, instead of the object image, only a random speckle pattern can be received on the camera when light passes through highly scattering layers. Significantly, an important feature of speckle patterns has been identified: the target information can be derived from the speckle correlation. In this work, inspired by notions used in computer vision and deformation detection, we demonstrate through specific simulations and experiments a simple object tracking method in which, by using the speckle correlation, the movement of a hidden object can be tracked in the lateral and axial directions. In addition, the rotation state of the moving target can be recognized by utilizing the autocorrelation of the speckle pattern. This work will be beneficial for biomedical applications in the quantitative analysis of the working mechanisms of a micro-object and the acquisition of dynamical information of micro-object motion.
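The lateral-tracking step reduces to finding the displacement between successive speckle frames. A minimal sketch with synthetic data (a random pattern standing in for a real speckle, shifted to mimic the memory-effect response to a small object move):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two speckle-like frames: the second is the first shifted by (dy, dx).
frame0 = rng.random((128, 128))
true_shift = (5, -8)
frame1 = np.roll(frame0, true_shift, axis=(0, 1))

# FFT cross-correlation; the correlation peak gives the displacement.
F0 = np.fft.fft2(frame0 - frame0.mean())
F1 = np.fft.fft2(frame1 - frame1.mean())
xcorr = np.fft.ifft2(F1 * np.conj(F0)).real

peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
shift = tuple(int(p) if p < s // 2 else int(p) - s   # wrap to signed shifts
              for p, s in zip(peak, xcorr.shape))
```

Real speckle frames decorrelate partially between exposures, so the peak is broader and noisier than in this toy case, but the argmax of the cross-correlation still tracks the lateral motion within the memory-effect range.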
Human recognition based on head-shoulder contour extraction and BP neural network
NASA Astrophysics Data System (ADS)
Kong, Xiao-fang; Wang, Xiu-qin; Gu, Guohua; Chen, Qian; Qian, Wei-xian
2014-11-01
In practical application scenarios like video surveillance and human-computer interaction, human body movements are uncertain because the human body is a non-rigid object. Based on the fact that the head-shoulder part of the human body is less affected by movement and is seldom obscured by other objects, in human detection and recognition a head-shoulder model with its stable characteristics can be applied as a detection feature to describe the human body. In order to extract the head-shoulder contour accurately, a method for establishing a head-shoulder model, combining edge detection with mean-shift image clustering, is proposed in this paper. First, an adaptive mixture-Gaussian background updating method is used to extract targets from the video sequence. Second, edge detection is used to extract the contour of moving objects, and the mean-shift algorithm is combined to cluster parts of the target's contour. Third, the head-shoulder model can be established according to the width-to-height ratio of the human head-shoulder, combined with the projection histogram of the binary image, and the eigenvectors of the head-shoulder contour can be acquired. Finally, the relationship between head-shoulder contour eigenvectors and the moving objects is formed by training a back-propagation (BP) neural network classifier, and the human head-shoulder model can be clustered for human detection and recognition. Experiments have shown that the proposed method combining edge detection and the mean-shift algorithm can extract the complete head-shoulder contour, with low computational complexity and high efficiency.
Person detection and tracking with a 360° lidar system
NASA Astrophysics Data System (ADS)
Hammer, Marcus; Hebel, Marcus; Arens, Michael
2017-10-01
Today it is easily possible to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance in automatic collision avoidance, for mobile sensor platforms or surveillance tasks. In the literature there are several approaches for automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often crucial. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, for most applications object detection and classification in real time is needed. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.
Cyber-Physical Attacks With Control Objectives
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-08-18
This study considers attackers with control objectives against cyber-physical systems (CPSs). The goal of the attacker is to counteract the CPS's controller and move the system to a target state while evading detection. We formulate a cost function that reflects the attacker's goals, and, using dynamic programming, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. By changing the parameters of the cost function, we show how an attacker can design optimal attacks to balance the control objective and the detection-avoidance objective. In conclusion, we provide a numerical illustration based on a remotely controlled helicopter under attack.
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and so on. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. On the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model establishment, and search-and-match against a moving spherical target, the Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results, tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories, show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
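The Kalman-filter half of the comparison above follows the textbook predict/update cycle. A minimal sketch, not the paper's implementation: a constant-velocity model tracking a target from noisy position fixes, with all noise parameters and the synthetic track invented for the example.

```python
import numpy as np

dt = 0.1
# Constant-velocity model: state [x, y, vx, vy], position-only measurement.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 1e-3 * np.eye(4)   # process noise covariance (assumed)
R = 1e-2 * np.eye(2)   # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    x = F @ x                      # predict state forward one step
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S) # Kalman gain
    x = x + K @ (z - H @ x)        # correct with position measurement z
    P = (np.eye(4) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
truth = lambda k: np.array([0.5 * k * dt, -0.2 * k * dt])  # straight track
x, P = np.zeros(4), np.eye(4)
for k in range(1, 100):
    z = truth(k) + 0.05 * rng.standard_normal(2)   # noisy position fix
    x, P = kalman_step(x, P, z)

pos_err = np.linalg.norm(x[:2] - truth(99))
speed = np.hypot(x[2], x[3])       # true speed is about 0.539
```

The closed-form gain update is what makes this filter so cheap, matching the paper's efficiency finding; a particle filter replaces the Gaussian assumption with a weighted sample set at far higher cost.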
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual beam interferometer device is disclosed that enables moving an optics module in a direction, which changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.
An Approach to Extract Moving Objects from Mls Data Using a Volumetric Background Representation
NASA Astrophysics Data System (ADS)
Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.
2017-05-01
Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
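The volumetric separation above can be caricatured with a much simpler occupancy-counting sketch (a plain voxel grid stands in for the probabilistic octree; scan data, voxel size, and the 0.8 threshold are invented for the example):

```python
import numpy as np
from collections import defaultdict

def voxel_key(p, res=0.5):
    """Index of the voxel (here a plain grid, standing in for the octree)
    that contains point p."""
    return tuple(int(v) for v in np.floor(np.asarray(p) / res))

# Toy scans: one wall point seen in every scan, one car point that moves.
scans = [
    [(0.1, 0.1, 0.1), (3.0, 1.0, 0.2)],
    [(0.1, 0.1, 0.1), (4.0, 1.0, 0.2)],
    [(0.1, 0.1, 0.1), (5.0, 1.0, 0.2)],
]

# Occupancy counting: voxels occupied in most scans are static background;
# voxels occupied only briefly belong to moving objects.
counts = defaultdict(int)
for scan in scans:
    for p in scan:
        counts[voxel_key(p)] += 1

static = {v for v, c in counts.items() if c / len(scans) >= 0.8}
moving_pts = [p for scan in scans for p in scan
              if voxel_key(p) not in static]
```

The paper's probabilistic representation additionally exploits free-space evidence along each ray and an octree hierarchy, which is what makes the resolution/accuracy trade-off (87-95%) tunable.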
Velocity measurement by vibro-acoustic Doppler.
Nabavizadeh, Alireza; Urban, Matthew W; Kinnick, Randall R; Fatemi, Mostafa
2012-04-01
We describe the theoretical principles of a new Doppler method which uses the acoustic response of a moving object to a highly localized dynamic radiation force of the ultrasound field to calculate the velocity of the moving object from the Doppler frequency shift. This method, named vibro-acoustic Doppler (VAD), employs two ultrasound beams separated by a slight frequency difference, Δf, transmitting in an X-focal configuration. Both ultrasound beams experience a frequency shift caused by the moving object, and their interaction at the joint focal zone produces an acoustic frequency shift occurring around the low-frequency (Δf) acoustic emission signal. The acoustic emission field resulting from the vibration of the moving object is detected and used to calculate its velocity. We report the formula that describes the relation between the Doppler frequency shift of the emitted acoustic field and the velocity of the moving object. To verify the theory, we used a string phantom. We also tested our method by measuring fluid velocity in a tube. The results show that the error calculated for both string and fluid velocities is less than 9.1%. Our theory shows that in the worst case the error is 0.54% for a 25° angle variation with the VAD method, compared with an error of -82.6% for a 25° angle variation with a conventional continuous-wave Doppler method. An advantage of this method is that, unlike conventional Doppler, it is not sensitive to the angle between the ultrasound beams and the direction of motion.
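For context, the conventional continuous-wave Doppler relation that the VAD method is compared against (not the paper's VAD formula) can be worked through in a few lines. The carrier frequency, speed, and medium sound speed below are illustrative values only.

```python
# Conventional CW Doppler: a scatterer moving with speed v at angle theta
# to the beam shifts the carrier f0 by f_d = 2 * f0 * v * cos(theta) / c.
import math

def doppler_shift(v, f0, theta_deg, c=1540.0):
    """Doppler shift (Hz) for speed v (m/s) in a tissue-like medium."""
    return 2.0 * f0 * v * math.cos(math.radians(theta_deg)) / c

def velocity_from_shift(f_d, f0, theta_deg, c=1540.0):
    """Invert the relation to recover speed from a measured shift."""
    return f_d * c / (2.0 * f0 * math.cos(math.radians(theta_deg)))

f0 = 3.0e6                       # 3 MHz carrier (illustrative)
v = 0.5                          # 0.5 m/s target speed
fd = doppler_shift(v, f0, 0.0)   # shift at head-on insonation
v_back = velocity_from_shift(fd, f0, 0.0)

# Angle sensitivity: misjudging the beam angle by 25 degrees biases the
# conventional estimate, illustrating the angle dependence that VAD avoids.
v_biased = velocity_from_shift(fd, f0, 25.0)
bias_pct = 100.0 * (v_biased - v) / v
```

The size of the bias depends strongly on the nominal geometry (it explodes as the beam approaches 90° to the motion, consistent with the large conventional-Doppler error the abstract reports); the VAD formula in the paper removes this cos(θ) dependence.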
NASA Astrophysics Data System (ADS)
Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian
2015-09-01
As an important branch of infrared imaging technology, infrared target tracking and detection has very important scientific value and a wide range of applications in both military and civilian areas. For infrared images, which are characterized by low SNR and serious disturbance from background noise, an innovative and effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of the moving target and the irrelevance of noise in sequential images, based on OpenCV. First, since temporal differencing and background subtraction are very complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. With a background updating mechanism that continuously updates each pixel, we can detect the infrared moving target more accurately. It paves the way for eventually realizing real-time infrared target detection and tracking once the OpenCV-based algorithms are transplanted to a DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image, transforming the gray images into binary images to provide a better basis for detection in the image sequences. Finally, according to the relevance of moving objects between different frames and mathematical morphology processing, we can eliminate noise, decrease the area, and smooth region boundaries. Experimental results prove that our algorithm precisely achieves rapid detection of small infrared targets.
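The complementarity of the two cues can be shown in a minimal NumPy sketch (not the paper's pipeline: a median background stands in for the adaptive background model, and the two cues are intersected to suppress ghosts; the synthetic target and thresholds are invented):

```python
import numpy as np

def detect_moving(frames, tau=25):
    """Combined frame-difference / background-subtraction detector.
    A median background stands in for the adaptive background model;
    intersecting the two cues keeps only the current object position."""
    stack = np.stack([f.astype(float) for f in frames])
    bg = np.median(stack, axis=0)            # static background estimate
    masks = []
    for k in range(1, len(frames)):
        diff_frame = np.abs(stack[k] - stack[k - 1]) > tau  # changing pixels
        diff_bg = np.abs(stack[k] - bg) > tau               # foreground pixels
        # frame difference also fires on the *previous* object position
        # (a ghost); ANDing with background subtraction removes it.
        masks.append(diff_frame & diff_bg)
    return masks

# Synthetic sequence: a bright 8x8 target sweeping across a dark scene.
frames = []
for k in range(5):
    img = np.full((64, 64), 20, dtype=np.uint8)
    img[30:38, 10 + 8 * k: 18 + 8 * k] = 200
    frames.append(img)

masks = detect_moving(frames)
n_moving = int(masks[-1].sum())  # only the current 8x8 position remains
```

In a real pipeline the OR of the cues, followed by the image repair and morphological steps the paper describes, recovers interior cavities that this AND-only toy deliberately ignores.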
Assessing the performance of a motion tracking system based on optical joint transform correlation
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.
2015-08-01
We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with adapted pre-processing of the input plane and post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC) system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
Modeling and Simulation Architecture for Studying Doppler-Based Radar with Complex Environments
2009-03-26
structures can interfere with a radar’s ability to detect moving aircraft because radar returns from turbines are comparable to those from slow flying...Netherlands Organisation for Applied Scientific Research...EM Electromagnetic...MTI Moving Target Indicator...ensure the turbine won’t interact with the radar. However, (2.3) doesn’t account for terrain masking or shadowing. If there is a tall object or terrain
CCD Systems for Searching for Near-Earth Asteroids
NASA Technical Reports Server (NTRS)
Harris, A.
1994-01-01
Large-format CCD systems are superior to photographic systems in terms of quantum efficiency, and they yield digital output directly, which can be computer-analyzed to detect moving objects and to obtain astrometric measurements.
Long-term scale adaptive tracking with kernel correlation filters
NASA Astrophysics Data System (ADS)
Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui
2018-04-01
Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shao, Michael; Nemati, Bijan; Zhai, Chengxing
We present an approach that significantly increases the sensitivity for finding and tracking small and fast near-Earth asteroids (NEAs). This approach relies on a combined use of a new generation of high-speed cameras which allow short, high frame-rate exposures of moving objects, effectively 'freezing' their motion, and a computationally enhanced implementation of the 'shift-and-add' data processing technique that helps to improve the signal-to-noise ratio (SNR) for detection of NEAs. The SNR of a single short exposure of a dim NEA is insufficient to detect it in one frame, but by computationally searching for an appropriate velocity vector, shifting successive frames relative to each other and then co-adding the shifted frames in post-processing, we synthetically create a long-exposure image as if the telescope were tracking the object. This approach, which we call 'synthetic tracking,' enhances the familiar shift-and-add technique with the ability to do a wide blind search, detect, and track dim and fast-moving NEAs in near real time. We also discuss how synthetic tracking improves the astrometry of fast-moving NEAs. We apply this technique to observations of two known asteroids conducted on the Palomar 200 inch telescope and demonstrate improved SNR and a 10-fold improvement of astrometric precision over the traditional long-exposure approach. In the past 5 yr, about 150 NEAs with absolute magnitudes H = 28 (∼10 m in size) or fainter have been discovered. With an upgraded version of our camera and a field of view of (28 arcmin)² on the Palomar 200 inch telescope, synthetic tracking could allow detecting up to 180 such objects per night, including very small NEAs with sizes down to 7 m.
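The shift-and-add search at the heart of synthetic tracking can be sketched in a few lines. This toy example (all names and data are hypothetical, not from the paper) shifts each frame by a trial velocity and co-adds them; the trial velocity that maximizes the peak of the stack recovers the object's motion:

```python
# Illustrative sketch of "synthetic tracking": short exposures are
# shifted by a trial velocity and co-added, so a moving point source
# stacks coherently only at the correct velocity.

def shift_and_add(frames, vx, vy):
    """Co-add frames, sampling frame k at pixels shifted by (k*vx, k*vy)."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for k, frame in enumerate(frames):
        dx, dy = round(k * vx), round(k * vy)
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx   # source pixel in frame k
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] += frame[sy][sx]
    return out

def best_velocity(frames, trial_velocities):
    """Blind search: the velocity maximizing the stacked peak wins."""
    def peak(img):
        return max(max(row) for row in img)
    return max(trial_velocities, key=lambda v: peak(shift_and_add(frames, *v)))

# Toy data: a unit-brightness object starts at (row 2, col 2) and moves
# one pixel per frame in x; each frame holds a single bright pixel.
frames = []
for k in range(5):
    f = [[0.0] * 10 for _ in range(10)]
    f[2][2 + k] = 1.0
    frames.append(f)

v = best_velocity(frames, [(0, 0), (1, 0), (0, 1), (-1, 0)])  # -> (1, 0)
```

At the correct velocity the five frames stack to a peak of 5, while every wrong trial leaves a peak of 1, which is the SNR gain the abstract describes.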
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)
2001-01-01
A method is provided for controlling two objects relatively moveable with respect to each other. A plurality of receivers are provided for detecting a distinctive microwave signal from each of the objects and measuring the phase thereof with respect to a reference signal. The measured phase signal is used to determine a distance between each of the objects and each of the plurality of receivers. Control signals produced in response to the relative distances are used to control the position of the two objects.
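The phase-to-distance relation implied by this method can be illustrated with a minimal sketch. The formula d = (φ/2π + N)·λ and all values below are assumptions for illustration; the whole-cycle count N is ambiguous from a single phase measurement and must be resolved by coarser information, which the multi-receiver geometry presumably provides:

```python
import math

C = 299_792_458.0                      # speed of light, m/s

def distance_from_phase(phase_rad, freq_hz, n_cycles):
    """d = (phi / (2*pi) + N) * wavelength; N is the whole-cycle count."""
    wavelength = C / freq_hz
    return (phase_rad / (2 * math.pi) + n_cycles) * wavelength

# Hypothetical example: a 10 GHz tone (wavelength ~3 cm), measured
# phase pi/2 relative to the reference, 100 whole cycles assumed known.
d = distance_from_phase(math.pi / 2, 10e9, 100)   # ~3.005 m
```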
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. Localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications to object shapes, thresholding levels and decision parameters as a function of robot speed.
NASA Technical Reports Server (NTRS)
Tescher, Andrew G. (Editor)
1989-01-01
Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.
Interaction of compass sensing and object-motion detection in the locust central complex.
Bockhorst, Tobias; Homberg, Uwe
2017-07-01
Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal's steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. 
A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers. Copyright © 2017 the American Physiological Society.
Near-Earth Asteroid Tracking (NEAT): First Year Results
NASA Astrophysics Data System (ADS)
Helin, E. F.; Rabinowitz, D. L.; Pravdo, S. H.; Lawrence, K. J.
1997-07-01
The successful detection of Near-Earth Asteroids (NEAs) has been demonstrated by the Near-Earth Asteroid Tracking (NEAT) program at the Jet Propulsion Laboratory during its first year of operation. The NEAT CCD camera system is installed on the U.S. Air Force 1-m GEODSS telescope in Maui. Using state-of-the-art software and hardware, the system executes an observing script transmitted nightly from JPL, moves the telescope for successive exposures of the selected fields, detects moving objects as faint as V=20.5 in 40 s exposures, determines their astrometric positions, and downloads the data for review at JPL in the morning. The NEAT system is detecting NEAs larger than 200 m, comets, and other unique objects at a rate competitive with currently operating systems, and bright enough for important physical studies on moderate-sized telescopes. NEAT has detected over 10,000 asteroids over a wide range of magnitudes, demonstrating the excellent capability of the NEAT system. Fifty-five percent of the detections are new objects, and over 900 of them have been followed on a second night to receive designations from the Minor Planet Center. 14 NEAs (9 Amors, 4 Apollos, and 1 Aten) have been discovered since March 1996, along with 2 long-period comets and 1996 PW, an asteroidal object with the orbit of a long-period comet (eccentricity 0.992, orbital period 5900 years). Program discoveries will be reviewed along with analysis of results pertaining to the discovery efficiency, distribution on the sky, and range of orbits and magnitudes. Related abstract: Lawrence, K., et al., 1997 DPS
Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery
NASA Astrophysics Data System (ADS)
Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.
2013-05-01
It is becoming increasingly evident that intelligent systems are very beneficial for society and that the further development of such systems is necessary to continue to improve society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. To segment moving objects, only regions where movement is occurring are analyzed, ignoring the influence of pixels from regions where there is no movement. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories, persons and vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified, it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of object(s) theft from inside a parked vehicle at spot X by a person", and results show that the use of flow detection increases the recognition of this suspicious event from 78.57% to 92.85%.
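The adaptive background-subtraction step described above might look roughly like the following sketch (a simple running-average model; the names, threshold and data are illustrative, not the authors' implementation):

```python
def update_background(background, frame, alpha=0.05, threshold=30):
    """One step of a running-average background model: pixels differing
    from the model by more than `threshold` are foreground; the model
    adapts to slow illumination change only where nothing is moving."""
    mask, new_bg = [], []
    for bg_row, fr_row in zip(background, frame):
        mask_row, bg_out = [], []
        for bg, px in zip(bg_row, fr_row):
            moving = abs(px - bg) > threshold
            mask_row.append(1 if moving else 0)
            # Freeze the model under moving pixels; adapt elsewhere.
            bg_out.append(bg if moving else (1 - alpha) * bg + alpha * px)
        mask.append(mask_row)
        new_bg.append(bg_out)
    return mask, new_bg

# Toy frame: uniform background of 100 with one bright "moving" pixel.
bg = [[100.0] * 4 for _ in range(3)]
frame = [row[:] for row in bg]
frame[1][2] = 200.0
mask, bg2 = update_background(bg, frame)
```

Only the bright pixel is flagged as foreground, and the model there is left untouched so the moving object does not contaminate the background.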
Aerial vehicles collision avoidance using monocular vision
NASA Astrophysics Data System (ADS)
Balashov, Oleg; Muraviev, Vadim; Strotov, Valery
2016-10-01
In this paper image-based collision avoidance algorithm that provides detection of nearby aircraft and distance estimation is presented. The approach requires a vision system with a single moving camera and additional information about carrier's speed and orientation from onboard sensors. The main idea is to create a multi-step approach based on a preliminary detection, regions of interest (ROI) selection, contour segmentation, object matching and localization. The proposed algorithm is able to detect small targets but unlike many other approaches is designed to work with large-scale objects as well. To localize aerial vehicle position the system of equations relating object coordinates in space and observed image is solved. The system solution gives the current position and speed of the detected object in space. Using this information distance and time to collision can be estimated. Experimental research on real video sequences and modeled data is performed. Video database contained different types of aerial vehicles: aircrafts, helicopters, and UAVs. The presented algorithm is able to detect aerial vehicles from several kilometers under regular daylight conditions.
Fan filters, the 3-D Radon transform, and image sequence analysis.
Marzetta, T L
1994-01-01
This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
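The slowness-velocity relation stated above (the plane waves of an object with velocity v have slownesses s satisfying s · v = 1) can be checked numerically. The following hedged sketch, with illustrative function names, also shows why an ideal fan filter with cut-off speed c rejects every slower object: the minimum |s| on the line s · v = 1 is 1/|v|, so any object with |v| < c has all its slownesses beyond the 1/c radius:

```python
import math

def slowness_on_line(v, t):
    """Parameterize the slowness line s.v = 1 for object velocity v:
    s = v/|v|^2 + t * v_perp, where v_perp is orthogonal to v."""
    vx, vy = v
    n2 = vx * vx + vy * vy
    return (vx / n2 - t * vy, vy / n2 + t * vx)

def passes_fan_filter(s, cutoff_speed):
    """Ideal fan filter: keep slowness s only if |s| <= 1/cutoff_speed."""
    return math.hypot(*s) <= 1.0 / cutoff_speed

# Object at speed 2 (velocity (2, 0)) against a cut-off speed of 1:
# its minimum-|s| slowness is (0.5, 0), inside the pass region.
fast_ok = passes_fan_filter(slowness_on_line((2.0, 0.0), 0.0), 1.0)

# Object at speed 0.5: every slowness on its line has |s| >= 2 > 1,
# so the filter rejects it regardless of heading.
slow_rejected = not passes_fan_filter(slowness_on_line((0.5, 0.0), 0.0), 1.0)
```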
Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.
Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon
2009-01-01
Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space. Through numerical approaches, the range was found to be 134% in 2-dimensional space and 143% in 3-dimensional space.
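A minimal 1-D sketch of this k-NN positioning scheme, using the 125%-of-spacing detection range derived in the study (the tag layout and reader position below are toy values):

```python
def detected_tags(reader_x, tag_positions, detection_range):
    """Reference tags within the reader's detection range (no RSSI)."""
    return [x for x in tag_positions if abs(x - reader_x) <= detection_range]

def estimate_position(reader_x, tag_positions, detection_range):
    """k-NN estimate: mean position of the currently detected tags."""
    hits = detected_tags(reader_x, tag_positions, detection_range)
    return sum(hits) / len(hits) if hits else None

# Tags every 1 m; detection range = 125% of the spacing, per the study.
tags = [float(i) for i in range(11)]
est = estimate_position(4.9, tags, detection_range=1.25)   # -> 5.0
```

With the reader at 4.9 m the tags at 4, 5 and 6 m respond, and their centroid (5.0 m) is the position estimate; a shorter range would often leave a single biased tag, a longer one would blur the estimate.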
NASA Technical Reports Server (NTRS)
Whitaker, Ross (Inventor); Turner, D. Clark (Inventor)
2016-01-01
Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The arising information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots, and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part: the reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
Human Location Detection System Using Micro-Electromechanical Sensor for Intelligent Fan
NASA Astrophysics Data System (ADS)
Parnin, S.; Rahman, M. M.
2017-03-01
This paper presents the development of a sensory system for detecting both the presence and the location of humans in a room using a MEMS thermal sensor. The system is able to detect the surface temperature of occupants without contact at a range of up to 6 meters. It can be integrated into any swing-type electrical appliance, such as a standing fan or similar device. Differentiating humans from other moving or static objects by heat alone is difficult, since humans, animals and electrical appliances all produce heat, and uncontrollable heat properties, which can change and transfer, add to the detection problem. Integrating a low-cost MEMS-based thermal sensor solves the first part of the human-sensing problem through its ability to detect a stationary human. Further discrimination and analysis must therefore be applied to the measured temperature data to distinguish humans from other objects. In this project, the fan is designed and programmed so that it can adapt to different events, from the human-sensing stage to its dynamic and mechanical moving parts. The Omron D6T microelectromechanical thermal sensor is currently undergoing several stages of experimental testing. Experimental results of the sensor, tested on stationary and moving humans, show behaviorally distinguishable readings and successfully locate the human position by detecting the maximum temperature of each sensor reading.
Depth-color fusion strategy for 3-D scene modeling with Kinect.
Camplani, Massimo; Mantecon, Tomas; Salgado, Luis
2013-12-01
Low-cost depth cameras, such as Microsoft Kinect, have completely changed the world of human-computer interaction through controller-free gaming applications. Depth data provided by the Kinect sensor presents several noise-related problems that have to be tackled to improve the accuracy of the depth data, thus obtaining more reliable game control platforms and broadening its applicability. In this paper, we present a depth-color fusion strategy for 3-D modeling of indoor scenes with Kinect. Accurate depth and color models of the background elements are iteratively built and used to detect moving objects in the scene. Kinect depth data is processed with an innovative adaptive joint-bilateral filter that efficiently combines depth and color by analyzing an edge-uncertainty map and the detected foreground regions. Results show that the proposed approach efficiently tackles the main Kinect data problems: distance-dependent depth maps, spatial noise, and temporal random fluctuations are dramatically reduced; object depth boundaries are refined; and non-measured depth pixels are interpolated. Moreover, a robust depth and color background model and accurate moving object silhouettes are generated.
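The joint-bilateral idea, smoothing depth with weights drawn from color similarity so that depth edges aligned with color edges survive, can be illustrated in 1-D. This sketch is not the paper's adaptive filter (no edge-uncertainty map or foreground handling); parameters and data are invented for illustration:

```python
import math

def joint_bilateral_1d(depth, color, radius=2, sigma_s=1.0, sigma_c=10.0):
    """Smooth `depth` with Gaussian weights on spatial distance AND on
    similarity in the co-registered `color` signal (joint bilateral)."""
    out = []
    for i in range(len(depth)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(depth), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((color[i] - color[j]) ** 2) / (2 * sigma_c ** 2)))
            num += w * depth[j]
            den += w
        out.append(num / den)
    return out

# Noisy depth with a step at index 4 that coincides with a color edge:
depth = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
color = [10, 10, 10, 10, 200, 200, 200, 200]
smoothed = joint_bilateral_1d(depth, color)
```

Within each color-uniform segment the noise is averaged out, while across the color edge the color term drives the weight to nearly zero, so the depth discontinuity is preserved instead of blurred.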
A method for real time detecting of non-uniform magnetic field
NASA Astrophysics Data System (ADS)
Marusenkov, Andriy
2015-04-01
The principle of measuring magnetic signatures for observing diverse objects is widely used in near-surface work (unexploded ordnance (UXO), engineering and environmental surveys, archaeology) as well as in security and vehicle detection systems. As a rule, the magnitude of the signals to be measured is much lower than that of the quasi-uniform Earth magnetic field. Usually magnetometers for these purposes contain two or more spatially separated sensors to estimate the full tensor gradient of the magnetic field or, more frequently, only partial gradient components. Both types of magnetic sensor (scalar and vector) can be used. The identity of the scale factors and proper alignment of the sensitivity axes of the vector sensors are very important for deep suppression of the ambient field and detection of weak target signals; a periodical calibration procedure is normally used to keep the sensors' parameters matched as closely as possible. In the present report we propose a technique for detecting magnetic anomalies which is almost insensitive to imperfect matching of the sensors. This method is based on the idea that the difference signal between the two sensors behaves very differently when the instrument is rotated or moved in uniform versus non-uniform fields. Due to the misfit of calibration parameters, the difference signal observed during rotation in the uniform field is similar to the total signal, the sum of the signals of both sensors. Zero change of the difference and total signals is expected if the instrument moves in the uniform field along a straight line. In contrast, the same move in a non-uniform field produces a response in each sensor. If one measures dB/dx and moves along the x direction, the sensors' signals are shifted in time with a lag proportional to the distance between the sensors and the speed of motion; the difference signal therefore looks like the derivative of the total signal when moving in a non-uniform field. Thus, using a quite simple electronic circuit, it is possible to detect the lag between the total and difference signals and to trigger an alarm when the instrument passes near a magnetized object. The proposed method was successfully applied in two instruments: a low-power search-coil magnetometer for a vehicle detection system and a low-noise flux-gate magnetometer for a magnetocardiograph. The author believes that this approach could also be useful for fast inspection of an area during engineering, archaeology and UXO surveys.
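The lag detection between the two sensor signals can be sketched as a simple cross-correlation peak search: in a non-uniform field the second sensor sees the anomaly with a delay set by the sensor separation and the speed of motion, so a non-zero correlation peak flags an anomaly. The signals below are synthetic and the function names are illustrative, not the instrument's actual circuit:

```python
def best_lag(a, b, max_lag):
    """Lag of b relative to a maximizing the (unnormalized) correlation."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Sensor 2 sees the same anomaly pulse as sensor 1, shifted by 3 samples
# (proportional to sensor separation / speed of motion):
pulse = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0]
s1 = pulse
s2 = [0] * 3 + pulse[:-3]
lag = best_lag(s1, s2, max_lag=5)
anomaly_detected = lag != 0
```

In a uniform field both sensors see (calibration misfit aside) the same signal at the same instant, the best lag stays at zero, and no alarm is raised.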
Optimizing Sampling Design to Deal with Mist-Net Avoidance in Amazonian Birds and Bats
Marques, João Tiago; Ramos Pereira, Maria J.; Marques, Tiago A.; Santos, Carlos David; Santana, Joana; Beja, Pedro; Palmeirim, Jorge M.
2013-01-01
Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance or net shyness can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day it was no longer advantageous to move the nets frequently. In bird surveys that could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey species present then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas. PMID:24058579
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve the robustness of these methods against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention to conceal tampering and reduce tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
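The block-matching core of copy-move detection can be sketched as follows. The feature here is just the block mean, a toy stand-in for the paper's color moments and descriptors, and the image is a small array with one copied patch; everything is illustrative, not the authors' pipeline:

```python
def block_feature(img, y, x, size):
    """Block mean: a toy stand-in for the color-moment features."""
    vals = [img[y + dy][x + dx] for dy in range(size) for dx in range(size)]
    return sum(vals) / len(vals)

def find_copy_move(img, size=2, min_shift=2):
    """Flag pairs of non-flat blocks with identical features that lie
    at least `min_shift` apart (copy-move candidates)."""
    h, w = len(img), len(img[0])
    feats = {(y, x): block_feature(img, y, x, size)
             for y in range(h - size + 1) for x in range(w - size + 1)}
    positions = sorted(feats)
    pairs = []
    for i, p1 in enumerate(positions):
        if feats[p1] == 0:            # skip flat background blocks
            continue
        for p2 in positions[i + 1:]:
            shift = abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])
            if shift >= min_shift and feats[p1] == feats[p2]:
                pairs.append((p1, p2))
    return pairs

# Toy image: a distinctive 2x2 patch pasted at (1, 1) and again at (3, 5).
img = [[0] * 9 for _ in range(6)]
for dy in range(2):
    for dx in range(2):
        img[1 + dy][1 + dx] = [[1, 2], [3, 4]][dy][dx]
        img[3 + dy][5 + dx] = [[1, 2], [3, 4]][dy][dx]
pairs = find_copy_move(img)
```

The pair ((1, 1), (3, 5)) is flagged because the two pasted blocks share an identical feature at a large spatial offset; the paper's clustering step serves the same role as the brute-force inner loop here, but at practical cost.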
Machine-learning-based real-bogus system for the HSC-SSP moving object detection pipeline
NASA Astrophysics Data System (ADS)
Lin, Hsing-Wen; Chen, Ying-Tung; Wang, Jen-Hung; Wang, Shiang-Yu; Yoshida, Fumi; Ip, Wing-Huen; Miyazaki, Satoshi; Terai, Tsuyoshi
2018-01-01
Machine-learning techniques are widely applied in many modern optical sky surveys, e.g., Pan-STARRS1, PTF/iPTF, and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine-learning-based real-bogus system to reject false detections in the Subaru/Hyper-Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog. Therefore, the HSC-SSP moving object detection pipeline can operate more effectively due to the reduction of false positives. To train the real-bogus system, we use stationary sources as the real training set and "flagged" data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated from the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) ˜96% with a false positive rate (fpr) ˜1% or tpr ˜99% at fpr ˜5%. Therefore, we conclude that stationary sources are decent real training samples, and using photometry measurements and shape moments can reject false positives effectively.
Detection of a faint fast-moving near-Earth asteroid using the synthetic tracking technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Chengxing; Shao, Michael; Nemati, Bijan
We report a detection of a faint near-Earth asteroid (NEA) using our synthetic tracking technique and the CHIMERA instrument on the Palomar 200 inch telescope. With an apparent magnitude of 23 (H = 29, assuming detection at 20 lunar distances), the asteroid was moving at 6.32° day⁻¹ and was detected at a signal-to-noise ratio (S/N) of 15 using 30 s of data taken at a 16.7 Hz frame rate. The detection was confirmed by a second observation 77 minutes later at the same S/N. Because of its high proper motion, the NEA moved 7 arcsec over the 30 s of observation. Synthetic tracking avoided image degradation due to trailing loss that affects conventional techniques relying on 30 s exposures; the trailing loss would have degraded the surface brightness of the NEA image on the CCD down to an approximate magnitude of 25, making the object undetectable. This detection was a result of our 12 hr blind search conducted on the Palomar 200 inch telescope over two nights, scanning twice over six (5.3° × 0.046°) fields. Detecting only one asteroid is consistent with Harris's estimates for the distribution of the asteroid population, which was used to predict a detection of 1.2 NEAs in the H-magnitude range 28-31 for the two nights. The experimental design, data analysis methods, and algorithms are presented. We also demonstrate milliarcsecond-level astrometry using observations of two known bright asteroids on the same system with synthetic tracking. We conclude by discussing strategies for scheduling observations to detect and characterize small and fast-moving NEAs using the new technique.
An integrated framework for detecting suspicious behaviors in video surveillance
NASA Astrophysics Data System (ADS)
Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi
2014-03-01
In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports and shopping malls. In particular, people loitering in suspicion, unattended objects left behind and suspicious objects exchanged between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene and events, analyzing human movements, recognizing controllable objects, and observing the effect of the human movement on those objects. In the proposed framework, a multiple-background modeling technique, a high-level motion feature extraction method and embedded Markov chain models are integrated for detecting suspicious behaviors in real-time video surveillance systems. Specifically, the proposed framework employs a probability-based multiple-background modeling technique to detect moving objects. Then the velocity and distance measures are computed as the high-level motion features of interest. By using an integration of the computed features and the first-passage-time probabilities of the embedded Markov chain, the suspicious behaviors in video surveillance are analyzed for detecting loitering persons, objects left behind and human interactions such as fighting. The proposed framework has been tested using standard public datasets and our own video surveillance scenarios.
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of intelligent vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two subsystems: a vehicle detection and recognition subsystem and a traffic sign detection and recognition subsystem. Both subsystems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. To detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of a single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrate a robust and accurate system for real-time object detection and recognition over thousands of image frames.
Kinematic Age Estimates for Four Compact Symmetric Objects from the Pearson-Readhead Survey
NASA Astrophysics Data System (ADS)
Taylor, G. B.; Marr, J. M.; Pearson, T. J.; Readhead, A. C. S.
2000-09-01
Based on multiepoch observations at 15 and 43 GHz with the Very Long Baseline Array (VLBA), we detect significant angular expansions between the two hot spots of four compact symmetric objects (CSOs). From these relative motions we derive kinematic ages of between 300 and 1200 yr for the radio emission. These ages lend support to the idea that CSOs are produced in a recent phase of activity. These observations also allow us to study the evolution of the hot spots dynamically in individual sources. In all four sources the hot spots are separating along the source axis, but in 1031+567 the tip of the hot spot appears to be moving almost orthogonally to the source axis. Jet components, seen in three of the four sources observed, are found to be moving relativistically outward from the central engines toward the more slowly moving hot spots.
Image registration of naval IR images
NASA Astrophysics Data System (ADS)
Rodland, Arne J.
1996-06-01
In a real world application an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of it, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is then searched for points whose movement diverges from the estimated stabilization error; these points are assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points rather than on the complete image, so the algorithm is very fast and well suited for real time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output of the algorithm could be compared with the artificially added stabilization errors.
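The matching step above reduces to a robust estimate of a single global shift from the point list, with diverging points flagged as lying on moving objects. A minimal sketch under the assumption of pure translation (the actual algorithm tracks high-contrast contour points; the names and threshold here are ours):

```python
import numpy as np

def estimate_stabilization_error(pts_prev, pts_curr, outlier_thresh=2.0):
    """Estimate the unknown global image shift from matched points.
    Points whose displacement deviates from the shift by more than the
    threshold are assumed to lie on moving objects."""
    disp = np.asarray(pts_curr) - np.asarray(pts_prev)  # per-point motion
    shift = np.median(disp, axis=0)                     # robust global shift
    residual = np.linalg.norm(disp - shift, axis=1)
    moving = residual > outlier_thresh                  # diverging points
    return shift, moving
```

The median keeps the estimate stable as long as moving objects cover only a minority of the tracked points, which matches the paper's assumption about the scene.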
A serendipitous all sky survey for bright objects in the outer solar system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, M. E.; Drake, A. J.; Djorgovski, S. G.
2015-02-01
We use seven years' worth of observations from the Catalina Sky Survey and the Siding Spring Survey, covering most of the northern and southern hemispheres at galactic latitudes higher than 20°, to search for serendipitously imaged moving objects in the outer solar system. These slowly moving objects would appear as stationary transients in these fast-cadence asteroid surveys, so we develop methods to discover objects in the outer solar system using individual observations spaced by months, rather than by hours, as is typically done. While we independently discover eight known bright objects in the outer solar system, the faintest having V=19.8±0.1, no new objects are discovered. We find that the survey is nearly 100% efficient at detecting objects beyond 25 AU for V≲19.1 (V≲18.6 in the southern hemisphere) and that the probability that one or more outer solar system objects of this brightness remain to be discovered in the unsurveyed regions of the galactic plane is approximately 32%.
Discovery of the candidate Kuiper belt object 1992 QB1
NASA Astrophysics Data System (ADS)
Jewitt, D.; Luu, J.
1993-04-01
The discovery of a new faint object in the outer solar system, 1992 QB1, moving beyond the orbit of Neptune is reported. It is suggested that 1992 QB1 may represent the first detection of a member of the Kuiper belt (Edgeworth, 1949; Kuiper, 1951), the hypothesized population of objects beyond Neptune and a possible source of the short-period comets, as suggested by Whipple (1964), Fernandez (1980), and Duncan et al. (1988).
Multisensor data fusion for IED threat detection
NASA Astrophysics Data System (ADS)
Mees, Wim; Heremans, Roel
2012-10-01
In this paper we present the multi-sensor registration and fusion algorithms developed for a force protection research project aimed at detecting threats against military patrol vehicles. The fusion is performed at object level, using a hierarchical evidence aggregation approach. The first level applies expert domain knowledge about the features used to characterize the detected threats, implemented in the form of a fuzzy expert system. The next level fuses intra-sensor and inter-sensor information using an ordered weighted averaging operator. Object-level fusion between candidate threats that are detected asynchronously on a moving vehicle, by sensors with different imaging geometries, requires an accurate sensor-to-world coordinate transformation. This image registration is also discussed in the paper.
Vibration Measurement Method of a String in Transversal Motion by Using a PSD.
Yang, Che-Hua; Wu, Tai-Chieh
2017-07-17
A position sensitive detector (PSD) is frequently used for the measurement of a one-dimensional position along a line or a two-dimensional position on a plane, but most often for static or quasi-static positions. With its quick response down to the microsecond realm, however, a PSD is also capable of detecting the dynamic positions of moving objects. In this paper, theoretical modeling and experiments are conducted to explore the frequency characteristics of a vibrating string while it moves transversely across a one-dimensional PSD. The theoretical predictions are supported by the experiments. When the string vibrates at its natural frequency while moving transversely, the PSD detects two frequencies near this natural frequency, one higher than the natural frequency and the other lower. The deviations of these two frequencies from the string's natural frequency increase as the speed of motion increases.
Binaries among low-mass stars in nearby young moving groups
NASA Astrophysics Data System (ADS)
Janson, Markus; Durkan, Stephen; Hippler, Stefan; Dai, Xiaolin; Brandner, Wolfgang; Schlieder, Joshua; Bonnefoy, Mickaël; Henning, Thomas
2017-03-01
The solar galactic neighborhood contains a number of young co-moving associations of stars (known as young moving groups) with ages of 10-150 Myr, which are prime targets for a range of scientific studies, including direct imaging planet searches. The late-type stellar populations of such groups still remain in their pre-main sequence phase, and are thus well suited for purposes such as isochronal dating. Close binaries are particularly useful in this regard since they allow for a model-independent dynamical mass determination. Here we present a dedicated effort to identify new close binaries in nearby young moving groups, through high-resolution imaging with the AstraLux Sur Lucky Imaging camera. We surveyed 181 targets, resulting in the detection of 61 companions or candidates, of which 38 are new discoveries. An interesting example of such a case is 2MASS J00302572-6236015 AB, which is a high-probability member of the Tucana-Horologium moving group, and has an estimated orbital period of less than 10 yr. Among the previously known objects is a serendipitous detection of the deuterium burning boundary circumbinary companion 2MASS J01033563-5515561 (AB)b in the z' band, thereby extending the spectral coverage for this object down to near-visible wavelengths. Based on observations collected at the European Southern Observatory, Chile (Programs 096.C-0243 and 097.C-0135). Tables 1-3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/599/A70
An Evaluation of Pixel-Based Methods for the Detection of Floating Objects on the Sea Surface
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Barnich, Olivier; Lapierre, Fabian; Van Droogenbroeck, Marc; Philips, Wilfried; Acheroy, Marc
2010-12-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue operations, and perimeter or harbour defense. Detection in the infrared (IR) is challenging because a rough sea appears as a dynamic background of moving objects similar in size, shape, and temperature to a floating mine. In this paper we apply a selection of background subtraction algorithms to the problem, and we show that recent algorithms such as ViBe and behaviour subtraction, which take into account spatial and temporal correlations within the dynamic scene, significantly outperform the more conventional parametric techniques, while making few prior assumptions about the physical properties of the scene.
NASA Astrophysics Data System (ADS)
Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.
2017-08-01
Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. 
To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
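The least-mean-squares step can be sketched as stacking, for each tracked pixel, the standard small-motion image Jacobian (built from the stereo depth) and solving the overdetermined linear system flow = J · twist for the six motion parameters. This is our reading of the subsystem under a normalized pinhole model, not the flight code; all names below are assumptions:

```python
import numpy as np

def image_jacobian(x, y, Z, f=1.0):
    """Interaction matrix relating the camera twist (vx, vy, vz, wx, wy, wz)
    to the optical flow (xdot, ydot) at image point (x, y) with depth Z,
    under the standard pinhole small-motion model."""
    return np.array([
        [-f / Z, 0, x / Z, x * y / f, -(f + x * x / f), y],
        [0, -f / Z, y / Z, f + y * y / f, -x * y / f, -x],
    ])

def estimate_camera_motion(points, flows, f=1.0):
    """Stack the per-pixel Jacobians (depth from stereoscopy, flow from
    optical flow) and solve for the six-DOF camera motion by least squares."""
    J = np.vstack([image_jacobian(x, y, Z, f) for x, y, Z in points])
    b = np.hstack(flows)
    xi, *_ = np.linalg.lstsq(J, b, rcond=None)
    return xi  # (vx, vy, vz, wx, wy, wz)
```

In the real system the pixels voting here would first be screened so that external moving objects (the least-squares outliers) do not bias the odometry solution.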
Recurrent neural network based virtual detection line
NASA Astrophysics Data System (ADS)
Kadikis, Roberts
2018-04-01
The paper proposes an efficient method for the detection of moving objects in video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. These pixels are processed by a recurrent neural network. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. The same network can also be trained for various detection tasks, which is demonstrated by tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method achieves accuracy similar to that of alternative efficient methods while providing greater adaptability and usability for different tasks.
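The efficiency claim follows from the data layout: per frame, only the N pixels sampled along the virtual line enter the network, so the cost per frame is independent of the image size. A structural sketch (the tiny cell below uses random, untrained weights purely to show the data flow; the paper's actual architecture is not reproduced here):

```python
import numpy as np

def line_pixels(frame, p0, p1, n):
    """Sample n pixels along the virtual detection line from p0 to p1
    (row, col); only these pixels are ever processed."""
    ys = np.linspace(p0[0], p1[0], n).round().astype(int)
    xs = np.linspace(p0[1], p1[1], n).round().astype(int)
    return frame[ys, xs]

class TinyRNN:
    """Minimal recurrent cell standing in for the paper's network; the
    weights are random placeholders, not a trained model."""
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (n_hidden, n_in))
        self.U = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.h = np.zeros(n_hidden)

    def step(self, x):
        # hidden state carries the temporal context across frames,
        # which is what lets a one-line observation detect crossings
        self.h = np.tanh(self.W @ x + self.U @ self.h)
        return self.h
```

A trained readout on the hidden state would then emit a count increment whenever an object crosses the line.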
Shallow Water Imaging Sonar System for Environmental Surveying Final Report CRADA No. TC-1130-95
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, L. C.; Rosenbaum, H.
The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use a high frequency, wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. Reference 1 (also attached) summarizes the statement of work and the scope of collaboration.
2003-05-02
KENNEDY SPACE CENTER, FLA. - Workers in NASA Spacecraft Hangar AE prepare to begin further processing of the Space Infrared Telescope Facility (SIRTF), which has been returned to the hangar from the launch pad. Sections of the transportation canister used in the move are in the foreground. SIRTF will remain in the clean room until it returns to the pad in early August. One of NASA's largest infrared telescopes to be launched, SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space.
Software simulations of the detection of rapidly moving asteroids by a charge-coupled device
NASA Astrophysics Data System (ADS)
McMillan, R. S.; Stoll, C. P.
1982-10-01
A rendezvous of an unmanned probe with an earth-approaching asteroid has been given a high priority in the planning of interplanetary missions for the 1990s. Even without a space mission, much could be learned about the history of asteroids and comet nuclei if more information were available concerning asteroids with orbits which cross or approach the orbit of the earth. It is estimated that the total number of earth-crossers accessible to ground-based survey telescopes should be approximately 1000. However, because of the small size and rapid angular motion expected of many of these objects, an average of only one object is discovered per year. Attention is given to the development of the software necessary to distinguish such rapidly moving asteroids from stars and noise in continuously scanned CCD exposures of the night sky. Model and input parameters are considered along with detector sensitivity, aspects of minimum detectable displacement, and the point-spread function of the CCD.
NASA Astrophysics Data System (ADS)
Hosoki, Ai; Nishiyama, Michiko; Choi, Yongwoon; Watanabe, Kazuhiro
2011-05-01
In this paper, we propose a method for discriminating between a moving human and a moving object by means of a hetero-core fiber smart mat sensor, which registers changes in optical loss over time. In addition to advantages such as flexibility, thinness and resistance to electromagnetic interference, common to fiber optic sensors, a hetero-core fiber optic sensor is sensitive to bending of the sensing portion and independent of temperature fluctuations. The hetero-core fiber thin mat sensor can therefore use fewer sensing portions than conventional floor pressure sensors and can cover a wide area spanning the length of a stride. Experimental results for human walking tests showed that the mat sensors worked reproducibly in real time when foot placement within the mat was constrained. Focusing on the number of temporal peaks in the optical loss, human walking and the movement of a wheeled platform induced peaks in the ranges of 1-3 and 5-7, respectively, in trials with 10 persons (9 male and 1 female). As a result, we conclude that the hetero-core fiber mat sensor is capable of discriminating between a moving human and an object such as a wheeled platform based on the number of peaks in the temporal optical loss.
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in heightened security scenarios in recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. A larger gathering of people than the number of personnel authorized to visit a security-controlled area must be effectively detected, promptly alarmed and immediately monitored. A framework involving a novel combination of existing techniques allows immediate detection of an undesirable crowd in a region under observation. Frame differencing provides clear visibility of moving objects by highlighting those objects in each frame acquired by a real-time camera. Training a correlation pattern recognition based filter on desired shapes, such as elliptical representations of human faces (variants of an omega shape), yields correct detections. The inherent ability of correlation pattern recognition filters to cater for angular rotations of the target object supports the decision as to whether the number of persons in the monitored area exceeds the allowed figure.
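A correlation pattern recognition filter of this kind locates a trained shape through a frequency-domain correlation whose peak marks the target position. The sketch below uses a plain average-of-spectra matched filter as a stand-in for the full MACH formulation, which additionally normalizes by average similarity and noise spectral terms:

```python
import numpy as np

def make_filter(train_imgs):
    """Average the training spectra and conjugate: a simplified
    correlation filter (stand-in for MACH)."""
    F = np.fft.fft2(np.asarray(train_imgs, dtype=float), axes=(1, 2))
    return F.mean(axis=0).conj()

def correlate(img, H):
    """Circular cross-correlation via FFT; a sharp peak indicates the
    trained shape, and its location gives the detected position."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def detect(img, H):
    plane = correlate(img, H)
    return np.unravel_index(np.argmax(plane), plane.shape)
```

Counting correlation peaks above a threshold in the frame-differenced image would then give the number of omega shapes, i.e. persons, in the monitored area.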
Real-Time Optical Surveillance of LEO/MEO with Small Telescopes
NASA Astrophysics Data System (ADS)
Zimmer, P.; McGraw, J.; Ackermann, M.
J.T. McGraw and Associates, LLC operates two proof-of-concept wide-field imaging systems to test novel techniques for uncued surveillance of LEO/MEO/GEO and, in collaboration with the University of New Mexico (UNM), uses a third small telescope for rapidly queued same-orbit follow-up observations. Using our GPU-accelerated detection scheme, the proof-of-concept systems operating at sites near and within Albuquerque, NM, have detected objects fainter than V=13 at greater than 6 sigma significance. This detection approximately corresponds to a 16 cm object with albedo of 0.12 at 1000 km altitude. Dozens of objects are measured during each operational twilight period, many of which have no corresponding catalog object. The two proof-of-concept systems, separated by ~30km, work together by taking simultaneous images of the same orbital volume to constrain the orbits of detected objects using parallax measurements. These detections are followed-up by imaging photometric observations taken at UNM to confirm and further constrain the initial orbit determination and independently assess the objects and verify the quality of the derived orbits. This work continues to demonstrate that scalable optical systems designed for real-time detection of fast moving objects, which can be then handed off to other instruments capable of tracking and characterizing them, can provide valuable real-time surveillance data at LEO and beyond, which substantively informs the SSA process.
A mobile agent-based moving objects indexing algorithm in location based service
NASA Astrophysics Data System (ADS)
Fang, Zhixiang; Li, Qingquan; Xu, Hong
2006-10-01
This paper extends the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving object indexing algorithm is proposed to process indexing requests efficiently and to adapt to the limitations of the location based service environment. The prominent feature of this structure is that it views a moving object's behavior as the span of a mobile agent: a unique mapping between the geographical position of a moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping provides a significant clue for the mobile agent-based index when tracking moving objects.
Security Applications Of Computer Motion Detection
NASA Astrophysics Data System (ADS)
Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry
1987-05-01
An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed; the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines whether the moving object is indeed representative of an illegal border crossing.
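The first stage, simple frame differencing, can be sketched in a few lines: threshold the absolute inter-frame difference to obtain a binary motion mask. The threshold value here is illustrative only; the system above also evaluates maximum-likelihood, mean, and median tests on the same differences:

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Binary motion mask: pixels whose absolute inter-frame change
    exceeds the threshold are marked as moving."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Connected regions of the mask then become candidate objects for the tracking and identification stages.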
Small Arrays for Seismic Intruder Detections: A Simulation Based Experiment
NASA Astrophysics Data System (ADS)
Pitarka, A.
2014-12-01
Seismic sensors such as geophones and fiber optics have been increasingly recognized as promising technologies for intelligence surveillance, including intruder detection and perimeter defense systems. Geophone arrays have the capability to provide cost effective intruder detection in protecting assets with large perimeters. A seismic intruder detection system uses one or multiple arrays of geophones designed to record seismic signals from footsteps and ground vehicles. Using a series of real-time signal processing algorithms, the system detects, classifies and monitors the intruder's movement. We have carried out numerical experiments to demonstrate the capability of a seismic array to detect moving targets that generate seismic signals. The seismic source is modeled as a vertical force acting on the ground that generates continuous impulsive seismic signals with different predominant frequencies. Frequency-wavenumber analysis of the synthetic array data was used to demonstrate the array's capability at accurately determining an intruder's movement direction. The performance of the array was also analyzed in detecting two or more objects moving at the same time. One of the drawbacks of using a single array system is its inefficiency at detecting seismic signals deflected by large underground objects. We show simulation results of the effect of an underground concrete block in shielding the seismic signal coming from an intruder. Based on the simulations, we found that multiple small arrays can greatly improve the system's detection capability in the presence of underground structures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)
1991-01-01
A method and apparatus for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer goes out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; its duration is roughly equal to the formation time of the grating. If the optical path changes more slowly than the formation time, the change becomes unobservable, because the index grating can follow it. Thus, objects traveling at speeds for which the path changes more slowly than the formation time are not observable and do not clutter the output image view.
Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong
2017-06-02
Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information provides key constraints for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions, and the common side of adjacent triangular regions is regarded as a regional boundary. The scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. The trajectory is then formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of the scheme is less than 1.5 m, and that error does not accumulate across regions.
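The matching step rests on dynamic time warping, which aligns an observed RSSI sequence to a stored fingerprint sequence even when the object moves at a varying speed. A minimal unconstrained DTW sketch (the TTDH scheme adds heuristic boundary and region constraints on top of this):

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between an observed RSSI sequence
    and a stored fingerprint sequence; warping absorbs speed variations
    of the moving object."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(seq_a[i - 1] - seq_b[j - 1])
            # best of insertion, deletion, and match moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The fingerprint with the smallest warped distance (subject to the region constraints) determines the next position on the trajectory.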
Location detection and tracking of moving targets by a 2D IR-UWB radar system.
Nguyen, Van-Han; Pyun, Jae-Young
2015-03-19
In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations indoors. In recent years, ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method based on a Kalman filter (KF) is proposed. In the target-detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response of the observation region, is applied for advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using an actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
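The clutter-reduction idea, a Kalman filter tracking the nearly static clutter so that the residual highlights moving targets, can be sketched per range bin. The noise parameters, scan shapes, and target echo below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kf_clutter_reduction(scans, q=1e-4, r=1e-1):
    """Per-range-bin scalar Kalman filter estimating the static clutter.

    `scans` is a 2D array (n_scans, n_bins) of received IR-UWB waveforms.
    Returns the clutter-suppressed scans (innovation = measurement minus
    clutter estimate), in which moving-target echoes stand out."""
    n_scans, n_bins = scans.shape
    x = scans[0].astype(float).copy()   # clutter state estimate per bin
    p = np.ones(n_bins)                 # estimate variance per bin
    out = np.zeros_like(scans, dtype=float)
    for k in range(n_scans):
        p = p + q                       # predict (clutter assumed static)
        gain = p / (p + r)              # Kalman gain
        innov = scans[k] - x
        x = x + gain * innov            # update clutter estimate
        p = (1.0 - gain) * p
        out[k] = innov                  # residual = target echo + noise
    return out

# Static clutter plus a target echo appearing in bin 5 of the later scans
rng = np.random.default_rng(0)
clutter = np.linspace(1.0, 0.1, 10)
scans = clutter + 0.01 * rng.standard_normal((50, 10))
scans[40:, 5] += 0.5                    # moving target enters the scene
residual = kf_clutter_reduction(scans)
```

Because the process noise `q` is tiny relative to `r`, the filter adapts slowly, so a suddenly appearing echo stays in the residual for many scans instead of being absorbed into the clutter model.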
Magnetogate: using an iPhone magnetometer for measuring kinematic variables
NASA Astrophysics Data System (ADS)
Kağan Temiz, Burak; Yavuz, Ahmet
2016-01-01
This paper presents a method to measure the movement of an object past specific locations on a straight line using an iPhone’s magnetometer. In this method, called ‘magnetogate’, an iPhone is placed on a moving object (in this case a toy car) and small neodymium magnets are arranged at equal intervals along one side of a straight line. The iPhone’s magnetometer sensor is switched on and the car then starts moving; the magnetometer is stimulated throughout the movement along the line. The ‘Sensor Kinetics’ application on the iPhone records the magnetic stimulation and produces a graph of the changing magnetic field near the iPhone. At the end of the motion, the magnetometer data are interpreted and the peaks on the graph are detected. Thus, position-time changes can be analysed and conclusions about the motion of the object can be drawn. The position, velocity and acceleration of the object can easily be measured with this method.
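Assuming the peak times have already been extracted from the magnetometer graph, recovering the kinematic variables reduces to finite differences over the known magnet spacing. A minimal sketch (the peak times and spacing are invented):

```python
def kinematics_from_peaks(peak_times, magnet_spacing):
    """Recover position, velocity and acceleration samples from the times at
    which the magnetometer trace peaks (one peak per magnet passed).

    `magnet_spacing` is the known distance between neighbouring magnets (m)."""
    positions = [i * magnet_spacing for i in range(len(peak_times))]
    velocities = []
    for i in range(1, len(peak_times)):
        dt = peak_times[i] - peak_times[i - 1]
        velocities.append(magnet_spacing / dt)       # mean speed per interval
    accelerations = []
    for i in range(1, len(velocities)):
        dt = (peak_times[i + 1] - peak_times[i - 1]) / 2.0
        accelerations.append((velocities[i] - velocities[i - 1]) / dt)
    return positions, velocities, accelerations

# A toy car passing magnets placed 0.10 m apart at a constant 0.5 m/s
times = [0.0, 0.2, 0.4, 0.6, 0.8]
pos, vel, acc = kinematics_from_peaks(times, 0.10)
```

For a uniformly spaced magnet line, unequal gaps between successive peaks directly reveal acceleration or deceleration of the car.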
Primary results from the Pan-STARRS-1 Outer Solar System Key Project
NASA Astrophysics Data System (ADS)
Holman, Matthew J.; Chen, Ying-Tung; Lackner, Michael; Payne, Matthew John; Lin, Hsing-Wen; Fraser, Wesley Cristopher; Lacerda, Pedro; Pan-STARRS 1 Science Consortium
2016-10-01
We have completed a search for slow moving bodies in the data obtained by the Pan-STARRS-1 (PS1) Science Consortium from 2010 to 2014. The data set covers the full sky north of -30 degrees declination, in the PS1 g, r, i, z, y, and w (g+r+i) filters. Our novel distance-based search is effective at detecting and linking very slow moving objects with sparsely sampled observations, even if observations are widely separated in RA, Dec and time, which is relevant to the future LSST solar system searches. In particular, our search is sensitive to objects at heliocentric distances of 25-2000 AU with magnitudes brighter than approximately r=22.5, without limits on the inclination of the object. We recover hundreds of known TNOs and Centaurs and discover hundreds of new objects, measuring phase and color information for many of them. Other highlights include the discovery of a second retrograde TNO, a number of Neptune Trojans, and large numbers of distant resonant TNOs.
Specialization of Perceptual Processes.
1994-09-01
population rose and fell, furniture was rearranged, a small mountain range was built in part of the lab (really), carpets were shampooed, and office lighting...common task is the tracking of moving objects. Coombs [22] implemented a system for fixating and tracking objects using a stereo eye/head system...be a person (person?). Finally, a motion unit is used to detect foot gestures. A pair of nod-of-the-head detectors were implemented and tested, but
Wanetick, S.
1962-03-01
A transducer is described that measures the change in velocity of a moving object. The transducer includes a radioactive source having a collimated beam of radioactive particles, a shield which can block the passage of the radioactive beam, and a scintillation detector to measure the number of radioactive particles in the beam which are not blocked by the shield. The shield is operatively placed across the radioactive beam so that any motion normal to the beam will cause the shield to move in the opposite direction, thereby allowing more radioactive particles to reach the detector. The number of particles detected indicates the acceleration. (AEC)
Mitić, Jelena; Anhut, Tiemo; Meier, Matthias; Ducros, Mathieu; Serov, Alexander; Lasser, Theo
2003-05-01
Optical sectioning in wide-field microscopy is achieved by illumination of the object with a continuously moving single-spatial-frequency pattern and detecting the image with a smart pixel detector array. This detector performs an on-chip electronic signal processing that extracts the optically sectioned image. The optically sectioned image is directly observed in real time without any additional postprocessing.
Camouflaging moving objects: crypsis and masquerade.
Hall, Joanna R; Baddeley, Roland; Scott-Samuel, Nicholas E; Shohet, Adam J; Cuthill, Innes C
2017-01-01
Motion is generally assumed to "break" camouflage. However, although camouflage cannot conceal a group of moving animals, it may impair a predator's ability to single one out for attack, even if that discrimination is not based on a color difference. Here, we use a computer-based task in which humans had to detect the odd one out among moving objects, with "oddity" based on shape. All objects were either patterned or plain, and either matched the background or not. We show that there are advantages of matching both group-mates and the background. However, when patterned objects are on a plain background (i.e., no background matching), the advantage of being among similarly patterned distractors is only realized when the group size is larger (10 compared to 5). In a second experiment, we present a paradigm for testing how coloration interferes with target-distractor discrimination, based on an adaptive staircase procedure for establishing the threshold. We show that when the predator only has a short time for decision-making, displaying a similar pattern to the distractors and the background affords protection even when the difference in shape between target and distractors is large. We conclude that, even though motion breaks camouflage, being camouflaged could help group-living animals reduce the risk of being singled out for attack by predators.
Object Detection Applied to Indoor Environments for Mobile Robot Navigation.
Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón
2016-07-28
To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system to detect objects considering usual human environments, able to work on a real mobile robot, is developed. In the proposed system, the classification method used is Support Vector Machine (SVM) and as input to this system, RGB and depth images are used. Different segmentation techniques have been applied to each kind of object. Similarly, two alternatives to extract features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and that the environment has not been changed, that is to say, the environment has not been altered to perform the tests.
HUBBLE DETECTION OF COMET NUCLEUS AT FRINGE OF SOLAR SYSTEM
NASA Technical Reports Server (NTRS)
2002-01-01
This is sample data from NASA's Hubble Space Telescope that illustrates the detection of comets in the Kuiper Belt, a region of space beyond the orbit of the planet Neptune. This pair of images, taken with the Wide Field Planetary Camera 2 (WFPC2), shows one of the candidate Kuiper Belt objects found with Hubble. Believed to be an icy comet nucleus several miles across, the object is so distant and faint that Hubble's search is the equivalent of finding the proverbial needle in a haystack. Each photo is a 5-hour exposure of a piece of sky carefully selected to be nearly devoid of background stars and galaxies that could mask the elusive comet. The left image, taken on August 22, 1994, shows the candidate comet object (inside circle) embedded in the background. The right picture, taken of the same region one hour and forty-five minutes later, shows that the object has apparently moved in the direction and at the rate of motion predicted for a Kuiper Belt member. The dotted line on the images is a possible orbit that this Kuiper Belt comet is following. A star (lower right corner) and a galaxy (upper right corner) provide a static background reference. In addition, other objects in the picture have not moved during this time, indicating they are outside our solar system. Through this search technique astronomers have identified 29 candidate comet nuclei belonging to an estimated population of 200 million such bodies orbiting at the edge of our solar system. The Kuiper Belt was theorized 40 years ago, and its larger members were detected several years ago. However, Hubble has found the underlying population of normal comet-sized bodies. Credit: A. Cochran (University of Texas) and NASA
Fors, Octavi; Núñez, Jorge; Otazu, Xavier; Prades, Albert; Cardinal, Robert D.
2010-01-01
In this paper we show how the techniques of image deconvolution can increase the ability of image sensors as, for example, CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to double the quantum efficiency of the used image sensor or to increase the effective telescope aperture by more than 30% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensors. PMID:22294896
Moving Object Detection Using a Parallax Shift Vector Algorithm
NASA Astrophysics Data System (ADS)
Gural, Peter S.; Otto, Paul R.; Tedesco, Edward F.
2018-07-01
There are various algorithms currently in use to detect asteroids from ground-based observatories, but they are generally restricted to linear or mildly curved movement of the target object across the field of view. Space-based sensors in high-inclination, low Earth orbits can induce significant parallax in a collected sequence of images, especially for objects at the typical distances of asteroids in the inner solar system. This results in a highly nonlinear motion pattern of the asteroid across the sensor, which requires a more sophisticated search pattern for detection processing. Both the classical pattern matching used in ground-based asteroid searches and the more sensitive matched filtering and synthetic tracking techniques can be adapted to account for highly complex parallax motion. A new shift vector generation methodology is discussed along with its impacts on commonly used detection algorithms, processing load, and responsiveness of asteroid track reporting. The matched filter, template generator, and pattern matcher source code for the software described herein are available via GitHub.
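The synthetic tracking idea the paper builds on, shifting frames along a hypothesized per-frame motion vector before co-adding so a faint mover adds coherently, can be sketched as below. This is a generic illustration, not the GitHub code the paper refers to; the frame sizes, noise level, and motion are invented.

```python
import numpy as np

def shift_and_stack(frames, shift_vector):
    """Co-add frames after undoing a hypothesized per-frame motion (dx, dy);
    a target that actually follows the hypothesis adds up coherently while
    the noise averages down."""
    stacked = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        dx = int(round(k * shift_vector[0]))
        dy = int(round(k * shift_vector[1]))
        stacked += np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
    return stacked / len(frames)

# A faint object drifting (1, 0) pixels/frame through noisy frames
rng = np.random.default_rng(1)
frames = []
for k in range(8):
    f = 0.3 * rng.standard_normal((16, 16))
    f[8, 4 + k] += 1.0                   # per-frame SNR is only ~3
    frames.append(f)
hyp = shift_and_stack(frames, (1, 0))    # correct shift hypothesis
miss = shift_and_stack(frames, (0, 0))   # wrong (static) hypothesis
```

The paper's contribution concerns generating the shift vectors themselves when parallax makes the motion nonlinear; in that case each frame needs its own precomputed (dx, dy) rather than the constant-rate hypothesis used here.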
Vibration Measurement Method of a String in Transversal Motion by Using a PSD
Yang, Che-Hua; Wu, Tai-Chieh
2017-01-01
A position sensitive detector (PSD) is frequently used for the measurement of a one-dimensional position along a line or a two-dimensional position on a plane, but is more often used for measuring static or quasi-static positions. Along with its quick response when measuring short time-spans in the micro-second realm, a PSD is also capable of detecting the dynamic positions of moving objects. In this paper, theoretical modeling and experiments are conducted to explore the frequency characteristics of a vibrating string while moving transversely across a one-dimensional PSD. The theoretical predictions are supported by the experiments. When the string vibrates at its natural frequency while moving transversely, the PSD will detect two frequencies near this natural frequency; one frequency is higher than the natural frequency and the other is lower. Deviations in these two frequencies, which differ from the string’s natural frequency, increase while the speed of motion increases. PMID:28714915
An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS
NASA Astrophysics Data System (ADS)
Lin, Chin-Teng; Yang, Chien-Ting; Shou, Yu-Wen; Shen, Tzu-Kuei
2010-12-01
We propose an efficient algorithm for removing the shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a complete structure for feature combination and analysis to orient and label moving shadows, so that the defined foreground objects can be extracted more easily from each snapshot of the original video files, which are acquired in real traffic situations. Moreover, we use a Gaussian Mixture Model (GMM) for background removal and detection of moving shadows in our tested images, and define two indices for characterizing non-shadowed regions: one indicates the characteristics of lines, while the other is based on the gray-scale information of the image, which helps us build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we apply it to a practical application of traffic flow detection in ITS (Intelligent Transportation Systems): vehicle counting. Our algorithm shows a fast processing speed, 13.84 ms/frame, and improves the accuracy rate of vehicle counting by 4%~10% on our three tested videos.
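The darkening-ratio idea, that a cast shadow reduces the background intensity by a roughly uniform, bounded factor while an actual vehicle pixel usually does not, can be illustrated with a minimal per-pixel test. The band limits below are placeholder assumptions, not the paper's fitted Gaussian-model factors.

```python
import numpy as np

def shadow_mask(frame, background, lo=0.4, hi=0.9):
    """Flag pixels whose intensity ratio to the background model falls in a
    plausible darkening band: likely cast shadow rather than vehicle.

    `lo` and `hi` bound the darkening ratio; true shadows darken the surface
    noticeably but never to near-black."""
    ratio = frame.astype(float) / np.maximum(background.astype(float), 1e-6)
    return (ratio > lo) & (ratio < hi)

background = np.full((4, 4), 100.0)
frame = background.copy()
frame[0, 0] = 60.0    # shadowed road pixel: ratio 0.6, inside the band
frame[1, 1] = 20.0    # dark vehicle pixel: ratio 0.2, darker than any shadow
mask = shadow_mask(frame, background)
```

In the paper this ratio test is one feature among several (line characteristics being another), combined after GMM background removal rather than used alone.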
Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection
NASA Astrophysics Data System (ADS)
Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf
2016-10-01
Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability to distinguish whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments: any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make a real-time threat assessment. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects at different viewing angles and distances.
Demonstration of Uncued Optical Surveillance of LEO
NASA Astrophysics Data System (ADS)
Zimmer, P.; Ackermann, M.; McGraw, J.
2014-09-01
J.T. McGraw and Associates, LLC, in collaboration with the University of New Mexico (UNM), has built and is operating two proof-of-concept wide-field imaging systems to test novel techniques for uncued surveillance of LEO. The imaging systems are built from off-the-shelf optics and detectors resulting in a 350mm aperture and a 6 square degree field of view. For streak detection, field of view is of critical importance because the maximum exposure time on the object is limited by its crossing time and measurements of apparent angular motion are better constrained with longer streaks. The current match of the detector to the optical system is optimized for detection of objects at altitudes above 450km, which for a circular orbit, corresponds to apparent motions of approximately 1 deg./sec. Using our GPU-accelerated detection scheme, the proof-of-concept systems have detected objects fainter than V=12.3, which approximately corresponds to a 24 cm object at 1000 km altitude at better than 6 sigma significance, from sites near and within Albuquerque, NM. This work demonstrates scalable optical systems designed for near real time detection of fast moving objects, which can be then handed off to other instruments capable of tracking and characterizing them. The two proof-of-concept systems, separated by ~30km, work together by taking simultaneous images of the same orbital volume to constrain the orbits of detected objects using parallax measurements. These detections are followed-up by photometric observations taken at UNM to independently assess the objects and the quality of the derived orbits. We believe this demonstrates the potential of small telescope arrays for detecting and cataloguing heretofore unknown LEO objects.
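The parallax ranging behind the two-site design follows from the small-angle relation range ≈ baseline / parallax angle. A quick sketch with invented numbers near the system's ~30 km baseline:

```python
import math

def range_from_parallax(baseline_m, parallax_deg):
    """Small-angle range estimate: two stations a known baseline apart see
    the same object displaced against the stars by the parallax angle."""
    return baseline_m / math.radians(parallax_deg)

# A 30 km baseline and a ~3.4 degree parallax place the object near 500 km
r = range_from_parallax(30_000.0, 3.44)
```

The relation also shows why a ~30 km baseline suits LEO: at 500 km the parallax is several degrees and trivially measurable, while for distant stars it vanishes, cleanly separating orbital objects from the celestial background.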
NASA Astrophysics Data System (ADS)
Manger, Daniel; Metzler, Jürgen
2014-03-01
Military Operations in Urban Terrain (MOUT) require the capability to perceive and to analyze the situation around a patrol in order to recognize potential threats. A permanent monitoring of the surrounding area is essential in order to appropriately react to the given situation, where one relevant task is the detection of objects that can pose a threat. Especially the robust detection of persons is important, as in MOUT scenarios threats usually arise from persons. This task can be supported by image processing systems. However, depending on the scenario, person detection in MOUT can be challenging, e.g. persons are often occluded in complex outdoor scenes and the person detection also suffers from low image resolution. Furthermore, there are several requirements on person detection systems for MOUT such as the detection of non-moving persons, as they can be a part of an ambush. Existing detectors therefore have to operate on single images with low thresholds for detection in order to not miss any person. This, in turn, leads to a comparatively high number of false positive detections which renders an automatic vision-based threat detection system ineffective. In this paper, a hybrid detection approach is presented. A combination of a discriminative and a generative model is examined. The objective is to increase the accuracy of existing detectors by integrating a separate hypotheses confirmation and rejection step which is built by a discriminative and generative model. This enables the overall detection system to make use of both the discriminative power and the capability to detect partly hidden objects with the models. The approach is evaluated on benchmark data sets generated from real-world image sequences captured during MOUT exercises. The extension shows a significant improvement of the false positive detection rate.
Modulation frequency as a cue for auditory speed perception.
Senna, Irene; Parise, Cesare V; Ernst, Marc O
2017-07-12
Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
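The cue itself, the AM-frequency of a rattling sound, can be extracted from a synthetic stimulus by squaring the waveform and reading off the dominant low-frequency line. A minimal sketch (the carrier, modulation depth, and rates are invented, not the stimuli from the study):

```python
import numpy as np

def am_frequency(signal, fs, max_mod_hz=100.0):
    """Estimate the amplitude-modulation rate of a rattling-like sound by
    locating the dominant low-frequency line in the squared waveform
    (squaring demodulates the envelope down to baseband)."""
    sq = signal.astype(float) ** 2
    sq -= sq.mean()                        # drop the DC component
    spec = np.abs(np.fft.rfft(sq))
    freqs = np.fft.rfftfreq(len(sq), 1.0 / fs)
    band = freqs <= max_mod_hz
    return freqs[band][np.argmax(spec[band])]

# A 1 kHz carrier modulated at 40 Hz; faster sliding -> higher AM rate
fs = 8000
t = np.arange(fs) / fs                     # one second of samples
x = (1.0 + 0.8 * np.cos(2 * np.pi * 40 * t)) * np.cos(2 * np.pi * 1000 * t)
f_mod = am_frequency(x, fs)
```

Under the paper's account, a listener exploiting this cue would map a higher recovered `f_mod` to a faster-moving sound source.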
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
Saiki, Jun; Holcombe, Alex O
2012-03-06
Sudden change of every object in a display is typically conspicuous. We find however that in the presence of a secondary task, with a display of moving dots, it can be difficult to detect a sudden change in color of all the dots. A field of 200 dots, half red and half green, half moving rightward and half moving leftward, gave the appearance of two surfaces. When all 200 dots simultaneously switched color between red and green, performance in detecting the switch was very poor. A key display characteristic was that the color proportions on each surface (summary statistics) were not affected by the color switch. When the color switch is accompanied by a change in these summary statistics, people perform well in detecting the switch, suggesting that the secondary task does not disrupt the availability of this statistical information. These findings suggest that when the change is missed, the old and new colors were represented, but the color-location pattern (binding of colors to locations) was not represented or not compared. Even after extended viewing, changes to the individual color-location pattern are not available, suggesting that the feeling of seeing these details is misleading.
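Why the switch is hard to see can be illustrated numerically: a simultaneous red-green swap changes every dot's color-location binding while leaving the per-surface summary statistics untouched. A toy sketch (the two lists stand in for the rightward- and leftward-moving surfaces; the dot counts are from the display described above):

```python
# Two "surfaces": the rightward and leftward dot fields, each half red/green
right = ["red"] * 50 + ["green"] * 50
left = ["green"] * 50 + ["red"] * 50

def proportions(surface):
    """Per-surface summary statistic: the fraction of each color."""
    return {c: surface.count(c) / len(surface) for c in ("red", "green")}

before = (proportions(right), proportions(left))

# Every one of the 200 dots switches color at the same instant
swap = {"red": "green", "green": "red"}
right = [swap[c] for c in right]
left = [swap[c] for c in left]
after = (proportions(right), proportions(left))
# The summary statistics match, so a statistics-only observer sees no change
```

Detecting the swap therefore requires comparing individual color-location bindings across time, which is exactly what the study suggests is unavailable under a secondary task.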
Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter
NASA Astrophysics Data System (ADS)
Murphy, T.; Holzinger, M.
2016-09-01
Both SSA and SDA necessitate uncued, partially informed detection and orbit determination efforts for small space objects, which often produce only low-strength electro-optical signatures. General frame-to-frame detection and tracking of objects includes methods such as moving target indication, multiple hypothesis testing, direct track-before-detect methods, and random-finite-set-based multi-object tracking. This paper applies the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel contribution of this paper is a detailed analysis of the existing state-of-the-art likelihood functions and of a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5 m Raven-class telescope and a twenty-degree field-of-view high-frame-rate CMOS sensor. In particular, a data set from an extended pass of the Hitomi (Astro-H) satellite, approximately 3 days after loss of communication and potential breakup, is examined.
TrackTable Trajectory Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, Andrew T.
Tracktable is designed for analysis and rendering of the trajectories of moving objects such as planes, trains, automobiles and ships. Its purpose is to operate on large sets of trajectories (millions) to help a user detect, analyze and display patterns. It will also be used to disseminate trajectory research results from Sandia's PANTHER Grand Challenge LDRD.
NASA Astrophysics Data System (ADS)
Torteeka, Peerapong; Gao, Peng-Qi; Shen, Ming; Guo, Xiao-Zhang; Yang, Da-Tao; Yu, Huan-Huan; Zhou, Wei-Ping; Zhao, You
2017-02-01
Although tracking with a passive optical telescope is a powerful technique for space debris observation, it is limited by its sensitivity to dynamic background noise. Traditionally, in the field of astronomy, static background subtraction based on a median image technique has been used to extract moving space objects prior to the tracking operation, as this is computationally efficient. The main disadvantage of this technique is that it is not robust to variable illumination conditions. In this article, we propose an approach for tracking small and dim space debris in the context of a dynamic background via one of the optical telescopes that is part of the space surveillance network project, named the Asia-Pacific ground-based Optical Space Observation System or APOSOS. The approach combines a fuzzy running Gaussian average for robust moving-object extraction with dim-target tracking using a particle-filter-based track-before-detect method. The performance of the proposed algorithm is experimentally evaluated, and the results show that the scheme achieves a satisfactory level of accuracy for space debris tracking.
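The running Gaussian average at the heart of the extraction step can be sketched in a few lines. The version below is the classic crisp form with an illustrative learning rate and gate; the paper's fuzzy variant additionally adapts the update weight per pixel.

```python
import numpy as np

def update_running_gaussian(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a running Gaussian average background model.

    Pixels further than k standard deviations from the background mean
    are flagged as foreground; mean and variance are updated recursively."""
    diff = np.abs(frame - mean)
    foreground = diff > k * np.sqrt(var)
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return foreground, mean, var

# Static background at level 50 with one bright moving pixel at 200.
mean = np.full((4, 4), 50.0)
var = np.full((4, 4), 4.0)
frame = mean.copy()
frame[2, 2] = 200.0
fg, mean, var = update_running_gaussian(frame, mean, var)
```

Because the model updates recursively, a slow illumination drift is absorbed into the mean, while a fast-moving dim object keeps exceeding the gate, which is what makes the approach more robust than a static median image.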
Moving Object Detection in Heterogeneous Conditions in Embedded Systems.
Garbo, Alessandro; Quer, Stefano
2017-07-01
This paper presents a system for moving object detection, focusing on pedestrians, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts on each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and to detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. The major contribution of the paper is the presentation of a tool for real-time applications on embedded devices with finite computational (time and memory) resources. We report experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates.
Radar based autonomous sensor module
NASA Astrophysics Data System (ADS)
Styles, Tim
2016-10-01
Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)
1990-01-01
A method and apparatus is disclosed for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; its duration is roughly equal to the formation time of the grating. If the optical path changes more slowly than the formation time, the change becomes unobservable, because the index grating can follow it. Thus, objects traveling at speeds that change the optical path more slowly than the formation time are not observable and do not clutter the output image view.
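The temporal behaviour described above (transients faster than the grating formation time produce output, slower changes are tracked and suppressed) is that of a novelty filter, and it can be mimicked in software with an exponential running average standing in for the grating. The time constant and frame values below are illustrative, not drawn from the patent.

```python
import numpy as np

def novelty_filter(frames, tau=5.0):
    """Software analogy of the photorefractive novelty filter: the grating
    acts like an exponential running average with time constant tau (the
    grating formation time, in frames), and the output is the deviation
    from it. Changes slower than tau are tracked and suppressed."""
    alpha = 1.0 / tau
    avg = frames[0].astype(float)
    outputs = []
    for f in frames:
        outputs.append(np.abs(f - avg))
        avg = (1 - alpha) * avg + alpha * f
    return outputs

# A sudden step change produces a transient output that decays
# as the "grating" adapts to the new scene.
frames = [np.zeros((2, 2))] * 3 + [np.ones((2, 2)) * 10] * 10
out = novelty_filter(frames)
```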
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small fraction of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at full frame resolution. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. Specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
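The registration-then-difference idea can be sketched CPU-only, with a brute-force integer shift standing in for the registration step that the paper offloads to the GPU; the scene, shift, and threshold below are illustrative.

```python
import numpy as np

def register_translation(prev, curr, search=3):
    """Estimate the dominant integer background shift between two frames
    by exhaustive search (a simple stand-in for background registration)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = np.sum((np.roll(prev, (dy, dx), axis=(0, 1)) - curr) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def moving_mask(prev, curr, thresh=10.0):
    """Frame difference after compensating the estimated camera motion."""
    dy, dx = register_translation(prev, curr)
    warped = np.roll(prev, (dy, dx), axis=(0, 1))
    return np.abs(curr - warped) > thresh

# Background texture shifted by (1, 2) to mimic camera motion,
# plus one small target moving independently.
rng = np.random.default_rng(0)
prev = rng.uniform(0, 50, (32, 32))
curr = np.roll(prev, (1, 2), axis=(0, 1))
curr[10, 10] += 100.0
mask = moving_mask(prev, curr)
```

A real UAV pipeline would use a sub-pixel homography rather than a pure translation, but the structure (compensate the ego-motion, then difference) is the same.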
NASA Astrophysics Data System (ADS)
Ciurapiński, Wieslaw; Dulski, Rafal; Kastek, Mariusz; Szustakowski, Mieczyslaw; Bieszczad, Grzegorz; Życzkowski, Marek; Trzaskawka, Piotr; Piszczek, Marek
2009-09-01
The paper presents the concept of a multispectral perimeter-protection system for stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar allows the system to locate potential intruders and to cue the observation area for the system's cameras. The multisensor construction of the system ensures a significant improvement in intruder detection probability and a reduction in false alarms. The final decision is worked out using image data. The data fusion method used in the system is presented. The system works under the control of the FLIR Nexus system, which offers complete technology and components to create network-based, high-end integrated systems for security and surveillance applications. Based on a unique "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface, it is possible to control sensors, monitor streaming video and other data over the network, visualize the results of the data fusion process, and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and reduces operator workload with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification, and alarm triggering.
NASA Astrophysics Data System (ADS)
Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.
1993-05-01
This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m × 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.
Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.
Palmer, Stephen E; Langlois, Thomas A
2017-07-01
Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.
Vehicle and cargo container inspection system for drugs
NASA Astrophysics Data System (ADS)
Verbinski, Victor V.; Orphan, Victor J.
1999-06-01
A vehicle and cargo container inspection system has been developed which uses gamma-ray radiography to produce digital images useful for detection of drugs and other contraband. The system is comprised of a 1 Ci Cs-137 gamma-ray source collimated into a fan beam which is aligned with a linear array of NaI gamma-ray detectors located on the opposite side of the container. The NaI detectors are operated in the pulse-counting mode. A digital image of the vehicle or container is obtained by moving the aligned source and detector array relative to the object. Systems have been demonstrated in which the object is stationary (source and detector array move on parallel tracks) and in which the object moves past a stationary source and detector array. Scanning speeds of ~30 cm/s with a pixel size (at the object) of ~1 cm have been achieved. Faster scanning speeds of ~2 m/s have been demonstrated on railcars with more modest spatial resolution (4 cm pixels). Digital radiographic images are generated from the detector count rates. These images, recorded on a PC-based data acquisition and display system, are shown from several applications: 1) inspection of trucks and containers at a border crossing, 2) inspection of railcars at a border crossing, 3) inspection of outbound cargo containers for stolen automobiles, and 4) inspection of trucks and cars for terrorist bombs.
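The contrast in such a radiograph follows the Beer-Lambert attenuation law, which a short computation illustrates. The attenuation coefficients below are illustrative round numbers for 662 keV Cs-137 gammas, not values taken from the paper.

```python
import math

def transmission(mu, thickness_cm):
    """Fraction of gamma rays transmitted through a slab, via the
    Beer-Lambert law I/I0 = exp(-mu * x), with mu in 1/cm."""
    return math.exp(-mu * thickness_cm)

# Dense material hidden behind a steel container wall darkens the
# radiograph relative to the surrounding cargo.
t_steel = transmission(mu=0.58, thickness_cm=0.3)          # thin container wall
t_dense = t_steel * transmission(mu=0.2, thickness_cm=20)  # packed contraband
```

Mapping each detector's count rate through a relation like this, pixel by pixel as the fan beam sweeps the vehicle, is what turns pulse counts into the grayscale radiograph described above.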
Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions
NASA Astrophysics Data System (ADS)
Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.
2016-06-01
In this paper we propose a new approach to change detection and moving object detection in videos with unstable, abrupt illumination changes. The approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantages for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way and does not have the drawbacks of models that assume particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.
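As a simplified illustration of the background-normalization idea (not of the mutual comparative filters themselves, which are defined in the paper), dividing each frame by its own mean cancels a multiplicative illumination change while leaving a genuine scene change visible:

```python
import numpy as np

def normalized_change(frame_a, frame_b, eps=1e-6):
    """Illumination-tolerant change map: each frame is divided by its own
    global mean before differencing, so a multiplicative illumination
    change (e.g., lights dimming) cancels out."""
    na = frame_a / (frame_a.mean() + eps)
    nb = frame_b / (frame_b.mean() + eps)
    return np.abs(na - nb)

rng = np.random.default_rng(1)
scene = rng.uniform(10, 100, (16, 16))
dimmed = 0.5 * scene            # abrupt global illumination drop
moved = dimmed.copy()
moved[8, 8] *= 3.0              # one genuinely changed pixel
quiet = normalized_change(scene, dimmed)   # near-zero everywhere
alert = normalized_change(scene, moved)    # large at the changed pixel
```

A plain frame difference would flag every pixel of the dimmed frame; the normalized map stays quiet except where the scene truly changed.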
Astrometry with A-Track Using Gaia DR1 Catalogue
NASA Astrophysics Data System (ADS)
Kılıç, Yücel; Erece, Orhan; Kaplan, Murat
2018-04-01
In this work, we built all-sky index files from the Gaia DR1 catalogue for high-precision astrometric field solutions and precise WCS coordinates of moving objects. For this, we used the build-astrometry-index program, part of the astrometry.net code suite. Additionally, we added astrometry.net's WCS solution tool to our previously developed software, A-Track, a fast and robust pipeline for detecting moving objects such as asteroids and comets in sequential FITS images. Moreover, an MPC module was added to A-Track. This module is linked to an asteroid database to name the found objects and prepare the MPC file to report the results. After these additions, we tested the new version of the A-Track code on photometric data taken with the SI-1100 CCD on the 1-meter telescope at the TÜBİTAK National Observatory, Antalya. The pipeline can be used to analyse large data archives or daily sequential data. The code is hosted on GitHub under the GNU GPL v3 license.
Searching for Solar System Wide Binaries with Pan-STARRS-1
NASA Astrophysics Data System (ADS)
Holman, Matthew J.; Protopapas, P.; Tholen, D. J.
2007-10-01
Roughly 60% of the observing time of the Pan-STARRS-1 (PS1) telescope will be dedicated to a "3pi steradian" survey with an observing cadence designed for the detection of near-Earth asteroids and slow-moving solar system bodies. Over the course of its 3.5-year science mission, this unprecedented survey will discover nearly every asteroid, Trojan, Centaur, long-period comet, short-period comet, and trans-Neptunian object (TNO) brighter than magnitude R=23. This census will be used to address a large number of questions regarding the physical and dynamical properties of the various small-body populations of the solar system. Roughly 1-2% of TNOs are wide binaries with companions at separations greater than 1 arcsec and brightness differences less than 2 magnitudes (Kern & Elliot 2006; Noll et al. 2007). These can be readily detected by PS1; we will carry out such a search with PS1 data. To do so, we will modify the Pan-STARRS Moving Object Processing System (MOPS) so that it will associate the components of resolved or marginally resolved binaries, link such pairs of detections obtained at different epochs, and estimate the relative orbit of the binary. We will also determine the efficiency with which such binaries are detected as a function of the binary's relative orbit and the relative magnitudes of the components. Based on an estimated 7000 TNOs that PS1 will discover, we anticipate finding 70-140 wide binaries. The PS1 data, 60 epochs over three years, are naturally suited to determining the orbits of these objects. Our search will accurately determine the binary fraction for a variety of subclasses of TNOs.
Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control
Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda
2017-01-01
Some insects and mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch, in environments in which other senses such as vision cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam, and a load cell sensor that detects the contact of the beam with an object. The efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. These are severely impaired by the vibrations that appear in the antenna during its movement; nevertheless, such antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations. PMID:28406449
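The benefit of the closed damping loop described in the antenna abstract can be seen in a toy single-mode model of the beam: velocity feedback raises the effective damping ratio and removes the residual tip vibration after a slew. All parameters below are illustrative, not identified from the real antenna, and the scheme is generic velocity feedback rather than the authors' exact controller.

```python
def simulate_beam_tip(damping_gain, steps=2000, dt=0.001, omega=20.0):
    """Single-mode model of the antenna's first bending mode:
    x'' + (2*zeta*omega + damping_gain) * x' + omega^2 * x = 0,
    integrated with semi-implicit Euler. Returns the largest tip
    deflection seen in the second half of the simulation."""
    zeta = 0.01                       # light structural damping
    x, v = 1.0, 0.0                   # initial tip deflection after a slew
    peak_late = 0.0
    for i in range(steps):
        a = -(2 * zeta * omega + damping_gain) * v - omega**2 * x
        v += a * dt
        x += v * dt
        if i > steps // 2:
            peak_late = max(peak_late, abs(x))
    return peak_late

residual_open = simulate_beam_tip(damping_gain=0.0)     # vibration persists
residual_closed = simulate_beam_tip(damping_gain=10.0)  # vibration killed
```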
Operator-coached machine vision for space telerobotics
NASA Technical Reports Server (NTRS)
Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.
1991-01-01
A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system that demonstrates the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.
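The edge data consumed by the least-squares fit come from a Sobel-type operator. A plain (unmodified) 3x3 Sobel gradient magnitude, which the system's modified operator builds on, can be sketched as follows:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the standard 3x3 Sobel kernels,
    computed on the interior of the image (borders left at zero)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)   # horizontal gradient
            gy = np.sum(ky * patch)   # vertical gradient
            mag[y, x] = np.hypot(gx, gy)
    return mag

# A vertical step edge yields strong responses along the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
mag = sobel_magnitude(img)
```

Fitting a designated line to the locations of high-magnitude responses like these is what lets the system refine the operator's rough 3-D cursor strokes into accurate edges.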
NASA Astrophysics Data System (ADS)
Zhu, Zhen; Vana, Sudha; Bhattacharya, Sumit; Uijt de Haag, Maarten
2009-05-01
This paper discusses the integration of Forward-looking Infrared (FLIR) and traffic information from, for example, the Automatic Dependent Surveillance - Broadcast (ADS-B) or the Traffic Information Service-Broadcast (TIS-B). The goal of this integration method is to obtain an improved state estimate of a moving obstacle within the Field-of-View of the FLIR with added integrity. The focus of the paper will be on the approach phase of the flight. The paper will address methods to extract moving objects from the FLIR imagery and geo-reference these objects using outputs of both the onboard Global Positioning System (GPS) and the Inertial Navigation System (INS). The proposed extraction method uses a priori airport information and terrain databases. Furthermore, state information from the traffic information sources will be extracted and integrated with the state estimates from the FLIR. Finally, a method will be addressed that performs a consistency check between both sources of traffic information. The methods discussed in this paper will be evaluated using flight test data collected with a Gulfstream V in Reno, NV (GVSITE) and simulated ADS-B.
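One piece of such a pipeline, the consistency check between the FLIR-derived state and the ADS-B/TIS-B report, is commonly done with a chi-square gate on the difference of the two estimates. The sketch below assumes 2-D position estimates with known covariances; the gate value and noise figures are illustrative and the structure is a generic fused-sensor check, not the paper's exact algorithm.

```python
import numpy as np

def states_consistent(x_flir, p_flir, x_adsb, p_adsb, gate=9.21):
    """Integrity check between two position estimates of the same traffic
    object: the squared Mahalanobis distance of their difference, using
    the summed covariances, is compared against a chi-square gate
    (9.21 is the 99% point for 2 degrees of freedom)."""
    d = np.asarray(x_flir) - np.asarray(x_adsb)
    s = np.asarray(p_flir) + np.asarray(p_adsb)
    m2 = d @ np.linalg.inv(s) @ d
    return m2 <= gate

p = np.diag([25.0, 25.0])   # 5 m sigma per axis for each source
ok = states_consistent([100.0, 200.0], p, [103.0, 198.0], p)
bad = states_consistent([100.0, 200.0], p, [160.0, 240.0], p)
```

A failed gate flags either a FLIR extraction error or a faulty broadcast, which is the kind of added integrity the integration aims for.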
Camouflage during movement in the European cuttlefish (Sepia officinalis).
Josef, Noam; Berenshtein, Igal; Fiorito, Graziano; Sykes, António V; Shashar, Nadav
2015-11-01
A moving object is considered conspicuous because of the movement itself. When moving from one background to another, even dynamic camouflage experts such as cephalopods should sacrifice their extraordinary camouflage. Therefore, minimizing detection at this stage is crucial and highly beneficial. In this study, we describe a background-matching mechanism during movement, which aids the cuttlefish to downplay its presence throughout movement. In situ behavioural experiments using video and image analysis, revealed a delayed, sigmoidal, colour-changing mechanism during movement of Sepia officinalis across uniform black and grey backgrounds. This is a first important step in understanding dynamic camouflage during movement, and this new behavioural mechanism may be incorporated and applied to any dynamic camouflaging animal or man-made system on the move. © 2015. Published by The Company of Biologists Ltd.
Ernst, Zachary Raymond; Palmer, John; Boynton, Geoffrey M.
2012-01-01
In object-based attention, it is easier to divide attention between features within a single object than between features across objects. In this study we test the predictions of several capacity models in order to best characterize the cost of dividing attention between objects. We studied behavioral performance on a divided attention task in which subjects attended to the motion and luminance of overlapping random dot kinematograms, specifically red upward-moving dots superimposed on green downward-moving dots. Subjects were required to detect brief changes (transients) in the motion or luminance within the same surface or across different surfaces. There were two primary results. First, the dual-task deficit was large when attention was divided across two surfaces and near zero when attention was divided within a surface. This is consistent with limited-capacity processing across surfaces and unlimited-capacity processing within a surface, a pattern predicted by established theories of object-based attention. Second, and unexpectedly, there was evidence of crosstalk between features: when cued to monitor transients on one surface, response rates were inflated by the presence of a transient on the other surface. Such crosstalk is a failure of selective attention between surfaces. PMID:23149301
Bourbakis, N G
1997-01-01
This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, known or unknown navigation space. In previous work by X. Grossmman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, with a control center coordinating and synchronizing their movements. In this work, the robots are considered autonomous: they move anywhere and in any direction inside the free space, and there is no need for central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine their perceived sizes, velocities, and directions. Based on these assumptions, each robot needs a traffic priority language enabling it to make decisions during navigation and to avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.
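A concrete building block for applying such priority rules is a closest-point-of-approach test: a robot only needs to invoke a priority rule for objects whose predicted miss distance falls below a safety radius. The kinematic check below is generic, in the spirit of but not taken from KYKLOFORTA.

```python
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Time and distance of closest approach for two objects moving at
    constant velocity. The time is clamped to >= 0 (the past does not
    matter for collision avoidance); for equal velocities the current
    separation is the permanent miss distance."""
    dp = np.asarray(p1, float) - np.asarray(p2, float)
    dv = np.asarray(v1, float) - np.asarray(v2, float)
    denom = dv @ dv
    t = 0.0 if denom == 0 else max(0.0, -(dp @ dv) / denom)
    return t, float(np.linalg.norm(dp + t * dv))

# Head-on robots on the same line: zero miss distance at t = 5.
t, miss = closest_approach([0, 0], [1, 0], [10, 0], [-1, 0])
# Parallel robots on offset lanes never get closer than 3 units.
t2, miss2 = closest_approach([0, 0], [1, 0], [0, 3], [1, 0])
```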
Going, Going, Gone: Localizing Abrupt Offsets of Moving Objects
ERIC Educational Resources Information Center
Maus, Gerrit W.; Nijhawan, Romi
2009-01-01
When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the…
Pedestrian detection based on redundant wavelet transform
NASA Astrophysics Data System (ADS)
Huang, Lin; Ji, Liping; Hu, Ping; Yang, Tiejun
2016-10-01
Intelligent video surveillance analyzes video or image sequences captured by a fixed or mobile surveillance camera, covering moving object detection, segmentation, and recognition, so that operators can be notified immediately of abnormal situations. Pedestrian detection plays an important role in an intelligent video surveillance system, and it is also a key technology in the field of intelligent vehicles. Pedestrian detection is therefore of vital significance for traffic management optimization, early security warning, and abnormal behavior detection. Generally, pedestrian detection can be summarized as: first, estimate moving areas; then, extract features from regions of interest; finally, classify using a classifier. The redundant wavelet transform (RWT) overcomes the shift variance of the discrete wavelet transform and performs better in motion estimation. Addressing the problem of detecting multiple pedestrians moving at different speeds, we present a pedestrian detection algorithm based on motion estimation using RWT, combined with histograms of oriented gradients (HOG) and a support vector machine (SVM). First, three intensities of movement (IoM) are estimated using RWT and the corresponding areas are segmented. According to the different IoM, region proposals (RP) are generated. Then the features of each RP are extracted using HOG. Finally, the features are fed into an SVM trained on pedestrian databases to obtain the final detection results. Experiments show that the proposed algorithm can detect pedestrians accurately and efficiently.
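The first stage, quantising motion into three intensities of movement, can be sketched with a plain per-block frame difference standing in for the RWT motion estimation; block size and thresholds below are illustrative choices.

```python
import numpy as np

def motion_intensity_map(prev, curr, block=8, thresholds=(5.0, 20.0)):
    """Coarse stand-in for the paper's RWT motion estimation: the mean
    absolute frame difference per block is quantised into three
    intensities of movement (0 = static, 1 = slow, 2 = fast), from
    which region proposals for a HOG+SVM classifier would be cut."""
    h, w = prev.shape
    levels = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            sl = np.s_[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            d = np.mean(np.abs(curr[sl] - prev[sl]))
            levels[by, bx] = int(d > thresholds[0]) + int(d > thresholds[1])
    return levels

prev = np.zeros((32, 32))
curr = prev.copy()
curr[0:8, 0:8] += 50.0      # fast mover
curr[8:16, 8:16] += 10.0    # slow mover
levels = motion_intensity_map(prev, curr)
```

Grouping adjacent non-zero blocks of the same level would then yield the region proposals handed to the feature extractor.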
Mining moving object trajectories in location-based services for spatio-temporal database update
NASA Astrophysics Data System (ADS)
Guo, Danhuai; Cui, Weihong
2008-10-01
Advances in wireless transmission and mobile technology applied to LBS (Location-Based Services) produce large amounts of moving-object data. The vast amounts of data gathered from the position sensors of mobile phones, PDAs, or vehicles hide interesting and valuable knowledge describing the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geographic features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited. Urban expansion and frequent changes to town plans leave a large amount of outdated or imprecise data in the spatial databases of LBS, and this data cannot be updated timely and efficiently by manual processing. In this paper we introduce a data mining approach to extracting the movement patterns of moving objects, build a model describing the relationship between the movement patterns of LBS mobile objects and their environment, and propose a spatio-temporal database update strategy for LBS databases based on spatio-temporal trajectory mining. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving-object data mining, and an experimental application of spatio-temporal database updating by mining moving object trajectories.
NASA Technical Reports Server (NTRS)
Lewis, Steven J.; Palacios, David M.
2013-01-01
This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).
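The tripwire behavior described above can be sketched as follows. This is a hypothetical illustration of the idea, not the Tracker Toolkit's actual API (the class and method names are invented): detections are associated with the nearest existing track when close enough, and new tracks are spawned only for detections inside the tripwire rectangle.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    positions: list = field(default_factory=list)  # (x, y) per frame

class TripwireTracker:
    """Sketch of tripwire-initiated tracking (illustrative names)."""

    def __init__(self, tripwire, gate=5.0):
        self.tripwire = tripwire          # (x0, y0, x1, y1) watched region
        self.gate2 = gate * gate          # squared association gate in pixels
        self.tracks = []
        self._next_id = 0

    def _inside(self, x, y):
        x0, y0, x1, y1 = self.tripwire
        return x0 <= x <= x1 and y0 <= y <= y1

    def update(self, detections):
        for (x, y) in detections:
            # Crude nearest-neighbor association against existing tracks
            best = min(
                self.tracks,
                key=lambda t: (t.positions[-1][0] - x) ** 2
                            + (t.positions[-1][1] - y) ** 2,
                default=None)
            if best is not None and ((best.positions[-1][0] - x) ** 2
                                     + (best.positions[-1][1] - y) ** 2) <= self.gate2:
                best.positions.append((x, y))         # continue existing track
            elif self._inside(x, y):
                self.tracks.append(Track(self._next_id, [(x, y)]))
                self._next_id += 1                    # new track from tripwire
            # else: detection outside the tripwire and unmatched -> ignored
```

A real tracker would also handle track termination (object leaves the frame, stops, or blends into the background) and use visual features to disambiguate the association step.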
NASA Astrophysics Data System (ADS)
Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan
2018-04-01
In recent years the Convolutional Neural Network (CNN) has been widely used in computer vision and has driven great progress in tasks such as object detection and classification. Combining CNNs, that is, running multiple CNN frameworks synchronously and sharing their outputs, can yield useful information that none of them provides on its own. Here we introduce a method for real-time object speed estimation that combines two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, location, and type, while FlowNet provides the optical flow of the whole image. On one hand, object size and location select the object's part of the optical flow image, from which the average optical flow of each object is computed. On the other hand, object type and size determine the relationship between optical flow and true speed by means of optics theory and prior knowledge. With these two pieces of information, object speed can be estimated. The method estimates the speed of multiple objects in real time using only an ordinary camera, even while the camera is moving, with error acceptable in most application fields such as driverless vehicles and robot vision.
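The two glue steps of this combination, averaging the flow inside a detection box and scaling pixel flow to physical speed using a per-class size prior, can be sketched as below. This is an assumption-laden sketch, not the paper's code: the box format `(x, y, w, h)` and the use of a known physical object width as the prior are illustrative choices.

```python
import numpy as np

def mean_object_flow(flow, box):
    """Average optical flow (u, v) inside a detection box.
    flow: HxWx2 array of per-pixel flow; box: (x, y, w, h) in pixels."""
    x, y, w, h = box
    patch = flow[y:y + h, x:x + w]           # crop the object's flow region
    return patch.reshape(-1, 2).mean(axis=0)

def flow_to_speed(mean_flow_px, object_px_width, true_width_m, fps):
    """Scale pixel flow to metres/second using a known physical width for
    the detected class as the prior (the 'prior knowledge' step, sketched)."""
    metres_per_px = true_width_m / object_px_width
    return float(np.linalg.norm(mean_flow_px) * metres_per_px * fps)
```

For example, a car-class box 40 px wide assumed to be 1.8 m across, moving 2 px/frame at 30 fps, comes out at 2.7 m/s; a real system would also correct for perspective and camera motion.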
Sensor planning for moving targets
NASA Astrophysics Data System (ADS)
Musman, Scott A.; Lehner, Paul; Elsaesser, Chris
1994-10-01
Planning a search for moving ground targets is difficult for humans and computationally intractable. This paper describes a technique to solve such problems. The main idea is to combine probability of detection assessments with computational search heuristics to generate sensor plans which approximately maximize either the probability of detection or a user-specified knowledge function (e.g., determining the target's probable destination; locating the enemy tanks). In contrast to supercomputer-based moving target search planning, our technique has been implemented using workstation technology. The data structures generated by sensor planning can be used to evaluate sensor reports during plan execution. Our system revises its objective function with each sensor report, allowing the user to assess both the current situation and the expected value of future information. This capability is particularly useful in situations involving a high rate of sensor reporting, helping the user focus attention on the sensor reports most pertinent to current needs. Our planning approach is implemented in a three-layer architecture: mobility analysis, followed by sensor coverage analysis, and concluding with sensor plan analysis. These layers make it possible to describe the physical, spatial, and temporal characteristics of a scenario in the first two layers and to customize the final analysis to specific intelligence objectives. The architecture also allows a user to customize operational parameters in each of the three major components of the system. As examples of these performance options, we briefly describe the mobility analysis and discuss issues affecting sensor plan analysis.
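The core idea, pairing probability-of-detection assessments with a search heuristic, can be illustrated with a greedy sensor-selection sketch. This is not the paper's algorithm, only a minimal example of the combination it describes: the target's location is a probability distribution over cells, each candidate sensor covers some cells with some detection probability, and sensors are chosen one at a time to maximize overall detection probability.

```python
def detection_prob(chosen, target_probs, coverage):
    """P(detect) = sum over cells of P(target in cell) * P(some chosen
    sensor detects it there), assuming independent sensor misses."""
    total = 0.0
    for cell, p_target in target_probs.items():
        p_miss = 1.0
        for sensor in chosen:
            p_miss *= 1.0 - coverage[sensor].get(cell, 0.0)
        total += p_target * (1.0 - p_miss)
    return total

def greedy_plan(target_probs, coverage, budget):
    """Greedy heuristic: repeatedly add the sensor with the best
    marginal gain in detection probability, up to the budget."""
    chosen = []
    for _ in range(budget):
        best = max((s for s in coverage if s not in chosen),
                   key=lambda s: detection_prob(chosen + [s],
                                                target_probs, coverage),
                   default=None)
        if best is None:
            break
        chosen.append(best)
    return chosen
```

Because the detection objective is submodular, this kind of greedy choice is a standard, well-behaved heuristic for otherwise intractable sensor-placement problems.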
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Tushar, E-mail: tushar@barc.gov.in; Kashyap, Yogesh; Shukla, Mayank
The associated particle technique (APT) for detection of explosives is well established but has been implemented mostly in fixed portal systems. In certain situations a portable system is required, where the suspect object cannot be moved from the site. This paper discusses the development of a portable APT system in single-sided geometry, which can be transported to the site and requires only one-sided access to the object. The system comprised a D-T neutron source and bismuth germanate (BGO) detectors fixed on a portable module. Different aspects of the system are discussed, such as background contribution, time selection, and elemental signatures. The system was used to detect benign samples and explosive simulants under laboratory conditions. The elemental ratios obtained by analyzing the gamma spectra show good agreement with the theoretical ratios.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-28
...) Over the horizon high frequency sky-wave (ionosphere) radar; (xvi) Radar that detects a moving object... any dimension equal to or less than one quarter (\\1/4\\) wavelength of the highest operating frequency... capability; (B) Operating frequency less than 20 kHz; (C) Bandwidth greater than 10 kHz; or (D) Capable of...
ERIC Educational Resources Information Center
Damonte, Kathleen
2005-01-01
A fly is buzzing around in the kitchen. You sneak up on it with a flyswatter, but just as you get close to it, it flies away. What makes flies and other insects so good at escaping from danger? The fact that insects have eyesight that can easily detect moving objects is one of the things that help them survive. In this month's Science Shorts,…
System and method for tracking a signal source. [employing feedback control
NASA Technical Reports Server (NTRS)
Mogavero, L. N.; Johnson, E. G.; Evans, J. M., Jr.; Albus, J. S. (Inventor)
1978-01-01
A system for tracking moving signal sources is disclosed which is particularly adaptable for use in tracking stage performers. A miniature transmitter is attached to the person or object to be tracked and emits a detectable signal of a predetermined frequency. A plurality of detectors positioned in a preset pattern sense the signal and supply output information to a phase detector which applies signals representing the angular orientation of the transmitter to a computer. The computer provides command signals to a servo network which drives a device such as a motor driven mirror reflecting the beam of a spotlight, to track the moving transmitter.
2003-07-18
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-B, Cape Canaveral Air Force Station, the first stage of a Delta II rocket is moved into the mobile service tower. The rocket is being erected to launch the Space InfraRed Telescope Facility (SIRTF). Consisting of a 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
2003-07-18
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-B, Cape Canaveral Air Force Station, the first stage of a Delta II rocket is nearly erect for its move into the mobile service tower. The rocket is being erected to launch the Space InfraRed Telescope Facility (SIRTF). Consisting of a 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
Comparison of visual sensitivity to human and object motion in autism spectrum disorder.
Kaiser, Martha D; Delmolino, Lara; Tanaka, James W; Shiffrar, Maggie
2010-08-01
Successful social behavior requires the accurate detection of other people's movements. Consistent with this, typical observers demonstrate enhanced visual sensitivity to human movement relative to equally complex, nonhuman movement [e.g., Pinto & Shiffrar, 2009]. A psychophysical study investigated visual sensitivity to human motion relative to object motion in observers with autism spectrum disorder (ASD). Participants viewed point-light depictions of a moving person and, for comparison, a moving tractor and discriminated between coherent and scrambled versions of these stimuli in unmasked and masked displays. There were three groups of participants: young adults with ASD, typically developing young adults, and typically developing children. Across masking conditions, typical observers showed enhanced visual sensitivity to human movement while observers in the ASD group did not. Because the human body is an inherently social stimulus, this result is consistent with social brain theories [e.g., Pelphrey & Carter, 2008; Schultz, 2005] and suggests that the visual systems of individuals with ASD may not be tuned for the detection of socially relevant information such as the presence of another person. Reduced visual sensitivity to human movements could compromise important social behaviors including, for example, gesture comprehension.
Gaia-GBOT asteroid finding programme (gbot.obspm.fr)
NASA Astrophysics Data System (ADS)
Bouquillon, Sébastien; Altmann, Martin; Taris, Francois; Barache, Christophe; Carlucci, Teddy; Tanga, Paolo; Thuillot, William; Marchant, Jon; Steele, Iain; Lister, Tim; Berthier, Jerome; Carry, Benoit; David, Pedro; Cellino, Alberto; Hestroffer, Daniel J.; Andrei, Alexandre Humberto; Smart, Ricky
2016-10-01
The Ground Based Optical Tracking group (GBOT) consists of about ten scientists involved in ESA's Gaia mission. Its main task is the optical tracking of the Gaia satellite itself [1]. This novel tracking method, in addition to standard radiometric ones, is necessary to ensure that the Gaia mission goal in terms of astrometric precision is reached for all objects. The optical tracking is based on daily observations performed throughout the mission using the optical CCDs of ESO's VST in Chile, the Liverpool Telescope in La Palma, and the two LCOGT Faulkes Telescopes in Hawaii and Australia. Each night, GBOT attempts to obtain a sequence of frames covering a 20 min total period close to the Gaia meridian transit time. In each sequence, Gaia is seen as a faint moving object (Rmag ~ 21, speed > 1"/min), and its daily astrometric accuracy has to be better than 0.02" to meet the Gaia mission requirements. The GBOT Astrometric Reduction Pipeline (GARP) [2] has been specifically developed to reach this precision. More recently, a secondary task has been assigned to GBOT, which consists of detecting and analysing Solar System Objects (SSOs) serendipitously recorded in the GBOT data. Indeed, since Gaia oscillates around the Sun-Earth L2 point, the fields of GBOT observations are near the Ecliptic and roughly located opposite to the Sun, which is advantageous for SSO observations and studies. In particular, these SSO data can potentially be very useful in determining absolute magnitudes, with important applications to the scientific exploitation of the WISE and Gaia missions. For these reasons, an automatic SSO detection system has been created to identify moving objects in GBOT observation sequences. Since the beginning of 2015, this SSO detection system, added to GARP to perform high-precision astrometry for SSOs, has been fully operational. To date, around 9000 asteroids have been detected.
The mean delay between the time of observation and the submission of the SSO reduction results to the MPC is less than 12 hours, allowing rapid follow-up of new objects. [1] Altmann et al. 2014, SPIE, 9149. [2] Bouquillon et al. 2014, SPIE, 9152.
Attentional enhancement during multiple-object tracking.
Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K
2009-04-01
What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.
Multidisciplinary unmanned technology teammate (MUTT)
NASA Astrophysics Data System (ADS)
Uzunovic, Nenad; Schneider, Anne; Lacaze, Alberto; Murphy, Karl; Del Giorno, Mark
2013-01-01
The U.S. Army Tank Automotive Research, Development and Engineering Center (TARDEC) held an autonomous robot competition called CANINE in June 2012. The goal of the competition was to develop innovative and natural control methods for robots. This paper describes the winning technology, including the vision system, the operator interaction, and the autonomous mobility. The rules stated that only gestures or voice commands could be used for control. The robots would learn a new object at the start of each phase, find the object after it was thrown into a field, and return the object to the operator. Each of the six phases became more difficult, adding clutter of the same color or shape as the object, moving and stationary obstacles, and finding the operator, who had moved from the starting location to a new one. The Robotic Research Team integrated techniques in computer vision, speech recognition, object manipulation, and autonomous navigation. A multi-filter computer vision solution reliably detected the objects while rejecting objects of similar color or shape, even while the robot was in motion. A speech-based interface with short commands provided close-to-natural communication of complicated commands from the operator to the robot. An innovative gripper design allowed for efficient object pickup. A robust autonomous mobility and navigation solution for ground robotic platforms provided fast and reliable obstacle avoidance and course navigation. The research approach focused on winning the competition while remaining cognizant of and relevant to real-world applications.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
…target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image… fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The… moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
A framework for activity detection in wide-area motion imagery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porter, Reid B; Ruggiero, Christy E; Morrison, Jack D
2009-01-01
Wide-area persistent imaging systems are becoming increasingly cost effective, and large areas of the earth can now be imaged at relatively high frame rates (1-2 fps). The efficient exploitation of the large geo-spatial-temporal datasets produced by these systems poses significant technical challenges for image and video analysis and data mining. In recent years significant progress has been made on stabilization, moving object detection, and tracking, and automated systems now generate hundreds to thousands of vehicle tracks from raw data with little human intervention. However, tracking performance at this scale is unreliable, and the average track length is much smaller than the average vehicle route. This is a limiting factor for applications which depend heavily on track identity, i.e., tracking vehicles from their points of origin to their final destinations. In this paper we propose and investigate a framework for wide-area motion imagery (WAMI) exploitation that minimizes the dependence on track identity. In its current form this framework takes noisy, incomplete moving object detection tracks as input and produces a small set of activities (e.g., multi-vehicle meetings) as output. The framework can be used to focus and direct human users and additional computation, and suggests a path towards high-level content extraction by learning from the human-in-the-loop.
NASA Astrophysics Data System (ADS)
Truebenbach, Alexandra; Darling, Jeremy
2018-01-01
We present the VLBA Extragalactic Proper Motion Catalog, a catalog of extragalactic proper motions created using archival VLBI data and our own VLBA astrometry. The catalog contains 713 proper motions, with average uncertainties of ~ 24 microarcsec/yr, including 40 new or improved proper motion measurements using relative astrometry with the VLBA. We detect the secular aberration drift – the apparent motion of extragalactic objects caused by the solar system's acceleration around the Galactic Center – at 6.3 sigma significance with an amplitude of 1.69 +/- 0.27 microarcsec/yr and an apex consistent with the Galactic Center (275.2 +/- 10.0 deg, -29.4 +/- 8.8 deg). Our dipole model detects the aberration drift at a higher significance than some previous studies (e.g., Titov & Lambert 2013), but at a lower amplitude than expected or previously measured. We then use the correlated relative proper motions of extragalactic objects to place upper limits on the rate of large-scale structure collapse (e.g., Quercellini et al. 2009; Darling 2013). Pairs of small separation objects that are in gravitationally interacting structures such as filaments of large-scale structure will show a net decrease in angular separation (> - 15.5 microarcsec/yr) as they move towards each other, while pairs of large separation objects that are gravitationally unbound and move with the Hubble expansion will show no net change in angular separation. With our catalog, we place a 3 sigma limit on the rate of convergence of large-scale structure of -11.4 microarcsec/yr for extragalactic objects within 100 comoving Mpc of each other. We also confirm that large separation objects (> 800 comoving Mpc) move with the Hubble flow to within ~ 2.2 microarcsec/yr. 
In the future, we plan to incorporate the upcoming Gaia proper motions into our catalog to achieve a higher-precision measurement of the average relative proper motion of gravitationally interacting extragalactic objects and to refine our measurement of the collapse of large-scale structure. This research was performed with support from NSF grant AST-1411605. Darling, J. 2013, AJ, 777, L21; Quercellini et al. 2009, Phys. Rev. Lett., 102, 151302; Titov, O. & Lambert, S. 2013, A&A, 559, A95
NASA Astrophysics Data System (ADS)
Trammell, Hoke S., III; Perry, Alexander R.; Kumar, Sankaran; Czipott, Peter V.; Whitecotton, Brian R.; McManus, Tobin J.; Walsh, David O.
2005-05-01
Magnetic sensors configured as a tensor magnetic gradiometer not only detect magnetic targets but also determine their location and magnetic moment. Magnetic moment information can be used to characterize and classify objects. Unexploded ordnance (UXO), and thus many types of improvised explosive device (IED), contain steel and can therefore be detected magnetically. Suitable unmanned aerial vehicle (UAV) platforms, both gliders and powered craft, can cover a search area much more rapidly than surveys using, for instance, total-field magnetometers. We present data from gradiometer passes over different shells using a gradiometer mounted on a moving cart. We also provide detection range and speed estimates for aerial detection by a UAV.
Fechler, K; Holtkamp, D; Neusel, G; Sanguinetti-Scheck, J I; Budelli, R; von der Emde, G
2012-12-01
In a food-rewarded two-alternative forced-choice procedure, it was determined how well the weakly electric elephantnose fish Gnathonemus petersii can sense gaps between two objects, some of which were placed in front of complex backgrounds. The results show that at close distances, G. petersii is able to detect gaps between two small metal cubes (2 cm × 2 cm × 2 cm) down to a width of c. 1·5 mm. When larger objects (3 cm × 3 cm × 3 cm) were used, gaps with a width of 2-3 mm could still be detected. Discrimination performance was better (c. 1 mm gap size) when the objects were placed in front of a moving background consisting of plastic stripes or plant leaves, indicating that movement in the environment plays an important role for object identification. In addition, the smallest gap size that could be detected at increasing distances was determined. A linear relationship between object distance and gap size existed. Minimal detectable gap sizes increased from c. 1·5 mm at a distance of 1 cm, to 20 mm at a distance of 7 cm. Measurements and simulations of the electric stimuli occurring during gap detection revealed that the electric images of two close objects influence each other and superimpose. A large gap of 20 mm between two objects induced two clearly separated peaks in the electric image, while a 2 mm gap caused just a slight indentation in the image. Therefore, the fusion of electric images limits spatial resolution during active electrolocation. Relative movements either between the fish and the objects or between object and background might improve spatial resolution by accentuating the fine details of the electric images. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.
Vertical flow chemical detection portal
Linker, K.L.; Hannum, D.W.; Conrad, F.J.
1999-06-22
A portal apparatus is described for screening objects or persons for the presence of trace amounts of chemical substances such as illicit drugs or explosives. The apparatus has a test space, in which a person may stand, defined by two generally upright sides spanned by a horizontal transom. One or more fans in the transom generate a downward air flow (uni-directional) within the test space. The air flows downwardly from a high pressure upper zone, past the object or person to be screened. Air moving past the object dislodges from the surface thereof both volatile and nonvolatile particles of the target substance. The particles are entrained into the air flow which continues flowing downward to a lower zone of reduced pressure, where the particle-bearing air stream is directed out of the test space and toward preconcentrator and detection components. The sides of the portal are specially configured to partially contain and maintain the air flow. 3 figs.
Implementation of jump-diffusion algorithms for understanding FLIR scenes
NASA Astrophysics Data System (ADS)
Lanterman, Aaron D.; Miller, Michael I.; Snyder, Donald L.
1995-07-01
Our pattern-theoretic approach to the automated understanding of forward-looking infrared (FLIR) images brings the traditionally separate endeavors of detection, tracking, and recognition together into a unified jump-diffusion process. New objects are detected and object types are recognized through discrete jump moves. Between jumps, the location and orientation of objects are estimated via continuous diffusions. A hypothesized scene, simulated from the emissive characteristics of the hypothesized scene elements, is compared with the collected data by a likelihood function based on sensor statistics. This likelihood is combined with a prior distribution defined over the set of possible scenes to form a posterior distribution. The jump-diffusion process empirically generates the posterior distribution. Both the diffusion and jump operations involve the simulation of a scene produced by a hypothesized configuration. Scene simulation is most effectively accomplished by pipelined rendering engines such as those from Silicon Graphics. We demonstrate the execution of our algorithm on a Silicon Graphics Onyx/RealityEngine.
Doublet Pulse Coherent Laser Radar for Tracking of Resident Space Objects
NASA Technical Reports Server (NTRS)
Prasad, Narasimha S.; Rudd, Van; Shald, Scott; Sandford, Stephen; Dimarcantonio, Albert
2014-01-01
In this paper, the development of a long-range ladar system known as ExoSPEAR at NASA Langley Research Center for tracking rapidly moving resident space objects is discussed. Based on a 100 W, nanosecond-class, near-IR laser, this ladar system with a coherent detection technique is currently being investigated for short dwell time measurements of resident space objects (RSOs) in LEO and beyond for space surveillance applications. This unique ladar architecture is configured using a continuously agile doublet-pulse waveform scheme coupled to a closed-loop tracking and control approach to simultaneously achieve mm-class range precision and mm/s velocity precision and hence obtain unprecedented track accuracies. Salient features of the design architecture are presented, followed by performance modeling and engagement simulations illustrating the dependence of range and velocity precision in LEO orbits on ladar parameters. Estimated limits on detectable optical cross sections of RSOs in LEO orbits are discussed.
FieldSAFE: Dataset for Obstacle Detection in Agriculture.
Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm
2017-11-09
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360 ∘ camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
Systematic distortions of perceptual stability investigated using immersive virtual reality
Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew
2010-01-01
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. 
Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
Formal Verification of Safety Buffers for State-Based Conflict Detection and Resolution
NASA Technical Reports Server (NTRS)
Herencia-Zapana, Heber; Jeannin, Jean-Baptiste; Munoz, Cesar A.
2010-01-01
The information provided by global positioning systems is never totally exact, and there are always errors when measuring position and velocity of moving objects such as aircraft. This paper studies the effects of these errors in the actual separation of aircraft in the context of state-based conflict detection and resolution. Assuming that the state information is uncertain but that bounds on the errors are known, this paper provides an analytical definition of a safety buffer and sufficient conditions under which this buffer guarantees that actual conflicts are detected and solved. The results are presented as theorems, which were formally proven using a mechanical theorem prover.
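The buffer idea above can be sketched numerically. Below is a minimal, illustrative Python check, not the paper's formally verified definition: given bounds ep on position error and ev on velocity error for each aircraft, the protected distance D is inflated by the worst-case combined error, which grows linearly with look-ahead time, so any actual loss of separation within the look-ahead window is conservatively flagged. All names and parameter values are assumptions for illustration.

```python
import math

def buffered_conflict(p1, v1, p2, v2, D, ep, ev, t_max, dt=1.0):
    """Conservative state-based conflict check (illustrative sketch).

    Measured 2-D positions p1, p2 and velocities v1, v2 are tuples; each
    true position lies within ep of its measurement and each true velocity
    within ev. Inflating the protected distance D by the worst-case
    measurement error guarantees that any actual conflict in [0, t_max]
    is detected, at the cost of some false alerts.
    """
    t = 0.0
    while t <= t_max:
        dx = (p1[0] + v1[0] * t) - (p2[0] + v2[0] * t)
        dy = (p1[1] + v1[1] * t) - (p2[1] + v2[1] * t)
        # Worst-case relative error: two position errors plus two
        # velocity errors integrated over the look-ahead time.
        buffer = 2.0 * (ep + ev * t)
        if math.hypot(dx, dy) < D + buffer:
            return True  # possible conflict given the error bounds
        t += dt
    return False
```

With a head-on encounter the check fires before the unbuffered distance drops below D; the price of the guarantee is a higher alert rate, which is exactly the trade-off the safety-buffer analysis quantifies.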
Laser Prevention of Earth Impact Disasters
NASA Technical Reports Server (NTRS)
Campbell, J.; Smalley, L.; Boccio, D.; Howell, Joe T. (Technical Monitor)
2002-01-01
We now believe that while there are about 2,000 Earth-orbit-crossing rocks greater than 1 kilometer in diameter, there may be as many as 100,000 or more objects in the 100 m size range. Can anything be done about this fundamental existence question facing us? The answer is a resounding yes! We have the technology to prevent collisions. By using an intelligent combination of Earth- and space-based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket, incrementally changing the shape of the rock's orbit around the Sun. One-kilometer-size rocks can be moved sufficiently in a month, while smaller rocks may be moved in a shorter time span. We recommend that the world's space objectives be immediately reprioritized to start us moving quickly towards a multiple-option defense capability. While lasers should be the primary approach, all mitigation options depend on robust early-warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage in time to allow mitigation. Infrastructure options should include ground, LEO, GEO, lunar, and libration-point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close-range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.
Asteroids in the High Cadence Transient Survey
NASA Astrophysics Data System (ADS)
Peña, J.; Fuentes, C.; Förster, F.; Maureira, J. C.; San Martín, J.; Littín, J.; Huijse, P.; Cabrera-Vives, G.; Estévez, P. A.; Galbany, L.; González-Gaitán, S.; Martínez, J.; de Jaeger, Th.; Hamuy, M.
2018-03-01
We report on the serendipitous observations of solar system objects imaged during the High Cadence Transient Survey 2014 observation campaign. Data from this high-cadence, wide-field survey were originally analyzed for finding variable static sources using machine learning to select the most likely candidates. In this work, we search for moving transients consistent with solar system objects and derive their orbital parameters. We use a simple, custom motion detection algorithm to link trajectories and assume Keplerian motion to derive each asteroid's orbital parameters. We use known asteroids from the Minor Planet Center database to assess the detection efficiency of the survey and our search algorithm. Trajectories have an average of nine detections spread over two days, and our fit yields typical errors of σ_a ∼ 0.07 au, σ_e ∼ 0.07, and σ_i ∼ 0.5° in semimajor axis, eccentricity, and inclination, respectively, for known asteroids in our sample. We extract 7700 orbits from our trajectories, identifying 19 near-Earth objects, 6687 asteroids, 14 Centaurs, and 15 trans-Neptunian objects. This highlights the complementarity of supernova wide-field surveys for solar system research and the significance of machine learning to clean data of false detections. It is a good example of the data-driven science that the Large Synoptic Survey Telescope will deliver.
ERIC Educational Resources Information Center
Damonte, Kathleen
2004-01-01
One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…
First results of the Test-Bed Telescopes (TBT) project: Cebreros telescope commissioning
NASA Astrophysics Data System (ADS)
Ocaña, Francisco; Ibarra, Aitor; Racero, Elena; Montero, Ángel; Doubek, Jirí; Ruiz, Vicente
2016-07-01
The TBT project is being developed under ESA's General Studies and Technology Programme (GSTP), and shall implement a test-bed for the validation of an autonomous optical observing system in a realistic scenario within the Space Situational Awareness (SSA) programme of the European Space Agency (ESA). The goal of the project is to provide two fully robotic telescopes, which will serve as prototypes for development of a future network. The system consists of two telescopes, one in Spain and the second one in the Southern Hemisphere. The telescope is a fast astrograph with a large Field of View (FoV) of 2.5 x 2.5 square degrees and a plate scale of 2.2 arcsec/pixel. The tube is mounted on a fast direct-drive mount moving at speeds of up to 20 degrees per second. The focal plane hosts a 2-port 4K x 4K back-illuminated CCD with readout speeds up to 1 MHz per port. All these characteristics ensure good survey performance for transients and fast moving objects. Detection software and hardware are optimised for the detection of NEOs and objects in high Earth orbits (objects moving at 0.1-40 arcsec/second). Nominal exposures are in the range from 2 to 30 seconds, depending on the observational strategy. Part of the validation scenario involves the scheduling concept integrated in the robotic operations for both sensors. Every night the scheduler takes all the required input and prepares a schedule, following predefined rules, that allocates tasks to the telescopes. The telescopes are managed by the RTS2 control software, which performs real-time scheduling of the observations and manages all the devices at the observatory. At the end of the night the observing systems report astrometric positions and photometry of the objects detected. The first telescope was installed in Cebreros Satellite Tracking Station in mid-2015. It is currently in the commissioning phase and we present here the first results of the telescope.
We evaluate the site characteristics and the performance of the TBT Cebreros telescope in the different modes and strategies. Average residuals for asteroids are under 0.5 arcsecond, while they are around 1 arcsecond for upper-MEO and GEO satellites. The survey depth is fainter than magnitude 18.5 for 30-second exposures with the usual seeing of around 4 arcseconds.
Neural basis for dynamic updating of object representation in visual working memory.
Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun
2010-02-15
In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance that was comprised of the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating the representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with the so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested.
Infrared video based gas leak detection method using modified FAST features
NASA Astrophysics Data System (ADS)
Wang, Min; Hong, Hanyu; Huang, Likun
2018-03-01
In order to detect, in time, invisible leaking gas that is dangerous and easily leads to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, existing infrared video based gas leak detection methods can report all the moving regions of a video frame as leaking gas regions, without discriminating the property of each detected region; e.g., a walking person in a video frame may also be detected as gas. To solve this problem, we propose a novel infrared video based gas leak detection method in this paper, which is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical property of the mFAST features extracted from gas regions differs from that of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm is able to effectively suppress most strong motion disturbances and achieve real-time leaking gas detection.
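As a rough sketch of the background-modelling stage, the following pure-Python running-Gaussian model is a simplified single-mode stand-in for the Gaussian mixture model used in methods like the one above; the class name, learning rate, threshold, and variance floor are illustrative assumptions, not values from the paper.

```python
class RunningGaussianBackground:
    """Single-Gaussian per-pixel background model (illustrative sketch)."""

    def __init__(self, first_frame, alpha=0.05, k=2.5):
        self.mean = [[float(v) for v in row] for row in first_frame]
        self.var = [[25.0] * len(row) for row in first_frame]
        self.alpha = alpha  # learning rate for the running statistics
        self.k = k          # foreground threshold, in standard deviations

    def apply(self, frame):
        """Return a binary foreground mask, then update the model."""
        mask = []
        for i, row in enumerate(frame):
            mrow = []
            for j, v in enumerate(row):
                m, s2 = self.mean[i][j], self.var[i][j]
                fg = (v - m) ** 2 > (self.k ** 2) * s2
                mrow.append(1 if fg else 0)
                if not fg:
                    # Update only background pixels so a transient object
                    # is not absorbed into the model immediately.
                    self.mean[i][j] = (1 - self.alpha) * m + self.alpha * v
                    s2 = (1 - self.alpha) * s2 + self.alpha * (v - m) ** 2
                    self.var[i][j] = max(s2, 4.0)  # variance floor
            mask.append(mrow)
        return mask
```

A full GMM keeps several weighted Gaussians per pixel so that bimodal backgrounds (e.g. swaying foliage) are also absorbed; the single-mode version above only illustrates the mechanics of the threshold-then-update loop.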
Dynamic Projection Mapping onto Deforming Non-Rigid Surface Using Deformable Dot Cluster Marker.
Narita, Gaku; Watanabe, Yoshihiro; Ishikawa, Masatoshi
2017-03-01
Dynamic projection mapping for moving objects has attracted much attention in recent years. However, conventional approaches have faced some issues, such as the target objects being limited to rigid objects, and the limited moving speed of the targets. In this paper, we focus on dynamic projection mapping onto rapidly deforming non-rigid surfaces with a speed sufficiently high that a human does not perceive any misalignment between the target object and the projected images. In order to achieve such projection mapping, we need a high-speed technique for tracking non-rigid surfaces, which is still a challenging problem in the field of computer vision. We propose the Deformable Dot Cluster Marker (DDCM), a novel fiducial marker for high-speed tracking of non-rigid surfaces using a high-frame-rate camera. The DDCM has three performance advantages. First, it can be detected even when it is strongly deformed. Second, it realizes robust tracking even in the presence of external and self occlusions. Third, it allows millisecond-order computational speed. Using DDCM and a high-speed projector, we realized dynamic projection mapping onto a deformed sheet of paper and a T-shirt with a speed sufficiently high that the projected images appeared to be printed on the objects.
Secular Extragalactic Parallax and Geometric Distances with Gaia Proper Motions
NASA Astrophysics Data System (ADS)
Paine, Jennie; Darling, Jeremiah K.
2018-06-01
The motion of the Solar System with respect to the cosmic microwave background (CMB) rest frame creates a well measured dipole in the CMB, which corresponds to a linear solar velocity of about 78 AU/yr. This motion causes relatively nearby extragalactic objects to appear to move compared to more distant objects, an effect that can be measured in the proper motions of nearby galaxies. An object at 1 Mpc and perpendicular to the CMB apex will exhibit a secular parallax, observed as a proper motion, of 78 µas/yr. The relatively large peculiar motions of galaxies make the detection of secular parallax challenging for individual objects. Instead, a statistical parallax measurement can be made for a sample of objects with proper motions, where the global parallax signal is modeled as an E-mode dipole that diminishes linearly with distance. We present preliminary results of applying this model to a sample of nearby galaxies with Gaia proper motions to detect the statistical secular parallax signal. The statistical measurement can be used to calibrate the canonical cosmological “distance ladder.”
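The numbers quoted above are easy to verify: by the definition of the parsec, a 1 AU offset subtends 1/d arcsec at a distance of d parsecs, so a 78 AU/yr solar drift produces 78 µas/yr of apparent motion for an object at 1 Mpc perpendicular to the apex. A quick arithmetic check (the function name is ours, not from the abstract):

```python
# Check of the secular-parallax figure quoted above: a solar drift of
# about 78 AU/yr, from the CMB dipole velocity (~369 km/s), seen
# perpendicular to the CMB apex.

SOLAR_DRIFT_AU_PER_YR = 78.0
PC_PER_MPC = 1.0e6

def secular_parallax_uas_per_yr(distance_mpc):
    # By the definition of the parsec, 1 AU at d parsecs subtends 1/d arcsec.
    arcsec_per_yr = SOLAR_DRIFT_AU_PER_YR / (distance_mpc * PC_PER_MPC)
    return arcsec_per_yr * 1.0e6  # convert arcsec to microarcseconds

print(secular_parallax_uas_per_yr(1.0))   # 78 µas/yr at 1 Mpc
print(secular_parallax_uas_per_yr(10.0))  # falls off linearly with distance
```

The linear fall-off with distance is what lets the statistical model treat the dipole amplitude as an inverse-distance signal across the galaxy sample.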
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyze object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
Ultrafast dark-field surface inspection with hybrid-dispersion laser scanning
NASA Astrophysics Data System (ADS)
Yazaki, Akio; Kim, Chanju; Chan, Jacky; Mahjoubfar, Ata; Goda, Keisuke; Watanabe, Masahiro; Jalali, Bahram
2014-06-01
High-speed surface inspection plays an important role in industrial manufacturing, safety monitoring, and quality control. It is desirable to go beyond the speed limitation of current technologies for reducing manufacturing costs and opening a new window onto a class of applications that require high-throughput sensing. Here, we report a high-speed dark-field surface inspector for detection of micrometer-sized surface defects on objects travelling at record speeds of up to a few kilometers per second. This method is based on a modified time-stretch microscope that illuminates temporally and spatially dispersed laser pulses on the surface of a fast-moving object and detects scattered light from defects on the surface with a sensitive photodetector in a dark-field configuration. The inspector's ability to perform ultrafast dark-field surface inspection enables real-time identification of difficult-to-detect features on weakly reflecting surfaces and hence renders the method much more practical than the previously demonstrated bright-field configuration. Consequently, our inspector provides nearly 1000 times higher scanning speed than conventional inspectors. To show our method's broad utility, we demonstrate real-time inspection of the surface of various objects (a non-reflective black film, transparent flexible film, and reflective hard disk) for detection of 10 μm or smaller defects on a moving target at 20 m/s within a scan width of 25 mm at a scan rate of 90.9 MHz. Our method holds promise for improving the cost and performance of organic light-emitting diode displays for next-generation smart phones, lithium-ion batteries for green electronics, and high-efficiency solar cells.
Method for targetless tracking subpixel in-plane movements.
Espinosa, Julian; Perez, Jorge; Ferrer, Belen; Mas, David
2015-09-01
We present a targetless motion-tracking method for detecting planar movements with subpixel accuracy. The method is based on computing and tracking the intersection of two nonparallel straight-line segments in the image of a moving object in a scene. The method is simple and easy to implement because no complex structures have to be detected. It has been tested and validated in a lab experiment consisting of a vibrating object recorded with a high-speed camera working at 1000 fps. We managed to track displacements with an accuracy of hundredths of a pixel, or even thousandths of a pixel in the case of tracking harmonic vibrations. The method is widely applicable because it allows remote measurement of the amplitude and frequency of vibrations with a vision system.
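The geometric core of such a method, intersecting two nonparallel lines with real-valued (hence subpixel) precision, can be sketched as follows. Fitting the straight-line segments to image features is assumed to happen elsewhere; this is only the intersection step.

```python
def intersect(p1, p2, p3, p4):
    """Intersection of the infinite lines through segments p1-p2 and p3-p4.

    Points are (x, y) tuples. The result is real-valued, so its precision
    is not limited to whole pixels; tracking this point frame by frame
    yields the in-plane displacement of the object.
    """
    x1, y1 = p1; x2, y2 = p2; x3, y3 = p3; x4, y4 = p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-12:
        raise ValueError("segments are parallel")
    # Standard line-line intersection via 2x2 determinants.
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    x = (a * (x3 - x4) - (x1 - x2) * b) / denom
    y = (a * (y3 - y4) - (y1 - y2) * b) / denom
    return x, y
```

Because the segment endpoints come from fits over many edge pixels, noise in individual pixels averages out, which is what makes hundredth-of-a-pixel tracking plausible.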
Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael
2018-01-01
This paper presents a high performance vision-based system with a single static camera for traffic surveillance, for moving vehicle detection with occlusion handling, tracking, counting, and One Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using the adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. Since occlusions are present, an algorithm was implemented to reduce them. The tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features, estimated area, height, and width, are used by different classifiers in order to sort vehicles into three classes: small, midsize, and large. Extensive experimental results on eight real traffic videos with more than 4000 ground truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
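As a hedged illustration of the tracking stage, here is a minimal one-dimensional constant-velocity Kalman filter for a single centroid coordinate. The process and measurement noise values are illustrative guesses; an adaptive variant such as the one described above tunes them online.

```python
def kalman_track(measurements, dt=1.0, q=0.01, r=1.0):
    """Filter a sequence of noisy 1-D centroid positions.

    State is (position, velocity) under a constant-velocity model;
    q and r are illustrative process/measurement noise variances.
    Returns the filtered position estimates.
    """
    x, v = measurements[0], 0.0
    P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    out = []
    for z in measurements:
        # Predict: x' = F x with F = [[1, dt], [0, 1]]; P' = F P F^T + Q.
        x, v = x + v * dt, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1] + dt * P[1][1]) + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with the measured position z (H = [1, 0]).
        S = P[0][0] + r
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = z - x
        x, v = x + K0 * y, v + K1 * y
        P = [[(1 - K0) * P[0][0], (1 - K0) * P[0][1]],
             [P[1][0] - K1 * P[0][0], P[1][1] - K1 * P[0][1]]]
        out.append(x)
    return out
```

In a full tracker this runs once per coordinate (or as one 4-state filter for x and y jointly), and the prediction step is what bridges frames where the vehicle is occluded and no measurement arrives.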
An automated data exploitation system for airborne sensors
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicle, dismount, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly trace down potential threats and crimes.
Variability of the lowest mass objects in the AB Doradus moving group
NASA Astrophysics Data System (ADS)
Vos, Johanna M.; Allers, Katelyn N.; Biller, Beth A.; Liu, Michael C.; Dupuy, Trent J.; Gallimore, Jack F.; Adenuga, Iyadunni J.; Best, William M. J.
2018-02-01
We present the detection of [3.6 μm] photometric variability in two young, L/T transition brown dwarfs, WISE J004701.06+680352.1 (W0047) and 2MASS J2244316+204343 (2M2244), using the Spitzer Space Telescope. We find a period of 16.4 ± 0.2 h and a peak-to-peak amplitude of 1.07 ± 0.04 per cent for W0047, and a period of 11 ± 2 h and amplitude of 0.8 ± 0.2 per cent for 2M2244. This period is significantly longer than that measured previously during a shorter observation. We additionally detect significant J-band variability in 2M2244 using the Wide-Field Camera on UKIRT. We determine the radial and rotational velocities of both objects using Keck NIRSPEC data. We find a radial velocity of -16.0 (+0.8/-0.9) km/s for 2M2244, and confirm it as a bona fide member of the AB Doradus moving group. We find rotational velocities of v sin i = 9.8 ± 0.3 km/s and 14.3 (+1.4/-1.5) km/s for W0047 and 2M2244, respectively. With inclination angles of 85° (+5/-9) and 76° (+14/-20), W0047 and 2M2244 are viewed roughly equator-on. Their remarkably similar colours, spectra, and inclinations are consistent with the possibility that viewing angle may influence atmospheric appearance. We additionally present Spitzer [4.5 μm] monitoring of the young, T5.5 object SDSS111010+011613 (SDSS1110), where we detect no variability. For periods <18 h, we place an upper limit of 1.25 per cent on the peak-to-peak variability amplitude of SDSS1110.
Melis-Dankers, Bart J. M.; Brouwer, Wiebo H.; Tucha, Oliver; Heutink, Joost
2016-01-01
Introduction People with homonymous visual field defects (HVFD) often report difficulty detecting obstacles in the periphery on their blind side in time when moving around. Recently, a randomized controlled trial showed that the InSight-Hemianopia Compensatory Scanning Training (IH-CST) specifically improved detection of peripheral stimuli and avoiding obstacles when moving around, especially in dual task situations. Method The within-group training effects of the previously reported IH-CST are examined in an extended patient group. Performance of patients with HVFD on a pre-assessment, post-assessment and follow-up assessment and performance of a healthy control group are compared. Furthermore, it is examined whether training effects can be predicted by demographic characteristics, variables related to the visual disorder, and neuropsychological test results. Results Performance on both subjective and objective measures of mobility-related scanning was improved after training, while no evidence was found for improvement in visual functions (including visual fields), reading, visual search and dot counting. Self-reported improvement did not correlate with improvement in objective mobility performance. According to the participants, the positive effects were still present six to ten months after training. No demographic characteristics, variables related to the visual disorder, or neuropsychological test results were found to predict the size of the training effect, although some inconclusive evidence was found for more improvement in patients with left-sided HVFD than in patients with right-sided HVFD. Conclusion Further support was found for a positive effect of IH-CST specifically on detection of visual stimuli during mobility-related activities. Based on the reports given by patients, these effects appear to be long-term effects. However, no conclusions can be drawn on the objective long-term training effects. PMID:27935973
Berkeley UXO Discriminator (BUD)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gasperikova, Erika; Smith, J. Torquil; Morrison, H. Frank
2007-01-01
The Berkeley UXO Discriminator (BUD) is an optimally designed active electromagnetic system that not only detects but also characterizes UXO. The system incorporates three orthogonal transmitters and eight pairs of differenced receivers. It has two modes of operation: (1) search mode, in which BUD moves along a profile and exclusively detects targets in its vicinity, providing target depth and horizontal location, and (2) discrimination mode, in which BUD, stationary above a target, determines from a single position three discriminating polarizability responses together with the object's location and orientation. The performance of the system is governed by a target size-depth curve. Maximum detection depth is 1.5 m. While UXO objects have a single major polarizability coincident with the long axis of the object and two equal transverse polarizabilities, scrap metal has three different principal polarizabilities. The results show clear distinctions between symmetric intact UXO and irregular scrap metal, and that BUD can resolve the intrinsic polarizabilities of the target. The field survey at the Yuma Proving Ground in Arizona showed excellent results within the predicted size-depth range.
2009-08-27
CAPE CANAVERAL, Fla. – The enclosed Space Tracking and Surveillance System – Demonstrators, or STSS-Demo, spacecraft moves out of the Astrotech payload processing facility. It is being moved to Cape Canaveral Air Force Station's Launch Pad 17-B. The STSS Demo is a space-based sensor component of a layered Ballistic Missile Defense System designed for the overall mission of detecting, tracking and discriminating ballistic missiles. STSS is capable of tracking objects after boost phase and provides trajectory information to other sensors. It will be launched by NASA for the Missile Defense Agency between 8 and 8:58 a.m. EDT Sept. 18. Approved for Public Release 09-MDA-04886 (10 SEPT 09) Photo credit: NASA/Jack Pfaller
Habeger, Jr., Charles C.; LaFond, Emmanuel F.; Brodeur, Pierre; Gerhardstein, Joseph P.
2002-01-01
The present invention provides a system and method to reduce motion-induced noise in the detection of ultrasonic signals in a moving sheet or body of material. An ultrasonic signal is generated in a sheet of material and a detection laser beam is moved along the surface of the material. By moving the detection laser in the same direction as the direction of movement of the sheet of material, the amount of noise induced in the detection of the ultrasonic signal is reduced. The scanner is moved at approximately the same speed as the moving material. The system and method may be used for many applications, such as in a papermaking process or steelmaking process. The detection laser may be directed by a scanner. The movement of the scanner is synchronized with the anticipated arrival of the ultrasonic signal under the scanner. A photodetector may be used to determine when an ultrasonic pulse has been directed to the moving sheet of material so that the scanner may be synchronized with the anticipated arrival of the ultrasonic signal.
A method of immediate detection of objects with a near-zero apparent motion in series of CCD-frames
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Khlamov, S. V.; Vavilova, I. B.; Briukhovetskyi, A. B.; Pohorelov, A. V.; Mkrtichian, D. E.; Kudak, V. I.; Pakuliak, L. K.; Dikov, E. N.; Melnik, R. G.; Vlasenko, V. P.; Reichart, D. E.
2018-01-01
The paper deals with a computational method for detection of solar system objects (SSOs), whose inter-frame shifts in series of CCD-frames during the observation are commensurate with the errors in measuring their positions. These objects have velocities of apparent motion between CCD-frames not exceeding three rms errors (3σ) of measurements of their positions. About 15% of objects have a near-zero apparent motion in CCD-frames, including objects beyond Jupiter's orbit as well as asteroids heading straight towards the Earth. The proposed method for detection of an object's near-zero apparent motion in series of CCD-frames is based on the Fisher F-criterion instead of the traditional decision rules that are based on the maximum likelihood criterion. We analyzed the quality indicators of detection of near-zero apparent motion applying statistical and in situ modeling techniques in terms of the conditional probability of the true detection of objects with a near-zero apparent motion. The efficiency of the method, implemented as a plugin for the Collection Light Technology (CoLiTec) software for automated asteroid and comet detection, has been demonstrated. Among the objects discovered with this plugin was the sungrazing comet C/2012 S1 (ISON). Within 26 min of observation, the comet's image moved by three pixels in a series of four CCD-frames (the velocity of its apparent motion at the moment of discovery was equal to 0.8 pixels per CCD-frame; the image size on the frame was about five pixels). Subsequent verification in observations of asteroids with a near-zero apparent motion conducted with small telescopes has confirmed the efficiency of the method even in bad conditions (strong backlight from the full Moon). So, we recommend applying the proposed method to series of observations with four or more frames.
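The decision rule can be illustrated with a standard nested-model F-test: compare the residuals of a static-position model against a constant-velocity fit to the measured frame positions. This is a simplified stand-in for the paper's criterion, not its exact statistic; all names and thresholds are illustrative.

```python
def f_statistic(times, positions):
    """F statistic for 'moving' vs 'static' on 1-D position measurements.

    Compares the residual sum of squares of a constant-position model
    against a least-squares linear (constant-velocity) fit. A large value
    means the apparent motion is distinguishable from measurement noise.
    """
    n = len(times)
    mean_p = sum(positions) / n
    rss_static = sum((p - mean_p) ** 2 for p in positions)
    # Least-squares linear fit p = a + b*t (constant-velocity model).
    mean_t = sum(times) / n
    stt = sum((t - mean_t) ** 2 for t in times)
    b = sum((t - mean_t) * (p - mean_p)
            for t, p in zip(times, positions)) / stt
    a = mean_p - b * mean_t
    rss_motion = sum((p - (a + b * t)) ** 2
                     for t, p in zip(times, positions))
    # One extra parameter in the motion model, n - 2 residual dof.
    return (rss_static - rss_motion) / (rss_motion / (n - 2))
```

A drift of under a pixel per frame, as in the C/2012 S1 discovery, still produces a large F over four frames provided the per-frame position noise is small, which is why the test works where a simple shift threshold fails.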
The Pop out of Scene-Relative Object Movement against Retinal Motion Due to Self-Movement
ERIC Educational Resources Information Center
Rushton, Simon K.; Bradshaw, Mark F.; Warren, Paul A.
2007-01-01
An object that moves is spotted almost effortlessly; it "pops out." When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion.…
Influence of contrast on spatial perception in TV display of moving images
NASA Astrophysics Data System (ADS)
Heising, H.
1981-09-01
A low-cost visual simulation system was developed which involves a hybrid-computer-controlled transformation of perspective on a raster-scan TV display. It is applicable to a wide range of simulation tasks, including training and research, but is especially useful in facilitating detection of moving objects and reducing frame rate in RPV applications, with a number of advantages, e.g., reduction of bandwidth and improved protection against jamming. Because of the perspective transformation in the TV raster scan, a change of contrast can occur during the display of moving images. Therefore, it is of interest to know the effect of this contrast change on human spatial perception. The investigations undertaken led to the conclusion that physical contrast in the ratio range of 1:11 to 1:25 (at a mean illuminance of 7 cd/m² at the white parts of the picture) does not influence human distance and height judgments.
NASA Astrophysics Data System (ADS)
Zou, Tianhao; Zuo, Zhengrong
2018-02-01
Target detection is an important and basic problem in computer vision and image processing. The case most often met in the real world is detecting a small moving target from a moving platform. Commonly used methods, such as registration-based suppression, can hardly achieve the desired result. To address this difficult problem, we introduce a global-local registration based suppression method. Unlike traditional methods, the proposed global-local registration strategy considers both the global consistency and the local diversity of the background, obtaining better performance than standard background suppression methods. In this paper, we first discuss the characteristics of small moving target detection on an unstable platform. We then introduce the new strategy and conduct an experiment to confirm its stability under noise. Finally, we confirm that the background suppression method based on the global-local registration strategy performs better for moving target detection on a moving platform.
Matched filter based detection of floating mines in IR spacetime
NASA Astrophysics Data System (ADS)
Borghgraef, Alexander; Lapierre, Fabian; Philips, Wilfried; Acheroy, Marc
2009-09-01
Ship-based automatic detection of small floating objects on an agitated sea surface remains a hard problem. Our main concern is the detection of floating mines, which proved a real threat to shipping in confined waterways during the first Gulf War, but applications include salvaging, search-and-rescue, and perimeter or harbour defense. IR video was chosen for its day-and-night imaging capability and its availability on military vessels. Detection is difficult because a rough sea is seen as a dynamic background of moving objects with size, shape, and temperature similar to those of the floating mine. We do find a discriminating characteristic in the target's periodic motion, which differs from that of the propagating surface waves composing the background. The classical detection and tracking approaches give bad results when applied to this problem. While background detection algorithms assume a quasi-static background, the sea surface is actually very dynamic, causing this category of algorithms to fail. Kalman or particle filter algorithms, on the other hand, which stress temporal coherence, suffer from tracking loss due to occlusions and the high noise level of the image. We propose an innovative approach. This approach uses the periodicity of the object's movement and thus its temporal coherence. The principle is to consider the video data as a spacetime volume similar to a hyperspectral data cube by replacing the spectral axis with a temporal axis. We can then apply algorithms developed for hyperspectral detection problems to the detection of small floating objects. We treat the detection problem using multilinear algebra, designing a number of finite impulse response (FIR) filters maximizing the target response. The algorithm was applied to test footage of practice mines in the infrared.
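The paper's filters are multilinear and operate on a spacetime cube; as a much-simplified stand-in, the core idea of matching a periodic temporal signature against clutter can be sketched in one dimension (the template, offsets, and noise model below are illustrative assumptions, not the paper's design):

```python
import numpy as np

def matched_filter(signal, template):
    """Correlate a zero-mean, unit-energy template against the signal
    and return the response at each offset."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    return np.correlate(signal - signal.mean(), t, mode="valid")

# Toy temporal profile: a bobbing target's periodic signature buried
# in broadband "sea clutter" noise.
rng = np.random.default_rng(7)
n = np.arange(64)
template = np.sin(2 * np.pi * n / 16)          # four bobbing periods
x = rng.normal(0, 1, 1024)
x[400:464] += 3 * np.sin(2 * np.pi * n / 16)   # target starts at 400
resp = matched_filter(x, template)
peak = int(np.argmax(resp))
print(peak)  # ≈ 400
```

The filter response peaks where the periodic target motion sits, while aperiodic wave clutter correlates weakly with the template.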
Near-Earth Object Orbit Linking with the Large Synoptic Survey Telescope
NASA Astrophysics Data System (ADS)
Vereš, Peter; Chesley, Steven R.
2017-07-01
We have conducted a detailed simulation of the ability of the Large Synoptic Survey Telescope (LSST) to link near-Earth and main belt asteroid detections into orbits. The key elements of the study were a high-fidelity detection model and the presence of false detections in the form of both statistical noise and difference image artifacts. We employed the Moving Object Processing System (MOPS) to generate tracklets, tracks, and orbits with a realistic detection density for one month of the LSST survey. The main goals of the study were to understand whether (a) the linking of near-Earth objects (NEOs) into orbits can succeed in a realistic survey, (b) the number of false tracks and orbits will be manageable, and (c) the accuracy of linked orbits would be sufficient for automated processing of discoveries and attributions. We found that the overall density of asteroids was more than 5000 per LSST field near opposition on the ecliptic, plus up to 3000 false detections per field in good seeing. We achieved 93.6% NEO linking efficiency for H < 22 on tracks composed of tracklets from at least three distinct nights within a 12 day interval. The derived NEO catalog comprised 96% correct linkages. Less than 0.1% of orbits included false detections, and the remainder of false linkages stemmed from main belt confusion, which was an artifact of the short time span of the simulation. The MOPS linking efficiency can be improved by refined attribution of detections to known objects and by improved tuning of the internal kd-tree linking algorithms.
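MOPS's kd-tree linking is far more elaborate, but the basic test it applies, whether a tracklet's extrapolated motion lands on a tracklet from a later night, can be sketched as follows (the tracklet tuple layout and tolerance are illustrative assumptions, not MOPS's actual data model):

```python
import numpy as np

def link_tracklets(tracklets, tol=0.05):
    """Greedy pairwise linking of tracklets into track candidates.

    A tracklet is (t, ra, dec, v_ra, v_dec): epoch (days), position
    (deg), and apparent angular rate (deg/day). Two tracklets link if
    extrapolating the first one's motion to the second epoch lands
    within `tol` degrees of the second tracklet's position.
    """
    links = []
    for i, a in enumerate(tracklets):
        for j, b in enumerate(tracklets):
            if i >= j:
                continue
            dt = b[0] - a[0]
            pred_ra = a[1] + a[3] * dt
            pred_dec = a[2] + a[4] * dt
            if np.hypot(pred_ra - b[1], pred_dec - b[2]) < tol:
                links.append((i, j))
    return links

# Toy sky: one real object seen on nights 0 and 5, plus a stray false
# tracklet whose position does not fit the extrapolated motion.
obj_night0 = (0.0, 10.00, 5.00, 0.10, -0.02)
obj_night5 = (5.0, 10.50, 4.90, 0.10, -0.02)
false_trk  = (5.0, 12.00, 6.00, 0.30,  0.10)
print(link_tracklets([obj_night0, obj_night5, false_trk]))  # [(0, 1)]
```

A real implementation replaces the O(n^2) loop with a kd-tree query over predicted positions, which is where the tuning mentioned in the abstract matters.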
NASA Astrophysics Data System (ADS)
Stogdale, Nick; Hollock, Steve; Johnson, Neil; Sumpter, Neil
2003-09-01
A 16x16 element un-cooled pyroelectric detector array has been developed which, when allied with advanced tracking and detection algorithms, has created a universal detector with multiple applications. Low-cost manufacturing techniques are used to fabricate a hybrid detector, intended for economic use in commercial markets. The detector has found extensive application in accurate people counting, detection, tracking, secure area protection, directional sensing and area violation; topics which are all pertinent to the provision of Homeland Security. The detection and tracking algorithms have, when allied with interpolation techniques, allowed a performance much higher than might be expected from a 16x16 array. This paper reviews the technology, with particular attention to the array structure, algorithms and interpolation techniques and outlines its application in a number of challenging market areas. Viewed from above, moving people are seen as 'hot blobs' moving through the field of view of the detector; background clutter or stationary objects are not seen and the detector works irrespective of lighting or environmental conditions. Advanced algorithms detect the people and extract size, shape, direction and velocity vectors allowing the number of people to be detected and their trajectories of motion to be tracked. Provision of virtual lines in the scene allows bi-directional counting of people flowing in and out of an entrance or area. Definition of a virtual closed area in the scene allows counting of the presence of stationary people within a defined area. Definition of 'counting lines' allows the counting of people, the ability to augment access control devices by confirming a 'one swipe one entry' judgement and analysis of the flow and destination of moving people. For example, passing the 'wrong way' up a denied passageway can be detected. 
Counting stationary people within a 'defined area' allows the behaviour and size of groups of stationary people to be analysed and counted; an alarm condition can also be generated when people stray into such areas.
Chiron and the Centaurs: Escapees from the Kuiper Belt
NASA Technical Reports Server (NTRS)
Stern, Alan; Campins, Humberto
1996-01-01
The outer Solar System has long appeared to be a largely empty place, inhabited only by the four giant planets, Pluto and a transient population of comets. In 1977, however, a faint and enigmatic object - 2060 Chiron - was discovered moving on a moderately inclined, strongly chaotic 51-year orbit which takes it from just inside Saturn's orbit out almost as far as that of Uranus. It was not initially clear from where Chiron originated. Following Chiron's discovery, almost 15 years elapsed before other similar objects were discovered; five more have now been identified. Based on the detection statistics implied by these discoveries, it has become clear that these objects belong to a significant population of several hundred (or possibly several thousand) large icy bodies moving on relatively short-lived orbits between the giant planets. This new class of objects, known collectively as the Centaurs, are intermediate in diameter between typical comets (1-20 km) and small icy planets such as Pluto (approx. 2,300 km) and Triton (approx. 2,700 km). Although the Centaurs are interesting in their own right, they have taken on added significance following the recognition that they most probably originated in the ancient reservoir of comets and larger objects located beyond the orbit of Neptune known as the Kuiper belt, becoming temporarily trapped on Centaur-like orbits.
Gaze movements and spatial working memory in collision avoidance: a traffic intersection task
Hardiess, Gregor; Hansmann-Roth, Sabrina; Mallot, Hanspeter A.
2013-01-01
Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements) the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective of the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) which sensory variables can be identified supporting adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss? 
In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined operation of external (bottom-up) and internal (top-down) cues in a traffic intersection task. PMID:23760667
Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion
Calabro, Finnegan J.; Vaina, Lucia Maria
2016-01-01
Background Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods 16 right handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results Statistical analyses of performance on the test-experiments in comparison to the control experiments suggests that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. Conclusions These results have the important implication for the type of visual guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that improve performance and the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military application of real-time tracking systems is becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of select tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images from a stationary camera. Further testing is performed on a sequence of images in which the PTZ camera moves in order to capture the moving object. Comparisons are made based upon accuracy, speed, and memory.
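Of the feature generation methods compared, the Mixture of Gaussians background subtraction is the simplest to sketch. A single-Gaussian per-pixel variant (a deliberate simplification of MoG; learning rate, threshold, and the toy scene are illustrative assumptions) looks like:

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """Single-Gaussian per-pixel background model (simplified MoG).

    Pixels more than k standard deviations from the running mean are
    flagged as foreground; the background statistics are updated in
    place with an exponential moving average (learning rate alpha).
    """
    frame = frame.astype(np.float64)
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    bg = ~foreground                     # update only background pixels
    mean[bg] += alpha * (frame[bg] - mean[bg])
    var[bg] += alpha * ((frame[bg] - mean[bg]) ** 2 - var[bg])
    return foreground, mean, var

# Toy scene: static background around 100 with a bright moving blob.
rng = np.random.default_rng(0)
mean = np.full((8, 8), 100.0)
var = np.full((8, 8), 4.0)
frame = mean + rng.normal(0, 1, (8, 8))
frame[2:4, 2:4] = 200.0                  # the moving object
fg, mean, var = update_background(frame, mean, var)
print(fg.sum(), fg[2:4, 2:4].all())      # blob pixels flagged
```

Full MoG keeps several weighted Gaussians per pixel so that bimodal backgrounds (e.g. swaying foliage) are also absorbed into the model.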
MUSIC algorithm DoA estimation for cooperative node location in mobile ad hoc networks
NASA Astrophysics Data System (ADS)
Warty, Chirag; Yu, Richard Wai; ElMahgoub, Khaled; Spinsante, Susanna
In recent years, technological development has encouraged several applications based on distributed communication networks without any fixed infrastructure. We address the problem of providing a collaborative early-warning system that protects multiple mobile nodes against a fast-moving object. The solution is provided subject to system-level constraints: motion of the nodes, antenna sensitivity, and Doppler effect at 2.4 GHz and 5.8 GHz. The approach consists of three stages. The first phase consists of detecting the incoming object using a highly directive two-element antenna in the 5.0 GHz band. In the second phase, the warning message is broadcast over a broad, low-directivity beam from a 2x2 antenna array; in the third phase, this message is detected by the receiving nodes using a direction-of-arrival (DOA) estimation technique. The DOA estimation technique is used to estimate the range and bearing of the incoming nodes. The position of the fast-arriving object can be estimated using the MUSIC algorithm for warning-beam DOA estimation. This paper is mainly intended to demonstrate the feasibility of an early detection and warning system using collaborative node-to-node communication links. A simulation is performed to show the behavior of the detecting and broadcasting antennas as well as the performance of the detection algorithm. The idea can be further expanded to implement a commercial-grade detection and warning system.
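The MUSIC pseudospectrum used for the DOA stage can be sketched for a uniform linear array as follows (the array size, SNR, search grid, and source angle are illustrative assumptions, not the paper's simulation setup):

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array.

    X: (n_antennas, n_snapshots) complex baseband snapshots.
    d: element spacing in wavelengths (0.5 = half-wavelength).
    """
    n = X.shape[0]
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)       # ascending eigenvalues
    En = eigvecs[:, : n - n_sources]           # noise subspace
    theta = np.deg2rad(angles_deg)
    k = np.arange(n)[:, None]
    A = np.exp(2j * np.pi * d * k * np.sin(theta)[None, :])  # steering
    denom = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return 1.0 / denom                         # peaks at source DOAs

# Toy example: one source at +20 degrees on an 8-element array.
rng = np.random.default_rng(1)
n, snaps, true_doa = 8, 200, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.deg2rad(true_doa)))
s = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
X = np.outer(a, s) + 0.1 * (rng.normal(size=(n, snaps))
                            + 1j * rng.normal(size=(n, snaps)))
grid = np.arange(-90, 91)
est = int(grid[np.argmax(music_spectrum(X, 1, grid))])
print(est)  # ≈ 20
```

The pseudospectrum is sharply peaked where the steering vector is orthogonal to the noise subspace, which is what gives MUSIC its resolution advantage over plain beam scanning.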
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in video is a method used to detect and analyze changes in an observed object. High visual quality and precise localization of the tracked target are desired in modern tracking systems. Because the tracked object does not always appear clearly, the tracking result is often imprecise; the causes include low-quality video, system noise, small object size, and other factors. In order to improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping either several frames or all of them. The second step tracks the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution ones. In this research, a single-frame super-resolution technique, which has the advantage of fast computation, is proposed for the tracking approach. The method used for tracking is CamShift, whose advantage is a simple calculation based on an HSV color histogram, which remains usable when the color of the object varies. The computational complexity and large memory requirements of super-resolution and tracking were reduced, while the precision of the tracked target remained good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely across various backgrounds, shape changes of the object, and good lighting conditions.
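CamShift extends mean shift with an adaptively sized window. The underlying mean-shift iteration on a histogram back-projection map can be sketched as follows (the window size, blob, and iteration cap are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def mean_shift(prob, window, n_iter=10):
    """Mean-shift search on a back-projection map.

    prob:   2D probability (histogram back-projection) image.
    window: (row, col, height, width) initial search window.
    Returns the window after shifting it onto the local centroid.
    """
    r, c, h, w = window
    for _ in range(n_iter):
        roi = prob[r : r + h, c : c + w]
        total = roi.sum()
        if total == 0:
            break
        rows, cols = np.indices(roi.shape)
        # Shift the window toward the centroid of probability mass.
        dr = int(round((rows * roi).sum() / total - (h - 1) / 2))
        dc = int(round((cols * roi).sum() / total - (w - 1) / 2))
        r = min(max(r + dr, 0), prob.shape[0] - h)
        c = min(max(c + dc, 0), prob.shape[1] - w)
        if dr == 0 and dc == 0:
            break
    return r, c, h, w

# Toy back-projection: object "probability" blob centred near (14, 16).
prob = np.zeros((32, 32))
prob[12:17, 14:19] = 1.0
win = mean_shift(prob, (8, 10, 8, 8))
print(win)  # window has shifted onto the blob
```

CamShift additionally resizes and reorients the window from the zeroth and second moments of the probability mass, which is what lets it follow scale and shape changes.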
[Research on Spectral Polarization Imaging System Based on Static Modulation].
Zhao, Hai-bo; Li, Huan; Lin, Xu-ling; Wang, Zheng
2015-04-01
The main disadvantages of traditional spectral polarization imaging systems are a complex structure, moving parts, and low throughput. A novel spectral polarization imaging system is discussed, based on static polarization intensity modulation combined with Savart-polariscope interference imaging. The imaging system can obtain real-time spectral information and all four Stokes polarization parameters. Compared with conventional methods, the advantages of the imaging system are compactness, low mass, no moving parts, no electrical control, no slit, and high throughput. The system structure and the basic theory are introduced. The experimental system was established in the laboratory and consists of reimaging optics, a polarization intensity module, an interference imaging module, and a CCD data collecting and processing module. The spectral range is visible and near-infrared (480-950 nm). A white board and a toy plane were imaged using the experimental system, verifying the ability to obtain spectral polarization imaging information. A calibration system for the static polarization modulation was set up; the statistical error of the degree-of-polarization measurement is less than 5%. The validity and feasibility of the basic principle are proved by the experimental results. The spectral polarization data captured by the system can be applied to object identification, object classification, and remote sensing detection.
2003-07-18
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-B, Cape Canaveral Air Force Station, the first stage of a Delta II rocket is raised off the transporter before lifting and moving it into the mobile service tower. The rocket is being erected to launch the Space InfraRed Telescope Facility (SIRTF). Consisting of an 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
2003-07-18
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-B, Cape Canaveral Air Force Station, the first stage of a Delta II rocket is raised off the transporter before being lifted and moved into the mobile service tower. The rocket is being erected to launch the Space InfraRed Telescope Facility (SIRTF). Consisting of an 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
2003-07-18
KENNEDY SPACE CENTER, FLA. - On Launch Complex 17-B, Cape Canaveral Air Force Station, the first stage of a Delta II rocket waits to be lifted up and moved into the mobile service tower. The rocket is being erected to launch the Space InfraRed Telescope Facility (SIRTF). Consisting of an 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
2003-08-07
KENNEDY SPACE CENTER, FLA. - Workers at Hangar A&E, Cape Canaveral Air Force Station, lift the upper canister to move it to the Space Infrared Telescope Facility (SIRTF) at right. After encapsulation, the spacecraft will be transported to Launch Complex 17-B for mating with its launch vehicle, the Delta II rocket. SIRTF consists of three cryogenically cooled science instruments and an 0.85-meter telescope, and is one of NASA's largest infrared telescopes to be launched. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Most of this infrared radiation is blocked by the Earth's atmosphere and cannot be observed from the ground.
Massive photometry of low-altitude artificial satellites on Mini-Mega-TORTORA
NASA Astrophysics Data System (ADS)
Karpov, S.; Katkova, E.; Beskin, G.; Biryukov, A.; Bondar, S.; Davydov, E.; Ivanov, E.; Perkov, A.; Sasyuk, V.
2016-12-01
The nine-channel Mini-Mega-TORTORA (MMT-9) optical wide-field monitoring system with high temporal resolution has been in operation since June 2014. The system has 0.1 s temporal resolution and an effective detection limit of around 10 mag (calibrated to the V filter) for fast-moving objects on this timescale. In addition to its primary scientific operation, the system detects 200-500 satellite tracks every night, on both low-altitude and high-ellipticity orbits. Using these data we have created, and maintain, a public database of photometric characteristics for these satellites, available online.
Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie
2013-01-01
Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have been shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate whether a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals, and used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied, whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
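Adaptive thresholds for spike detection are commonly derived from a robust estimate of the noise level. A sketch of that detection stage, without the wavelet denoising step and with illustrative parameters (the multiplier, refractory period, and toy trace are assumptions, not the paper's values):

```python
import numpy as np

def detect_spikes(signal, k=4.5, refractory=30):
    """Adaptive-threshold spike detection.

    The threshold follows the robust noise estimate
    sigma = median(|x|) / 0.6745, common in spike sorting.
    A refractory period (in samples) suppresses double detections.
    """
    sigma = np.median(np.abs(signal)) / 0.6745
    thr = k * sigma
    above = np.flatnonzero(np.abs(signal) > thr)
    detections, last = [], -refractory
    for i in above:
        if i - last >= refractory:
            detections.append(int(i))
            last = i
    return np.array(detections)

# Toy trace: unit-variance noise plus three large "action potentials".
rng = np.random.default_rng(2)
x = rng.normal(0, 1, 3000)
for t in (500, 1500, 2500):
    x[t] += 12.0
spikes = detect_spikes(x)
print(spikes)  # indices near 500, 1500, 2500
```

The median-based sigma is insensitive to the spikes themselves, so the threshold tracks the noise floor rather than the signal amplitude, which is the point of making it adaptive.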
Vaina, Lucia M.; Buonanno, Ferdinando; Rushton, Simon K.
2014-01-01
Background All contemporary models of perception of locomotor heading from optic flow (the characteristic patterns of retinal motion that result from self-movement) begin with relative motion. It would therefore be expected that an impairment of relative motion perception should impact the ability to judge heading and perform other 3D motion tasks. Material/Methods We report two patients with occipital lobe lesions whom we tested on a battery of motion tasks. Patients were impaired on all tests that involved relative motion in the plane (motion discontinuity, form from differences in motion direction or speed). Despite this, they retained the ability to judge their direction of heading relative to a target. A potential confound is that observers can derive information about heading from scale changes, bypassing the need to use optic flow. We therefore ran further experiments in which we isolated optic flow and scale change. Results Patients’ performance was in the normal range on both tests. The finding that the ability to perceive heading can be retained despite an impairment in judging relative motion questions the assumption that heading perception proceeds from initial processing of relative motion. Furthermore, on a collision detection task, SS’s and SR’s performance was significantly better for simulated forward movement of the observer in the 3D scene than for a static observer. This suggests that, in spite of severe deficits in relative motion perception in the frontoparallel (x-y) plane, information from self-motion aided the identification of objects moving along an interception trajectory in 3D. Conclusions This result suggests the potential use of a flow-parsing strategy to detect, in a 3D world, the trajectory of moving objects when the observer is moving forward. These results have implications for developing rehabilitation strategies for deficits in visually guided navigation. PMID:25183375
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. We present a system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras. The described sensor is completely passive and composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
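The acoustic imaging relies on phase differences between microphones. The elementary building block, estimating the time difference of arrival between one microphone pair by cross-correlation, can be sketched as follows (signal length, sampling rate, and the injected delay are illustrative assumptions):

```python
import numpy as np

def tdoa_crosscorr(x, y, fs):
    """Estimate the time difference of arrival between two microphone
    signals by locating the peak of their cross-correlation."""
    n = 2 * len(x)                       # zero-pad to avoid wrap-around
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cc = np.fft.irfft(np.conj(X) * Y, n)
    lag = int(np.argmax(cc))
    if lag > n // 2:
        lag -= n                         # wrap negative lags
    return lag / fs

# Toy example: the same noise burst arrives 25 samples later at mic 2.
rng = np.random.default_rng(3)
fs = 48000
s = rng.normal(0, 1, 4096)
delay = 25
x = s
y = np.concatenate((np.zeros(delay), s[:-delay]))
print(tdoa_crosscorr(x, y, fs) * fs)  # ≈ 25 samples
```

An acoustic camera repeats this over many microphone pairs and converts the set of delays into a power map over directions; delay-and-sum beamforming then extracts the signal from a chosen direction.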
Camplani, M; Malizia, A; Gelfusa, M; Barbato, F; Antonelli, L; Poggi, L A; Ciparisse, J F; Salgado, L; Richetta, M; Gaudio, P
2016-01-01
In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices; one of the main issues is that dust particles can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, directed transversely to the flow field. In the STARDUST facility, the dust moves in the flow and causes variations of the refractive index that can be detected using a CCD camera. The STARDUST fast-camera setup makes it possible to detect and track dust particles moving in the vessel and thus to obtain information about the velocity field of the mobilized dust. In particular, the acquired images are processed such that, in each frame, the moving dust particles are detected by applying a background subtraction technique based on a mixture-of-Gaussians algorithm. The obtained foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles throughout the experiment. For each particle a Kalman-filter-based tracker is applied; the particle dynamics are described using position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particle velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
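The per-particle tracker described uses position, velocity, and acceleration as state variables. A one-dimensional constant-acceleration Kalman filter with position-only measurements can be sketched as follows (process and measurement noise levels, and the toy trajectory, are illustrative assumptions):

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a constant-acceleration Kalman
    filter with state [position, velocity, acceleration] (1D)."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])
    H = np.array([[1.0, 0.0, 0.0]])      # only position is observed
    x = F @ x                             # predict
    P = F @ P @ F.T + q * np.eye(3)
    S = H @ P @ H.T + r                   # update with measurement z
    K = P @ H.T / S
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    return x, P

# Toy track: a particle moving at 2 units/frame, observed with noise.
rng = np.random.default_rng(4)
x, P = np.zeros(3), np.eye(3) * 10.0
for t in range(1, 50):
    z = 2.0 * t + rng.normal(0, 0.5)
    x, P = kalman_step(x, P, z)
print(x[0], x[1])  # position ≈ 98, velocity ≈ 2
```

In the multi-object setting, each detected particle gets its own filter, and new detections are assigned to the filter whose prediction they best match.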
NASA Astrophysics Data System (ADS)
Gao, Shibo; Cheng, Yongmei; Song, Chunhua
2013-09-01
Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to accurately determine the relative position and orientation of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is challenging because of the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, drogue detection is treated as a moving object detection problem, and a drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. Experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
Volumetric Security Alarm Based on a Spherical Ultrasonic Transducer Array
NASA Astrophysics Data System (ADS)
Sayin, Umut; Scaini, Davide; Arteaga, Daniel
Most existing alarm systems depend on physical or visual contact, and the detection area is often limited by the type of transducer, creating blind spots. We propose a truly volumetric alarm system that can detect any movement in the intrusion area, based on monitoring the change over time of the impulse response of the room, which acts as an acoustic footprint. The device relies on an omnidirectional ultrasonic transducer array emitting sweep signals to measure the impulse response at short intervals. Any change in the room conditions is monitored through a correlation function. The sensitivity of the alarm to different objects and environments depends on the sweep duration, sweep bandwidth, and sweep interval. Successful detection of intrusions also depends on the size of the monitored area and requires an adjustment of the emitted ultrasound power. Strong air flow affects the performance of the alarm; a method for separating moving objects from strong air flow is devised using adaptive thresholding on the correlation function over a series of impulse response measurements. The alarm system can also be used for fire detection, since air flow produced by heated objects differs from the random nature of ambient air flow. Several measurements were made to test the integrity of the alarm in rooms ranging from 834 to 2080 m³ with irregular geometries and various objects. The proposed system can efficiently detect intrusion provided that adequate emitting power is supplied.
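The correlation-based monitoring with adaptive thresholding described above can be sketched as follows. The normalized correlation measure and the "mean minus k standard deviations" threshold rule are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def similarity(ref_ir, cur_ir):
    """Normalized correlation between two impulse responses (1.0 = identical)."""
    ref = ref_ir - ref_ir.mean()
    cur = cur_ir - cur_ir.mean()
    return float(ref @ cur / (np.linalg.norm(ref) * np.linalg.norm(cur)))

def detect_intrusion(recent_scores, current_score, k=3.0):
    """Adaptive threshold: alarm when the new correlation drops more than
    k standard deviations below the recent mean, so that steady air-flow
    fluctuations set the baseline rather than triggering alarms."""
    mu, sigma = np.mean(recent_scores), np.std(recent_scores)
    return current_score < mu - k * sigma
```

In operation, each new sweep would yield a fresh impulse response, its correlation against the reference footprint would be appended to the history, and the adaptive rule applied to the latest value.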
Robot acting on moving bodies (RAMBO): Preliminary results
NASA Technical Reports Server (NTRS)
Davis, Larry S.; Dementhon, Daniel; Bestul, Thor; Ziavras, Sotirios; Srinivasan, H. V.; Siddalingaiah, Madju; Harwood, David
1989-01-01
A robot system called RAMBO is being developed. It is equipped with a camera and, given a sequence of simple tasks, can perform these tasks on a moving object. RAMBO is given a complete geometric model of the object. A low-level vision module extracts and groups characteristic features in images of the object. The positions of the object are determined in a sequence of images, and a motion estimate of the object is obtained. This motion estimate is used to plan trajectories of the robot tool to locations near the object sufficient for achieving the tasks. More specifically, low-level vision uses parallel algorithms for image enhancement by symmetric nearest neighbor filtering, edge detection by local gradient operators, and corner extraction by sector filtering. The object pose estimation is a Hough transform method accumulating position hypotheses obtained by matching triples of image features (corners) to triples of model features. To maximize computing speed, the estimate of the position in space of a triple of features is obtained by decomposing its perspective view into a product of rotations and a scaled orthographic projection. This allows the use of 2-D lookup tables at each stage of the decomposition. The position hypotheses for each possible match of model feature triples and image feature triples are calculated in parallel. Trajectory planning combines heuristic and dynamic programming techniques. Trajectories are then created using parametric cubic splines between initial and goal trajectories. All the parallel algorithms run on a Connection Machine CM-2 with 16K processors.
An Aggregated Method for Determining Railway Defects and Obstacle Parameters
NASA Astrophysics Data System (ADS)
Loktev, Daniil; Loktev, Alexey; Stepanov, Roman; Pevzner, Viktor; Alenov, Kanat
2018-03-01
A method combining image blur analysis and stereo vision is proposed to determine the distance to objects (including external defects of railway tracks) and the speed of moving obstacles. To estimate how the measured distance deviates as a function of blur, a statistical approach and logarithmic, exponential, and linear standard functions are used; the statistical approach includes least-squares and least-modules estimation. The accuracy of determining the distance to the object, its speed, and its direction of movement is evaluated. The paper develops a method of determining distances to objects by analyzing a series of images and assessing depth from defocus, aggregated with stereoscopic vision. The method is based on the physical dependence of the blur in the acquired image on the distance to the object, given the focal length and aperture of the lens. In calculating the diameter of the blur spot, it is assumed that blur spreads from a point equally in all directions. Under the proposed approach, the distance to the studied object and its blur can be determined by analyzing a series of images obtained from the video detector with different settings. The article proposes and substantiates new and improved methods for detecting the parameters of static and moving objects of control, and compares the results of the various methods against experiments. It is shown that the aggregated method gives the best approximation to the real distances.
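The stated physical dependence of blur on distance can be illustrated with the standard thin-lens circle-of-confusion model. The formula and the bisection inversion below are a textbook sketch under assumed lens parameters, not the paper's aggregated estimator:

```python
def blur_diameter(s, s_f, f, A):
    """Circle-of-confusion diameter (thin-lens model) for an object at
    distance s, with a lens of focal length f and aperture diameter A
    focused at distance s_f. All quantities in metres."""
    return A * f * abs(s - s_f) / (s * (s_f - f))

def estimate_distance(c, s_f, f, A, s_max=100.0):
    """Invert the model for objects beyond the focus distance.
    blur_diameter is monotonically increasing for s > s_f, so a simple
    bisection recovers the distance from the observed blur."""
    lo, hi = s_f, s_max
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if blur_diameter(mid, s_f, f, A) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice, the paper fuses such a defocus-based estimate with a stereo-vision distance; the sketch covers only the single-camera defocus half of that aggregation.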
Streak detection and analysis pipeline for space-debris optical images
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim
2016-04-01
We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest in detecting fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects at lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extracting all necessary information from one image (segmentation) and, subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations from both ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. For long streaks (length > 100 pixels), the primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for both scenarios for bright streaks (SNR > 1), while in the low-SNR regime the sensitivity is still 50% at SNR = 0.5.
Self-motion impairs multiple-object tracking.
Thomas, Laura E; Seiffert, Adriane E
2010-10-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to keep track both of the locations of moving objects around them and of their own location.
Norman, J Farley; Bartholomew, Ashley N; Burton, Cory L
2008-09-01
A single experiment investigated how younger (aged 18-32 years) and older (aged 62-82 years) observers perceive 3D object shape from deforming and static boundary contours. On any given trial, observers were shown two smoothly-curved objects, similar to water-smoothed granite rocks, and were required to judge whether they possessed the "same" or "different" shape. The objects presented during the "different" trials produced differently-shaped boundary contours. The objects presented during the "same" trials also produced different boundary contours, because one of the objects was always rotated in depth relative to the other by 5, 25, or 45 degrees. Each observer participated in 12 experimental conditions formed by the combination of 2 motion types (deforming vs. static boundary contours), 2 surface types (objects depicted as silhouettes or with texture and Lambertian shading), and 3 angular offsets (5, 25, and 45 degrees). When there was no motion (static silhouettes or stationary objects presented with shading and texture), the older observers performed as well as the younger observers. In the moving object conditions with shading and texture, the older observers' performance was facilitated by the motion, but the amount of this facilitation was reduced relative to that exhibited by the younger observers. In contrast, the older observers obtained no benefit in performance at all from the deforming (i.e., moving) silhouettes. The reduced ability of older observers to perceive 3D shape from motion is probably due to a low-level deterioration in the ability to detect and discriminate motion itself.
Dosimetry of heavy ions by use of CCD detectors
NASA Technical Reports Server (NTRS)
Schott, J. U.
1994-01-01
The design and the atomic composition of Charge-Coupled Devices (CCDs) make them unique for investigations of single energetic particle events. As detector systems for ionizing particles, they register single particles with local resolution and near-real-time particle tracking. In combination with their properties as optical sensors, traversals of single particles can be correlated with any object attached to the light-sensitive surface of the sensor by simple imaging of the object's shadow and subsequent image analysis of both the optical image and the particle effects observed in the affected pixels. With biological objects, it is possible for the first time to investigate the effects of single heavy ions in tissue or organs of metabolizing (i.e., moving) systems with a local resolution better than 15 microns. Calibration data for particle detection in CCDs are presented for low-energy protons and heavy ions.
3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish
von der Emde, Gerhard; Behr, Katharina; Bouton, Béatrice; Engelmann, Jacob; Fetz, Steffen; Folde, Caroline
2010-01-01
Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation. PMID:20577635
Spacewatch discovery of near-Earth asteroids
NASA Technical Reports Server (NTRS)
Gehrels, Tom
1992-01-01
Our overall scientific goal is to survey the solar system to completion - that is, to find the various populations and to study their statistics, interrelations, and origins. The practical benefit to SERC is that we are finding Earth-approaching asteroids that are accessible for mining. Our system can detect Earth-approachers in the 1-km size range even when they are far away, and can detect smaller objects when they are moving rapidly past Earth. Until Spacewatch, the size range of 6-300 meters in diameter for the near-Earth asteroids was unexplored. This important region represents the transition between the meteorites and the larger observed near-Earth asteroids. One of our Spacewatch discoveries, 1991 VG, may be representative of a new orbital class of object. If it is really a natural object, and not man-made, its orbital parameters are closer to those of the Earth than we have seen before; its delta V is the lowest of all objects known thus far. We may expect new discoveries as we continue our surveying, with fine-tuning of the techniques.
Effects of a Moving Distractor Object on Time-to-Contact Judgments
ERIC Educational Resources Information Center
Oberfeld, Daniel; Hecht, Heiko
2008-01-01
The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a…
Ultralow-dose, feedback imaging with laser-Compton X-ray and laser-Compton gamma ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barty, Christopher P. J.
Ultralow-dose x-ray or gamma-ray imaging is based on fast, electronic control of the output of a laser-Compton x-ray or gamma-ray source (LCXS or LCGS). X-ray or gamma-ray shadowgraphs are constructed one (or a few) pixel(s) at a time by monitoring the LCXS or LCGS beam energy required at each pixel of the object to achieve a threshold level of detectability at the detector. In one example, once the threshold for detection is reached, an electronic or optical signal is sent to the LCXS/LCGS that enables a fast optical switch, which diverts, either in space or time, the laser pulses used to create Compton photons. In this way, the object is prevented from being exposed to any further Compton x-rays or gamma-rays until either the laser-Compton beam or the object is moved so that a new pixel location may be illuminated.
Remote sensing using MIMO systems
Bikhazi, Nicolas; Young, William F; Nguyen, Hung D
2015-04-28
A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
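One simple way to realize a "spatial information variable" derived from the channel matrix is a normalized deviation from an empty-room reference, accumulated over time into a signature. This is a hypothetical sketch, since the abstract does not specify the exact variable:

```python
import numpy as np

def spatial_variable(H, H_ref):
    """Scalar spatial-information variable: normalized Frobenius-norm
    deviation of the current channel matrix from an empty-room reference.
    (The choice of norm is an illustrative assumption.)"""
    return float(np.linalg.norm(H - H_ref) / np.linalg.norm(H_ref))

def signature(channel_matrices, H_ref):
    """Accumulate the variable over a sequence of channel estimates,
    yielding a time series characteristic of the moving object."""
    return np.array([spatial_variable(H, H_ref) for H in channel_matrices])
```

An object crossing the propagation paths perturbs the multipath structure, so the resulting time series departs from zero in a pattern that can be matched against stored signatures.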
Presentation of a large amount of moving objects in a virtual environment
NASA Astrophysics Data System (ADS)
Ye, Huanzhuo; Gong, Jianya; Ye, Jing
2004-05-01
Managing the presentation of a large number of moving objects in a virtual environment requires careful design. A motion state model (MSM) is used to represent the motion of objects, and a 2^n tree is used to index the motion data stored in databases or files. To minimize the memory occupied by static models, a cache with LRU or FIFO refreshing is introduced. DCT and wavelet transforms work well with different playback speeds of motion presentation because they can filter low frequencies from motion data and adjust the filter according to playback speed. Since large amounts of data are continuously retrieved, processed, displayed, and then discarded, multithreading is naturally employed, though a single thread with carefully arranged data retrieval also works well when the number of objects is not very large. With multithreading, concurrency should be introduced at data retrieval, where waiting may occur, rather than at calculation or display, and synchronization should be carefully arranged so that different threads collaborate correctly. Collision detection is not needed when playing back history data and sampled current data; however, it is necessary for spatial state prediction. When the current state is presented, either a predicting-adjusting method or a late-updating method can be used according to the users' preference.
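The cache with LRU refreshing mentioned above can be sketched with an ordered dictionary; the capacity and interface are illustrative choices, not the paper's design:

```python
from collections import OrderedDict

class ModelCache:
    """Fixed-size cache for static model data with LRU eviction, so that
    only recently displayed objects stay resident in memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        self._store.move_to_end(key)         # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

A FIFO variant would simply omit the `move_to_end` call in `get`, evicting in insertion order regardless of access.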
Perceptual impressions of causality are affected by common fate.
White, Peter A
2017-03-24
Many studies of perceptual impressions of causality have used a stimulus in which a moving object (the launcher) contacts a stationary object (the target) and the latter then moves off. Such stimuli give rise to an impression that the launcher makes the target move. In the present experiments, instead of a single target object, an array of four vertically aligned objects was used. The launcher contacted none of them, but stopped at a point between the two central objects. The four objects then moved with similar motion properties, exhibiting the Gestalt property of common fate. Strong impressions of causality were reported for this stimulus. It is argued that the array of four objects was perceived, by the likelihood principle, as a single object with some parts unseen, that the launcher was perceived as contacting one of the unseen parts of this object, and that the causal impression resulted from that. Supporting that argument, stimuli in which kinematic features were manipulated so as to weaken or eliminate common fate yielded weaker impressions of causality.
Moving Object Localization Based on UHF RFID Phase and Laser Clustering
Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq
2018-01-01
RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited, since RFID provides neither distance nor bearing information about the tag. This paper proposes a new approach for the localization of a moving object using a particle filter, incorporating RFID phase and laser-based clustering from 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into clusters and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities and select the K clusters with the best similarity scores. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of moving objects. The feasibility of this approach is validated on a Scitos G5 service robot, and the results show a localization accuracy of up to 0.25 m. PMID:29522458
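The phase-based velocity step can be illustrated with the standard UHF RFID relation v = λΔφ/(4πΔt), where the factor 4π accounts for the round-trip signal path; the phase-wrapping treatment below is an assumption, not the authors' exact pipeline:

```python
import numpy as np

def phase_velocity(phi1, phi2, dt, wavelength):
    """Radial velocity of a tag from two phase readings taken dt apart.
    Positive means the tag is receding from the reader antenna.
    The phase difference is wrapped into (-pi, pi], which assumes the
    tag moves less than a quarter wavelength between readings."""
    dphi = np.angle(np.exp(1j * (phi2 - phi1)))   # wrap into (-pi, pi]
    return wavelength * dphi / (4 * np.pi * dt)
```

At the 915 MHz UHF band the wavelength is about 0.33 m, so the quarter-wavelength ambiguity limit corresponds to roughly 8 cm of motion per reading interval.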
JPRS report: Science and technology. Central Eurasia
NASA Astrophysics Data System (ADS)
1994-05-01
Translated articles cover the following topics: optimal systems to detect and classify moving objects; multiple identification of optical readings in multisensor information and measurement system; method of first integrals in synthesis of optimal control; study of the development of turbulence in the region of a break above a triangular wing; electroerosion machining in aviation engine construction; and cumulation of a flat shock wave in a tube by a thin parietal gas layer of lower density.
Tracking Objects with Networked Scattered Directional Sensors
NASA Astrophysics Data System (ADS)
Plarre, Kurt; Kumar, P. R.
2007-12-01
We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinated transformation. The estimation is done in an "ad-hoc" coordinate system, which we call "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
NASA Astrophysics Data System (ADS)
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms, using quantitative measures, for video surveillance in VIoT based on salient features of the datasets. The thresholding algorithms of Otsu, Kapur, and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects; however, its performance degraded for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing, and scarce illumination changes.
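Of the thresholding algorithms compared, Otsu's method is the most compact to sketch: it picks the threshold that maximizes the between-class variance of the two resulting pixel classes. The numpy implementation below operates on histogram bins; the bin count is an illustrative choice, not the paper's configuration:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the gray level that maximizes the
    between-class variance of the two classes it separates."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    omega = np.cumsum(p)                     # class-0 probability
    mu = np.cumsum(p * np.arange(bins))      # class-0 cumulative mean (bin units)
    mu_t = mu[-1]                            # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    k = np.nanargmax(sigma_b)                # best split index
    return edges[k + 1]                      # gray level separating the classes
```

For change detection, the method would be applied to the difference image of two frames, labeling pixels above the threshold as changed.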
Optical system for object detection and delineation in space
NASA Astrophysics Data System (ADS)
Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran
2018-01-01
Object recognition and delineation is an important task in many environments, such as crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present, and automatically recognize and segment the pistol with various computer-vision algorithms. Based on this segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediary device, such as an augmented-reality handheld device, glasses, or screens. Using additional optical devices, such as a diffraction grating and a cylinder lens, the pistol's size can be estimated. The exact location of the pistol in space remains marked, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking objects in space.
An Investigation of Automatic Change Detection for Topographic Map Updating
NASA Astrophysics Data System (ADS)
Duncan, P.; Smit, J.
2012-08-01
Changes to the landscape are constantly occurring, and it is essential for geospatial and mapping organisations that these changes are regularly detected and captured so that map databases can be updated to reflect the current status of the landscape. The Chief Directorate of National Geospatial Information (CD: NGI), South Africa's national mapping agency, currently relies on manual methods of detecting and capturing changes. These manual methods are time consuming and labour intensive, and rely on the skills and interpretation of the operator. It is therefore necessary to move towards more automated methods in the production process at CD: NGI. The aim of this research is to investigate a methodology for automatic or semi-automatic change detection for the purpose of updating topographic databases. The method investigated detects changes through image classification as well as spatial analysis, and is focussed on urban landscapes. The major data inputs into this study are high-resolution aerial imagery and existing topographic vector data. Initial results indicate that traditional pixel-based image classification approaches are unsatisfactory for large-scale land-use mapping and that object-oriented approaches hold more promise. Even for object-oriented image classification, broad-scale generalisation of techniques has produced inconsistent results. A solution may lie in a hybrid approach of pixel- and object-oriented techniques.
Anomaly Detection in Moving-Camera Video Sequences Using Principal Subspace Analysis
Thomaz, Lucas A.; Jardim, Eric; da Silva, Allan F.; ...
2017-10-16
This study presents a family of algorithms based on sparse decompositions that detect anomalies in video sequences obtained from slow moving cameras. These algorithms start by computing the union of subspaces that best represents all the frames from a reference (anomaly free) video as a low-rank projection plus a sparse residue. Then, they perform a low-rank representation of a target (possibly anomalous) video by taking advantage of both the union of subspaces and the sparse residue computed from the reference video. Such algorithms provide good detection results while at the same time obviating the need for previous video synchronization. However, this is obtained at the cost of a large computational complexity, which hinders their applicability. Another contribution of this paper approaches this problem by using intrinsic properties of the obtained data representation in order to restrict the search space to the most relevant subspaces, providing computational complexity gains of up to two orders of magnitude. The developed algorithms are shown to cope well with videos acquired in challenging scenarios, as verified by the analysis of 59 videos from the VDAO database that comprises videos with abandoned objects in a cluttered industrial scenario.
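The low-rank-plus-sparse decomposition at the core of such algorithms can be sketched with a naive alternating scheme of singular-value thresholding (for the low-rank background) and soft thresholding (for the sparse residue). This is a simplified stand-in with invented regularization weights, not the paper's algorithm:

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding (sparsity-inducing shrinkage)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def low_rank_sparse(D, tau=0.5, lam=0.2, iters=50):
    """Alternating decomposition D ~ L + S: L via singular-value
    thresholding captures the slowly varying background subspace,
    S via soft-thresholding captures sparse (anomalous) pixels.
    tau and lam are illustrative regularization weights."""
    S = np.zeros_like(D)
    L = np.zeros_like(D)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * soft(sv, tau)) @ Vt      # shrink singular values
        S = soft(D - L, lam)              # shrink pointwise residue
    return L, S
```

With video frames stacked as the columns of D, entries surviving in S flag pixels that the low-rank background model cannot explain, such as an abandoned object.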
Modeling job sites in real time to improve safety during equipment operation
NASA Astrophysics Data System (ADS)
Caldas, Carlos H.; Haas, Carl T.; Liapi, Katherine A.; Teizer, Jochen
2006-03-01
Real-time three-dimensional (3D) modeling of work zones has received increasing interest as a way to perform equipment operation faster, safer, and more precisely. In addition, hazardous job site environments such as construction sites call for new devices that can rapidly and actively model static and dynamic objects. Flash LADAR (Laser Detection and Ranging) cameras are one of the recent technology developments that allow rapid spatial data acquisition of scenes. Algorithms that can process and interpret the output of such enabling technologies into three-dimensional models have the potential to significantly improve work processes. One particularly important application is modeling the location and path of objects in the trajectory of heavy construction equipment. Detecting and mapping people, materials, and equipment into a three-dimensional computer model allows analysis of their location and path, and can be used to limit or restrict access to hazardous areas. This paper presents experiments and results of a real-time three-dimensional modeling technique to detect static and moving objects within the field of view of a high-frame-update-rate laser range scanning device. Applications related to heavy equipment operations on transportation and construction job sites are described.
Smart sensing surveillance video system
NASA Astrophysics Data System (ADS)
Hsu, Charles; Szu, Harold
2016-05-01
An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications, including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System's capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It is directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as to applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
Acoustic Event Detection and Classification
NASA Astrophysics Data System (ADS)
Temko, Andrey; Nadeu, Climent; Macho, Dušan; Malkin, Robert; Zieger, Christian; Omologo, Maurizio
The human activity that takes place in meeting rooms or classrooms is reflected in a rich variety of acoustic events (AEs), produced either by the human body or by objects handled by humans, so determining both the identity of sounds and their position in time may help to detect and describe that human activity. Speech is usually the most informative sound, but other kinds of AEs may also carry useful information: for example, clapping or laughing during a speech, a strong yawn in the middle of a lecture, or a chair moving or a door slamming just after the meeting has started. Additionally, detection and classification of sounds other than speech may be useful for enhancing the robustness of speech technologies such as automatic speech recognition.
Affordable Wide-field Optical Space Surveillance using sCMOS and GPUs
NASA Astrophysics Data System (ADS)
Zimmer, P.; McGraw, J.; Ackermann, M.
2016-09-01
Recent improvements in sCMOS technology allow for affordable, wide-field, and rapid cadence surveillance from LEO to out past GEO using largely off-the-shelf hardware. sCMOS sensors, until very recently, suffered from several shortcomings when compared to CCD sensors - lower sensitivity, smaller physical size and less predictable noise characteristics. Sensors that overcome the first two of these are now available commercially and the principals at J.T. McGraw and Associates (JTMA) have developed observing strategies that minimize the impact of the third, while leveraging the key features of sCMOS, fast readout and low average readout noise. JTMA has integrated a new generation sCMOS sensor into an existing COTS telescope system in order to develop and test new detection techniques designed for uncued optical surveillance across a wide range of apparent object angular rates - from degree per second scale of LEO objects to a few arcseconds per second for objects out past GEO. One further complication arises from this: increased useful frame rate means increased data volume. Fortunately, GPU technology continues to advance at a breakneck pace and we report on the results and performance of our new detection techniques implemented on new generation GPUs. Early results show significance within 20% of the expected theoretical limiting signal-to-noise using commodity GPUs in near real time across a wide range of object parameters, closing the gap in detectivity between moving objects and tracked objects.
A novel snapshot polarimetric imager
NASA Astrophysics Data System (ADS)
Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.
2012-10-01
Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded by turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution for object recognition and identification requires long-focal-length optical systems, which are incompatible with the size and weight restrictions of aircraft. Techniques that allow detection and recognition of an object at the single-pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarizers (and spectral filters) that recorded sequential polarized images from which the complete Stokes vector could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases the cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus, polarimetric data are recorded without latency in a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) are sufficient to increase the probability of detection while reducing false alarms compared to conventional unpolarized imaging.
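With only two orthogonal intensity images H and V, the recoverable Stokes components are S0 = H + V and S1 = H - V, so the pixel-level contrast metric reduces to |S1|/S0. A minimal sketch of that computation (the function name and test values are illustrative, not from the paper):

```python
import numpy as np

def dolp_two_state(h, v, eps=1e-12):
    """Degree of linear polarization from H and V intensity images.

    With only two orthogonal states measured, S0 = H + V and
    S1 = H - V are recoverable; the metric here is |S1| / S0.
    """
    h = np.asarray(h, dtype=float)
    v = np.asarray(v, dtype=float)
    s0 = h + v
    s1 = h - v
    return np.abs(s1) / (s0 + eps)

# A strongly polarized pixel (H >> V) stands out against an
# unpolarized background (H == V), even at single-pixel scale.
h = np.array([[5.0, 1.0], [1.0, 1.0]])
v = np.array([[1.0, 1.0], [1.0, 1.0]])
print(dolp_two_state(h, v))
```

The top-left pixel scores |5-1|/(5+1) = 2/3 while the unpolarized pixels score zero, which is the detection contrast the abstract describes.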
Automated Detection of Small Bodies by Space Based Observation
NASA Astrophysics Data System (ADS)
Bidstrup, P. R.; Grillmayer, G.; Andersen, A. C.; Haack, H.; Jorgensen, J. L.
The number of known comets and asteroids increases every year. To date, roughly 250,000 of the largest minor planets, as they are usually called, are known. These discoveries are due to Earth-based observation, which has intensified over the previous decades, and to the larger telescopes and arrays of telescopes being used to explore our Solar System. It is believed that all near-Earth and Main Belt asteroids with diameters above 10 to 30 km have been discovered, leaving these groups of objects observationally complete. However, the cataloguing of smaller bodies is incomplete, as only a very small fraction of the expected number has been discovered. It is estimated that approximately 10^10 main belt asteroids in the size range 1 m to 1 km are too faint to be observed using Earth-based telescopes. In order to observe these small bodies, a space-based search must be initiated, both to remove atmospheric disturbances and to minimize the distance to the asteroids, thereby minimizing the required camera integration times. A new method of space-based detection of moving non-stellar objects is currently being developed utilizing the Advanced Stellar Compass (ASC), built for spacecraft attitude determination at the Technical University of Denmark for the Ørsted satellite. The ASC serves as the backbone technology of the project, as it is capable of fully automated distinction between known and unknown celestial objects. By processing only objects of particular interest, i.e., moving objects, it will be possible to discover small bodies with a minimum of ground control, with the ultimate ambition of a fully automated space search probe. Currently, the ASC is being mounted on the Flying Laptop satellite of the Institute of Space Systems, Universität Stuttgart. After launch into a low Earth polar orbit in 2008, it will test the detection method with ASC equipment that already has significant in-flight experience.
A future use of ASC-based automated detection of small bodies is currently at a preliminary stage and is known as the Bering project, a deep space survey to the asteroid Main Belt. With a successful detection method, the Bering mission is expected to discover approximately six new small objects per day, and will thus, over the course of a few years, discover 5,000-10,000 new sub-kilometer asteroids. Discovery of new small bodies can: 1) provide further links between groups of meteorites; 2) constrain the cratering rate at planetary surfaces and thus allow significantly improved cratering ages for terrains on Mars and other planets; and 3) help determine the processes that transfer small asteroids from orbits in the asteroid Main Belt to the inner Solar System.
Spatial Orientation from Motion-Produced Blur Patterns: Detection of Curvature Change.
1978-08-01
[OCR-garbled table of detection thresholds versus fixation frequency (Hz) and velocity (°/sec).] The report invokes the principle of minimum object change, which implies that the perceptual tendency in a case like this is to see a rigid object moving in translation, neither stretching nor bending nor twisting as a helical pattern would be required to do. Johansson notes that the principle may not hold up for complex…
Object detection and tracking system
Ma, Tian J.
2017-05-30
Methods and apparatuses for analyzing a sequence of images for an object are disclosed herein. In a general embodiment, the method identifies a region of interest in the sequence of images. The object is likely to move within the region of interest. The method divides the region of interest in the sequence of images into sections and calculates signal-to-noise ratios for a section in the sections. A signal-to-noise ratio for the section is calculated using the section in the image, a prior section in a prior image to the image, and a subsequent section in a subsequent image to the image. The signal-to-noise ratios are for potential velocities of the object in the section. The method also selects a velocity from the potential velocities for the object in the section using a potential velocity in the potential velocities having a highest signal-to-noise ratio in the signal-to-noise ratios.
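The velocity-selection step can be illustrated with a shift-and-sum sketch: aligning the prior and subsequent frames under a candidate velocity reinforces a consistently moving object while averaging down noise. This is a hypothetical stand-in for the patented method; all names and the SNR definition (peak over standard deviation) are our own:

```python
import numpy as np

def best_velocity(prior, current, subsequent, candidates):
    """Pick the per-frame (dy, dx) shift whose three-frame
    shift-and-sum stack has the highest peak-to-noise ratio.
    Only the correct velocity stacks the object onto itself."""
    best, best_snr = None, -np.inf
    for dy, dx in candidates:
        # Align prior and subsequent frames to the current one
        # under the assumed constant velocity, then average.
        stack = (np.roll(prior, (dy, dx), axis=(0, 1)) +
                 current +
                 np.roll(subsequent, (-dy, -dx), axis=(0, 1))) / 3.0
        snr = stack.max() / (stack.std() + 1e-12)
        if snr > best_snr:
            best, best_snr = (dy, dx), snr
    return best, best_snr

# Synthetic section: a dot moving one pixel down-right per frame.
frames = np.zeros((3, 16, 16))
for k in range(3):
    frames[k, 5 + k, 5 + k] = 1.0
v, snr = best_velocity(frames[0], frames[1], frames[2],
                       [(0, 0), (1, 0), (0, 1), (1, 1)])
```

With the dot moving (1, 1) per frame, only that candidate stacks all three detections onto one pixel, so it wins the SNR comparison.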
Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach
Cheein, Fernando Auat; Scaglia, Gustavo
2012-01-01
Stationary range laser sensors are extensively used in risky environments for intruder monitoring, restricted-space violation detection, and workspace determination. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-monitored restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes it suitable for embedded applications. In the experimental results presented herein (carried out in different scenarios), our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
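The first-order Taylor idea, p(t + Δt) ≈ p(t) + v(t)Δt with the velocity estimated by finite differences, can be sketched as follows; the function name and the single straight-line boundary are illustrative assumptions, not the authors' implementation:

```python
def predict_trespass(positions, dt, boundary_x, horizon):
    """First-order Taylor prediction of a tracked agent's motion.

    positions: last two (x, y) detections; dt: sampling interval.
    Returns True if the extrapolated x-coordinate crosses
    boundary_x within `horizon` seconds. The single-axis boundary
    is a simplification for illustration.
    """
    (x0, y0), (x1, y1) = positions
    vx = (x1 - x0) / dt            # finite-difference velocity estimate
    if vx <= 0:                    # moving away from, or parallel to, boundary
        return False
    t_cross = (boundary_x - x1) / vx
    return 0.0 <= t_cross <= horizon

# An agent at x = 1.0 then 1.5 m (dt = 0.5 s) heads toward a
# boundary at x = 3.0 m; it is flagged within a 2 s horizon.
print(predict_trespass([(1.0, 0.0), (1.5, 0.0)], 0.5, 3.0, 2.0))
```

Because everything reduces to one multiply-add and a comparison per detection, the prediction is cheap enough for the embedded targets the paper mentions.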
Downstream Fabry-Perot interferometer for acoustic wave monitoring in photoacoustic tomography.
Nuster, Robert; Gruen, Hubert; Reitinger, Bernhard; Burgholzer, Peter; Gratt, Sibylle; Passler, Klaus; Paltauf, Guenther
2011-03-15
An optical detection setup consisting of a focused laser beam fed into a downstream Fabry-Perot interferometer (FPI) for demodulation of acoustically generated optical phase variations is investigated for its applicability in photoacoustic tomography. The device measures the time derivative of acoustic signals integrated along the beam. Compared to a setup where the detection beam is part of a Mach-Zehnder interferometer, the signal-to-noise ratio of the FPI is lower, but the image quality of the two devices is similar. Using the FPI in a photoacoustic tomograph allows scanning the probe beam around the imaging object without moving the latter.
The temporal dynamics of heading perception in the presence of moving objects
Fajen, Brett R.
2015-01-01
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models. PMID:26510765
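The bias mechanism can be made concrete with a least-squares focus-of-expansion estimate: under pure observer translation in a rigid scene, every flow vector u at image point p is parallel to (p - foe), a constraint linear in the heading point, and flow from an independently moving object violates the constraint and drags the estimate. A sketch with synthetic values (not the study's stimuli or model):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares focus of expansion (heading point) from optic flow.

    The parallelism constraint u_y*(p_x - f_x) - u_x*(p_y - f_y) = 0
    gives one linear equation in (f_x, f_y) per flow vector.
    """
    points = np.asarray(points, float)
    flows = np.asarray(flows, float)
    A = np.column_stack([flows[:, 1], -flows[:, 0]])
    b = flows[:, 1] * points[:, 0] - flows[:, 0] * points[:, 1]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic radial flow expanding from (2, 3): each flow vector is
# parallel to (p - foe), so the estimate recovers the heading point.
pts = np.array([[4.0, 3.0], [2.0, 6.0], [0.0, 0.0], [5.0, 7.0]])
flw = pts - np.array([2.0, 3.0])
foe = focus_of_expansion(pts, flw)
print(foe)
```

Replacing one of the flow vectors with an object-induced one would perturb the least-squares solution, which is the heading bias the experiments measure.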
Robot environment expert system
NASA Technical Reports Server (NTRS)
Potter, J. L.
1985-01-01
The Robot Environment Expert System uses a hexadecimal tree (hextree) data structure to model a complex robot environment in which not only the robot arm but also the robot itself and other objects may move. The hextree model allows dynamic updating, collision avoidance, and path planning over time to avoid moving objects.
NASA Astrophysics Data System (ADS)
Yamada, Masayoshi; Fukuzawa, Masayuki; Kitsunezuka, Yoshiki; Kishida, Jun; Nakamori, Nobuyuki; Kanamori, Hitoshi; Sakurai, Takashi; Kodama, Souichi
1995-05-01
In order to detect pulsation in a series of noisy ultrasound-echo moving images of a newborn baby's head for pediatric diagnosis, a digital image processing system capable of recording at video rate and processing the recorded series of images was constructed. The time-sequence variation of each pixel value in the series of moving images was analyzed, and an algorithm based on the Fourier transform was developed for pulsation detection, noting that the pulsation associated with blood flow changes periodically with the heartbeat. Using this system and algorithm, pulsation was successfully detected in the noisy ultrasound-echo image series.
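The per-pixel Fourier idea can be sketched as follows: each pixel's time series is transformed and the spectral power inside a cardiac band is summed, so periodically pulsating pixels score high. This is an illustrative sketch, not the authors' exact algorithm; the band limits and names are our assumptions:

```python
import numpy as np

def pulsation_map(frames, fps, f_lo=1.5, f_hi=3.5):
    """Per-pixel spectral power in a cardiac band (here 90-210 bpm).

    frames: array (T, H, W) of pixel values over time. Each pixel's
    time series is Fourier-transformed and the power inside the band
    summed, so pixels pulsating with the heartbeat score high while
    static or randomly noisy pixels score low.
    """
    t = frames.shape[0]
    series = frames - frames.mean(axis=0)            # remove DC per pixel
    spectrum = np.abs(np.fft.rfft(series, axis=0)) ** 2
    freqs = np.fft.rfftfreq(t, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].sum(axis=0)

# One pixel pulsating at 2 Hz, one constant, sampled at 30 fps.
t = np.arange(60) / 30.0
frames = np.zeros((60, 1, 2))
frames[:, 0, 0] = 1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
frames[:, 0, 1] = 1.0
pm = pulsation_map(frames, fps=30.0)
```

Thresholding the resulting map would localize the pulsating regions the paper detects in the echo sequences.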
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Drosophila increase exploration after visually detecting predators.
de la Flor, Miguel; Chen, Lijian; Manson-Bishop, Claire; Chu, Tzu-Chun; Zamora, Kathya; Robbins, Danielle; Gunaratne, Gemunu; Roman, Gregg
2017-01-01
Novel stimuli elicit behaviors that are collectively known as specific exploration. These behaviors allow the animal to become more familiar with the novel objects within its environment. Specific exploration is frequently suppressed by defensive reactions to predator cues. Herein, we examine whether this suppression occurs in Drosophila melanogaster by measuring the response of these flies to wild-harvested predators. The flies used in our experiments had been cultured without predator threat for multiple decades. In a circular arena with centrally caged predators, wild type Drosophila actively avoided the pantropical jumping spider, Plexippus paykulli, and the Texas unicorn mantis, Phyllovates chlorophaena, indicating an innate defensive reaction to these predators. Interestingly, wild type Drosophila males also avoided a centrally caged mock spider, and the avoidance of the mock spider became exaggerated when it was made to move within the cage. Visually impaired Drosophila failed to detect and avoid Plexippus paykulli and the moving mock spider, while the broadly anosmic orco2 mutants were fully capable of detecting and avoiding Plexippus paykulli, indicating that these flies principally relied upon vision to perceive the predator stimuli. During early exploration of the arena, exploratory activity increased in the presence of Plexippus paykulli and the moving mock spider. The elevated activity induced by Plexippus paykulli disappeared after the flies had finished exploring, suggesting that they were capable of habituating to the predator cues. Taken together, these results indicate that despite being isolated from predators for decades, Drosophila will visually detect these predators, retain innate defensive behaviors, respond by increasing exploratory activity in the arena rather than suppressing activity, and may habituate to normal predator cues.
A semantic autonomous video surveillance system for dense camera networks in Smart Cities.
Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio
2012-01-01
This paper presents a proposal of an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed on the system, and therefore making it suitable for its usage as an integrated safety and security solution in Smart Cities. Alarm detection is performed on the basis of parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language easy to understand for human operators, capable of raising enriched alarms with descriptions of what is happening on the image, and to automate reactions to them such as alerting the appropriate emergency services using the Smart City safety network.
Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki
2016-02-01
As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD systems, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for easy evaluation of the performance (spatial resolution and dynamic range) of FPDs in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it enables easy visual evaluation of imaging performance (spatial resolution and dynamic range).
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. The problem is incorporated in the framework of an on-line motion planning algorithm to achieve collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L1 or L-infinity norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem, in which uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles, is then formulated. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to anticipate possible collisions with the moving obstacles and estimate the collision time.
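The paper's key observation, that replacing the Euclidean norm with the L1 (or L-infinity) norm turns the minimum-distance problem into a linear program, can be sketched with SciPy's linprog. The vertex representation and helper names below are our own:

```python
import numpy as np
from scipy.optimize import linprog

def l1_distance(verts_a, verts_b):
    """Minimum L1 distance between two convex polytopes given by their
    vertices, posed as a linear program.

    Variables: convex weights lam (m) and mu (n) selecting points
    x = A^T lam and y = B^T mu, plus per-axis slacks s with
    s_i >= |x_i - y_i|; minimize sum(s).
    """
    A = np.asarray(verts_a, float)            # shape (m, d)
    B = np.asarray(verts_b, float)            # shape (n, d)
    m, d = A.shape
    n = B.shape[0]
    nvar = m + n + d
    c = np.concatenate([np.zeros(m + n), np.ones(d)])
    # x - y <= s and y - x <= s, one row per axis
    ub_rows = []
    for sign in (+1.0, -1.0):
        block = np.zeros((d, nvar))
        block[:, :m] = sign * A.T
        block[:, m:m + n] = -sign * B.T
        block[:, m + n:] = -np.eye(d)
        ub_rows.append(block)
    A_ub = np.vstack(ub_rows)
    b_ub = np.zeros(2 * d)
    # convex-combination constraints: each weight set sums to one
    A_eq = np.zeros((2, nvar))
    A_eq[0, :m] = 1.0
    A_eq[1, m:m + n] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0, 1.0],
                  bounds=[(0, None)] * nvar)
    return res.fun

# Unit square [0,1]^2 versus the same square shifted to [3,4] x [0,1]:
# the closest faces are 2 apart along x, so the L1 distance is 2.
sq = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
d = l1_distance(sq, sq + np.array([3.0, 0.0]))
print(d)
```

Because the whole computation is one LP solve, it can be re-run every sensing cycle, which is what makes the norm substitution attractive for on-line planning.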
Gao, Han; Li, Jingwen
2014-06-19
A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Built on a particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking low signal-to-noise ratio (SNR) moving targets with SAR systems, for which the traditional track-after-detect (TAD) approach is inadequate. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved during tracking, which leads directly to the true estimate. With a sub-area substituted for the whole area in calculating the likelihood ratio, and a pertinent choice of the number of particles, computational efficiency is improved with little loss in detection and tracking performance. The feasibility of the approach is validated and its performance evaluated with Monte Carlo trials. It is demonstrated that the proposed approach can detect and track a moving target with an SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
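A toy one-dimensional version conveys the track-before-detect idea: particles carrying (position, velocity) are weighted by raw frame intensity, so no per-frame detection threshold is ever applied. This is an illustrative sketch under our own simplifications; it does not use the paper's SAR signal model or likelihood ratio:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_tbd(frames, n_particles=2000, dt=1.0):
    """Track a dim target through a stack of 1-D frames without
    thresholding: predict particles forward, weight them by raw
    intensity, resample, and read the state off the particle cloud."""
    n_pix = frames.shape[1]
    pos = rng.uniform(0, n_pix, n_particles)
    vel = rng.uniform(-2, 2, n_particles)
    for frame in frames:
        # predict: constant-velocity motion plus process noise
        pos = pos + vel * dt + rng.normal(0, 0.3, n_particles)
        vel = vel + rng.normal(0, 0.1, n_particles)
        pos = np.clip(pos, 0, n_pix - 1)
        # update: crude likelihood weight from raw pixel intensity
        w = frame[pos.astype(int)] + 1e-6
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        pos, vel = pos[idx], vel[idx]
    return pos.mean(), vel.mean()

# Dim target starting at pixel 10, drifting +1 pixel per frame,
# buried in clipped Gaussian background noise.
frames = rng.normal(0, 0.1, (20, 64)).clip(min=0)
for k in range(20):
    frames[k, 10 + k] += 1.0
p_est, v_est = pf_tbd(frames)
```

Integrating evidence across frames this way is what lets TBD pull out targets too weak for frame-by-frame detection, the regime where the paper reports its gains over TAD.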
ERIC Educational Resources Information Center
Kemp, Andrew
2005-01-01
Everything moves. Even apparently stationary objects such as houses, roads, or mountains are moving because they sit on a spinning planet orbiting the Sun. Not surprisingly, the concepts of motion and the forces that affect moving objects are an integral part of the middle school science curriculum. However, middle school students are often taught…
ERIC Educational Resources Information Center
Houlrik, Jens Madsen
2009-01-01
The Lorentz transformation applies directly to the kinematics of moving particles viewed as geometric points. Wave propagation, on the other hand, involves moving planes which are extended objects defined by simultaneity. By treating a plane wave as a geometric object moving at the phase velocity, novel results are obtained that illustrate the…
Permeation of limonene through disposable nitrile gloves using a dextrous robot hand
Banaee, Sean; S Que Hee, Shane
2017-01-01
Objectives: The purpose of this study was to investigate the permeation of the low-volatility solvent limonene through different disposable, unlined, unsupported nitrile exam whole gloves (blue, purple, sterling, and lavender, from Kimberly-Clark). Methods: This study utilized a moving and a static dextrous robot hand as part of a novel dynamic permeation system that allowed sampling at specific times. Quantitation of limonene in samples was based on capillary gas chromatography-mass spectrometry and the internal standard method (4-bromophenol). Results: The average post-permeation thicknesses (before reconditioning) for all gloves, for both the moving and static hand, differed by more than 10% from the pre-permeation ones (P≤0.05), although this was not so after reconditioning. The standardized breakthrough times and steady-state permeation periods were similar for the blue, purple, and sterling gloves. Both methods had similar sensitivity. The lavender glove showed a higher permeation rate (0.490±0.031 μg/cm2/min) for the moving robotic hand compared to the non-moving hand (P≤0.05), this being ascribed to a thickness threshold. Conclusions: Permeation parameters for the static and dynamic robot hand models indicate that both methods have similar sensitivity in detecting the analyte during permeation, and that the blue, purple, and sterling gloves behave similarly during the permeation process whether moving or non-moving. PMID:28111415
Scientific results obtained by the Busot observatory
NASA Astrophysics Data System (ADS)
García-Lozano, R.; Rodes, J. J.; Torrejón, J. M.; Bernabéu, G.; Berná, J. Á.
2016-12-01
We present the discovery of three new W UMa systems by our group as part of a photometric follow-up of variable stars carried out with the Busot observatory 36 cm robotic telescope, in collaboration with the X-ray astronomy group at the University of Alicante (Alicante, Spain). Specifically, we show the high limiting magnitude for detecting moving objects (V ≈ 21 mag) and the high stability and accuracy attained in photometry, which allow us to measure very shallow planet transits.
2003-08-14
KENNEDY SPACE CENTER, FLA. - In the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, workers move the first half of the fairing around the Space Infrared Telescope Facility (SIRTF) behind it for encapsulation. SIRTF will obtain images and spectra by detecting the infrared energy, or heat, radiated by objects in space. Consisting of a 0.85-meter telescope and three cryogenically cooled science instruments, SIRTF will be the largest infrared telescope ever launched into space. It is the fourth and final element in NASA’s family of orbiting “Great Observatories.” Its highly sensitive instruments will give a unique view of the Universe and peer into regions of space that are hidden from optical telescopes.
2003-08-14
KENNEDY SPACE CENTER, FLA. - In the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, the top of the fairing is seen as it moves into place around the Space Infrared Telescope Facility (SIRTF).
2003-08-14
KENNEDY SPACE CENTER, FLA. - In the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, the first half of the fairing is moved around the Space Infrared Telescope Facility (SIRTF).
2003-08-14
KENNEDY SPACE CENTER, FLA. - In the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, the first half of the fairing (background) moves toward the Space Infrared Telescope Facility (foreground) for encapsulation.
2003-08-14
KENNEDY SPACE CENTER, FLA. - In the mobile service tower on Launch Pad 17-B, Cape Canaveral Air Force Station, workers watch as the first half of the fairing moves closer around the Space Infrared Telescope Facility (SIRTF).
Anomaly detection in forward looking infrared imaging using one-class classifiers
NASA Astrophysics Data System (ADS)
Popescu, Mihail; Stone, Kevin; Havens, Timothy; Ho, Dominic; Keller, James
2010-04-01
In this paper we describe a method for generating cues of possible abnormal objects present in the field of view of an infrared (IR) camera installed on a moving vehicle. The proposed method has two steps. In the first step, for each frame, we generate a set of possible points of interest using a corner detection algorithm. In the second step, the points related to the background are discarded from the point set using a one-class classifier (OCC) trained on features extracted from a local neighborhood of each point. The advantage of using an OCC is that we do not need examples from the "abnormal object" class to train the classifier. Instead, the OCC is trained using corner points from images known to be abnormal-object free, i.e., that contain only background scenes. To further reduce the number of false alarms we use a temporal fusion procedure: a region has to be detected as "interesting" in m out of n, m
Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.
Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro
2018-03-01
In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented real (AR) objects can control human interpretation and reasoning about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.
Using the time shift in single pushbroom datatakes to detect ships and their heading
NASA Astrophysics Data System (ADS)
Willburger, Katharina A. M.; Schwenk, Kurt
2017-10-01
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely common, there is little literature discussing algorithms based on imagery of optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most of those scanners, which allows not only the detection of a ship but also the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists mainly of three steps: the creation of a land-water mask, the object extraction and the deeper examination of each single object. The latter step is built up by several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their direction of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook to future improvements.
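Once a ship's centroid has been located in two time-shifted channels, the heading computation the abstract alludes to reduces to simple geometry. The sketch below is a minimal illustration of that idea, not the paper's algorithm; the function name, the grid-north convention, and the ground sample distance are all assumptions made for the example.

```python
import math

def heading_and_speed(pos_a, pos_b, dt, gsd=5.0):
    """Estimate a ship's heading and speed from its apparent displacement
    between two time-shifted pushbroom channels.

    pos_a, pos_b -- (col, row) pixel centroids of the ship in the earlier
    and later channel; dt -- inter-channel delay in seconds; gsd -- assumed
    ground sample distance in metres per pixel.
    """
    dx = (pos_b[0] - pos_a[0]) * gsd
    dy = (pos_b[1] - pos_a[1]) * gsd
    # Heading clockwise from the +row axis, treated as north here;
    # a real product would use the image's geocoding instead.
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    speed = math.hypot(dx, dy) / dt  # metres per second
    return heading, speed

# A ship displaced 3 px along-column over a 6.5 s inter-channel delay:
hdg, spd = heading_and_speed((100, 50), (103, 50), 6.5)
```

A stationary ship would show zero inter-channel displacement, which is exactly the cue the spectral and geometric filters exploit to separate movers from static objects.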
Dynamic Metasurface Aperture as Smart Around-the-Corner Motion Detector.
Del Hougne, Philipp; F Imani, Mohammadreza; Sleasman, Timothy; Gollub, Jonah N; Fink, Mathias; Lerosey, Geoffroy; Smith, David R
2018-04-25
Detecting and analysing motion is a key feature of Smart Homes and the connected sensor vision they embrace. At present, most motion sensors operate in line-of-sight Doppler shift schemes. Here, we propose an alternative approach suitable for indoor environments, which effectively constitute disordered cavities for radio frequency (RF) waves; we exploit the fundamental sensitivity of modes of such cavities to perturbations, caused here by moving objects. We establish experimentally three key features of our proposed system: (i) ability to capture the temporal variations of motion and discern information such as periodicity ("smart"), (ii) non line-of-sight motion detection, and (iii) single-frequency operation. Moreover, we explain theoretically and demonstrate experimentally that the use of dynamic metasurface apertures can substantially enhance the performance of RF motion detection. Potential applications include accurately detecting human presence and monitoring inhabitants' vital signs.
Servo-controlled intravital microscope system
NASA Technical Reports Server (NTRS)
Mansour, M. N.; Wayland, H. J.; Chapman, C. P. (Inventor)
1975-01-01
A microscope system is described for viewing an area of a living body tissue that is rapidly moving, by maintaining the same area in the field-of-view and in focus. A focus sensing portion of the system includes two video cameras at which the viewed image is projected, one camera being slightly in front of the image plane and the other slightly behind it. A focus sensing circuit for each camera differentiates certain high frequency components of the video signal and then detects them and passes them through a low pass filter, to provide DC focus signals whose magnitudes represent the degree of focus. An error signal, equal to the difference between the focus signals, drives a servo that moves the microscope objective so that an in-focus view is delivered to an image viewing/recording camera.
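The focus-sensing scheme lends itself to a compact numerical illustration: differentiate the video line, rectify and average to get a DC focus signal per camera, then subtract the two signals to drive the servo. The sketch below assumes 1-D scanlines and stands in for the analog circuit described above; all names are invented for the example.

```python
def hf_energy(scanline):
    # Differentiate, rectify, and low-pass (here: average) one video line
    # to get a DC focus signal; sharper images carry more HF energy.
    diffs = [abs(b - a) for a, b in zip(scanline, scanline[1:])]
    return sum(d * d for d in diffs) / len(diffs)

def focus_error(line_front, line_behind):
    # Servo drive signal: positive when the front-of-plane camera sees
    # the sharper image (objective should move one way), negative when
    # the behind-plane camera does (move the other way).
    return hf_energy(line_front) - hf_energy(line_behind)

sharp = [0, 10, 0, 10, 0, 10, 0]     # high-contrast, in-focus line
blurred = [5, 5, 6, 5, 5, 6, 5]      # low-contrast, defocused line
err = focus_error(sharp, blurred)    # positive: front camera is sharper
```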
Localization and tracking of moving objects in two-dimensional space by echolocation.
Matsuo, Ikuo
2013-02-01
Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
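Delay estimation of the kind described can be pictured as matched filtering: slide the emitted waveform along the received echo and keep the lag with the highest correlation. The paper's Gaussian-chirplet model is far more refined, so treat this brute-force version as a toy stand-in with invented names.

```python
import math

def estimate_delay(emitted, echo):
    """Estimate the echo delay (in samples) of `emitted` inside `echo`
    by brute-force cross-correlation -- a crude stand-in for the
    chirplet-based delay estimation described in the abstract."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(echo) - len(emitted) + 1):
        score = sum(e * echo[lag + i] for i, e in enumerate(emitted))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A toy LFM-like chirp embedded 7 samples into a quiet trace:
chirp = [math.sin(0.3 * n * n) for n in range(16)]
trace = [0.0] * 7 + chirp + [0.0] * 10
lag = estimate_delay(chirp, trace)  # → 7
```

At a realistic sampling rate each sample of lag error maps directly to a range error, which is why sub-microsecond delay accuracy matters for the bat's ranging performance.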
Detection and Localization of Vibrotactile Signals in Moving Vehicles
2008-05-01
by Andrea S. Krausman and Timothy L. White (U.S. Army Research Laboratory). ARL-TR-4463, Aberdeen Proving Ground, MD 21005-5425, May 2008.
Self-Learning Embedded System for Object Identification in Intelligent Infrastructure Sensors.
Villaverde, Monica; Perez, David; Moreno, Felix
2015-11-17
The emergence of new horizons in the field of travel assistant management leads to the development of cutting-edge systems focused on improving the existing ones. Moreover, new opportunities are also being presented, since systems tend to be more reliable and autonomous. In this paper, a self-learning embedded system for object identification based on adaptive-cooperative dynamic approaches is presented for intelligent sensor infrastructures. The proposed system is able to detect and identify moving objects using a dynamic decision tree. Consequently, it combines machine learning algorithms and cooperative strategies in order to make the system more adaptive to changing environments. Therefore, the proposed system may be very useful for many applications, such as shadow tolls (since several types of vehicles may be distinguished), parking optimization systems, and improved traffic condition systems.
Calabro, Finnegan J.; Beardsley, Scott A.; Vaina, Lucia M.
2012-01-01
Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (colliding trajectories, TTC) or both (passage courses, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism than that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance. PMID:22056519
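The looming cue discussed above supports a classic first-order estimate: time-to-contact equals the object's angular size divided by its rate of angular expansion, with no need to know distance or speed. The sketch below shows that optical relation only; it is not the authors' model, which weights multiple motion cues by their relative time-to-arrival.

```python
def time_to_contact(angular_size, expansion_rate):
    """First-order time-to-contact (tau) from looming alone:
    tau = theta / (d theta / dt).  Only optical variables available
    to the observer enter; distance and speed never do."""
    return angular_size / expansion_rate

# An object subtending 2 deg and expanding at 0.5 deg/s arrives in ~4 s:
tau = time_to_contact(2.0, 0.5)  # → 4.0
```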
Danger detection and escape behaviour in wood crickets.
Dupuy, Fabienne; Casas, Jérôme; Body, Mélanie; Lazzari, Claudio R
2011-07-01
The wind-sensitive cercal system of Orthopteroid insects that mediates the detection of the approach of a predator is a very sensitive sensory system. It has been intensively analysed from a behavioural and neurobiological point of view, and constitutes a classical model system in neuroethology. The escape behaviour is triggered in orthopteroids by the detection of air-currents produced by approaching objects, allowing these insects to keep away from potential dangers. Nevertheless, escape behaviour has not been studied in terms of success. Moreover, an attacking predator is more than "air movement", it is also a visible moving entity. The sensory basis of predator detection is thus probably more complex than the perception of air movement by the cerci. We have used a piston mimicking an attacking running predator for a quantitative evaluation of the escape behaviour of wood crickets Nemobius sylvestris. The movement of the piston not only generates air movement, but it can be seen by the insect and can touch it as a natural predator. This procedure allowed us to study the escape behaviour in terms of detection and also in terms of success. Our results showed that 5-52% of crickets that detected the piston thrust were indeed touched. Crickets escaped to stimulation from behind better than to a stimulation from the front, even though they detected the approaching object similarly in both cases. After cerci ablation, 48% crickets were still able to detect a piston approaching from behind (compared with 79% of detection in intact insects) and 24% crickets escaped successfully (compared with 62% in the case of intact insects). So, cerci play a major role in the detection of an approaching object but other mechanoreceptors or sensory modalities are implicated in this detection. 
It cannot be concluded that other sensory modalities participate in the behaviour of intact animals; rather, in the absence of cerci, other sensory modalities can partially mediate the behaviour. Nevertheless, neither antennae nor eyes seem to be used for detecting approaching objects, as their inactivation did not reduce the crickets' detection and escape abilities in the presence of cerci. Copyright © 2011 Elsevier Ltd. All rights reserved.
System and method for moving a probe to follow movements of tissue
NASA Technical Reports Server (NTRS)
Feldstein, C.; Andrews, T. W.; Crawford, D. W.; Cole, M. A. (Inventor)
1981-01-01
An apparatus is described for moving a probe that engages moving living tissue such as a heart or an artery that is penetrated by the probe, which moves the probe in synchronism with the tissue to maintain the probe at a constant location with respect to the tissue. The apparatus includes a servo positioner which moves a servo member to maintain a constant distance from a sensed object while applying very little force to the sensed object, and a follower having a stirrup at one end resting on a surface of the living tissue and another end carrying a sensed object adjacent to the servo member. A probe holder has one end mounted on the servo member and another end which holds the probe.
Certainty grids for mobile robots
NASA Technical Reports Server (NTRS)
Moravec, H. P.
1987-01-01
A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects.
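The core of the certainty grid representation, a per-cell occupancy belief updated incrementally from noisy readings, can be sketched in a few lines. This toy uses a Bayesian log-odds update on a 1-D grid; the class and its trivial inverse sensor model are illustrative assumptions, not Moravec's sonar model.

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

class CertaintyGrid:
    """Minimal 1-D certainty grid: each cell holds the log-odds of being
    occupied, updated incrementally from noisy sensor readings."""

    def __init__(self, n_cells, prior=0.5):
        self.log_odds = [logit(prior)] * n_cells

    def update(self, cell, p_occupied_given_reading):
        # Bayes update in log-odds form: independent readings simply add,
        # so multiple measurements sharpen the map feature.
        self.log_odds[cell] += logit(p_occupied_given_reading)

    def probability(self, cell):
        return 1.0 / (1.0 + math.exp(-self.log_odds[cell]))

grid = CertaintyGrid(8)
for _ in range(3):           # three sonar returns hitting cell 5
    grid.update(5, 0.8)
p = grid.probability(5)      # sharper than any single 0.8 reading
```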
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
NASA Astrophysics Data System (ADS)
Kaplan, M. L.; van Cleve, J. E.; Alcock, C.
2003-12-01
Detection and characterization of the small bodies of the outer solar system presents unique challenges to terrestrially-based sensing systems, principally the inverse 4th power decrease of reflected and thermal signals with target distance from the Sun. These limits are surpassed by new techniques [1,2,3] employing star-object occultation event sensing, which are capable of detecting sub-kilometer objects in the Kuiper Belt and Oort cloud. This poster will present an instrument and space mission concept based on adaptations of the NASA Discovery Kepler program currently in development at Ball Aerospace and Technologies Corp. Instrument technologies to enable this space science mission are being pursued and will be described. In particular, key attributes of an optimized payload include the ability to provide: 1) coarse spectral resolution (using an objective spectrometer approach); 2) wide FOV, simultaneous object monitoring (up to 150,000 stars, employing select data regions within a large focal plane mosaic); 3) fast temporal frame integration and readout architectures (10 to 50 msec for each monitored object); and 4) real-time, intelligent change detection processing (to limit raw data volumes). The Minor Body Surveyor combines the focal plane and processing technology elements into a densely packaged format to support general space mission issues of mass and power consumption, as well as telemetry resources. Mode flexibility is incorporated into the real-time processing elements to allow for either temporal (occultations) or spatial (moving targets) change detection. In addition, a basic image capture mode is provided for general pointing and field reference measurements. The overall space mission architecture is described as well. [1] M. E. Bailey, "Can 'Invisible' Bodies be Observed in the Solar System?", Nature, 259:290, January 1976. [2] T. S. Axelrod, C. Alcock, K. H. Cook, and H.-S. Park, "A Direct Census of the Oort Cloud with a Robotic Telescope", in ASP Conf. Ser. 34: Robotic Telescopes in the 1990s, pages 171-181, 1992. [3] F. Roques and M. Moncuquet, "A Detection Method for Small Kuiper Belt Objects: The Search for Stellar Occultations", Icarus, 147:530-544, October 2000.
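The real-time temporal change detection mentioned in item 4 can be pictured as a per-star dip detector running over a stream of fast photometric frames: flag the frames where a star's flux drops sharply below its baseline. The sketch below is a deliberately naive stand-in (median baseline, fixed fractional drop); the function and thresholds are invented for illustration.

```python
def detect_occultations(flux, drop_fraction=0.2):
    """Flag frames whose flux falls more than `drop_fraction` below the
    median baseline -- a toy version of per-star temporal change
    detection used to limit downlinked data volume."""
    baseline = sorted(flux)[len(flux) // 2]  # robust baseline (median)
    return [i for i, f in enumerate(flux)
            if f < (1.0 - drop_fraction) * baseline]

# A steady star with a single occultation-like dip at frame 6:
curve = [100.0] * 12
curve[6] = 60.0
events = detect_occultations(curve)  # → [6]
```

Only the flagged frames (plus some context) would need to be kept, which is the point of doing the change detection onboard rather than downlinking 10-50 ms frames for 150,000 stars.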
Shallow water imaging sonar system for environmental surveying. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-05-01
The scope of this research is to develop a shallow water sonar system designed to detect and map the location of objects such as hazardous wastes or discarded ordnance in coastal waters. The system will use high frequency wide-bandwidth imaging sonar, mounted on a moving platform towed behind a boat, to detect and identify objects on the sea bottom. Resolved images can be obtained even if the targets are buried in an overlayer of silt. The specific technical objective of this research was to develop and test a prototype system that is capable of (1) scanning at high speeds (up to 10 m/s), even in shallow water (depth to ten meters), without motion blurring or loss of resolution; and (2) producing images of the bottom structure that are detailed enough for unambiguous detection of objects as small as 15 cm, even if they are buried up to 30 cm deep in silt or sand. The critical technology uses a linear FM (LFM) or similar complex waveform, which has a high bandwidth for good range resolution, with a long pulse length for good Doppler resolution. The long-duration signal deposits more energy on target than a narrower pulse, which increases the signal-to-noise ratio and signal-to-clutter ratio. This in turn allows the use of cheap, lightweight, low power, piezoelectric transducers in the 30-500 kHz range.
Online phase measuring profilometry for rectilinear moving object by image correction
NASA Astrophysics Data System (ADS)
Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin
2015-11-01
In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. While the object is rectilinearly moving online, the size and pixel position differences of the object in different captured deformed patterns do not meet the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This method makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
Image Analysis Based on Soft Computing and Applied on Space Shuttle During the Liftoff Process
NASA Technical Reports Server (NTRS)
Dominquez, Jesus A.; Klinko, Steve J.
2007-01-01
Imaging techniques based on Soft Computing (SC) and developed at Kennedy Space Center (KSC) have been implemented on a variety of prototype applications related to the safe operation of the Space Shuttle during the liftoff process. These SC-based prototype applications include detection and tracking of moving Foreign Object Debris (FOD) during the Space Shuttle liftoff, visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad, and visual detection of distant birds approaching the Space Shuttle launch pad. This SC-based image analysis capability developed at KSC was also used to analyze images acquired during the accident of the Space Shuttle Columbia and estimate the trajectory and velocity of the foam that caused the accident.
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
Preliminary investigation of motion requirements for the simulation of helicopter hover tasks
NASA Technical Reports Server (NTRS)
Parrish, R. V.
1980-01-01
Data are presented from a preliminary experiment which attempted to define a helicopter hover task that would allow the detection of objectively-measured differences in fixed-base/moving-base simulator performance. The addition of heave, pitch, and roll movement of a ship at sea to the hover task, by means of an adaptation of a simulator g-seat, potentially fulfills the desired definition. The feasibility of g-seat substitution for platform motion can be investigated utilizing this task.
Mach Cones in a Coulomb Lattice and a Dusty Plasma
NASA Astrophysics Data System (ADS)
Samsonov, D.; Goree, J.; Ma, Z. W.; Bhattacharjee, A.; Thomas, H. M.; Morfill, G. E.
1999-11-01
Mach cones, or V-shaped disturbances created by supersonic objects, have been detected in a two-dimensional Coulomb crystal. Electrically charged microspheres levitated in a glow-discharge plasma formed a dusty plasma, with particles arranged in a hexagonal lattice in a horizontal plane. Beneath this lattice plane, a sphere moved faster than the lattice sound speed. Mach cones were double, first compressive then rarefactive, due to the strongly coupled crystalline state. Molecular dynamics simulations using a Yukawa potential also show multiple Mach cones.
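The geometry behind such disturbances is the standard Mach relation: a disturbance moving at speed v > c trails a cone whose half-angle mu satisfies sin(mu) = c/v. As a worked check (not tied to the experiment's actual particle speeds):

```python
import math

def mach_cone_half_angle(speed, sound_speed):
    """Half-opening angle mu (degrees) of the Mach cone behind a
    supersonic disturbance: sin(mu) = c / v, defined only for v > c."""
    if speed <= sound_speed:
        raise ValueError("no Mach cone below the sound speed")
    return math.degrees(math.asin(sound_speed / speed))

# A sphere moving at twice the lattice sound speed opens a 30-degree cone:
mu = mach_cone_half_angle(2.0, 1.0)
```

Measuring the cone angle therefore gives the lattice sound speed directly once the sphere's speed is known, which is one reason Mach cones are useful diagnostics in dusty plasmas.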
Wired and Wireless Camera Triggering with Arduino
NASA Astrophysics Data System (ADS)
Kauhanen, H.; Rönnholm, P.
2017-10-01
Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well accurately encoding direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.
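The logarithmic brightness-change detection performed by the neuromorphic camera can be imitated on ordinary frames: a pixel emits an ON or OFF event when its log-intensity changes by more than a contrast threshold. The frame-based simplification and all names below are assumptions for illustration; real DVS pixels operate asynchronously, per pixel, against the intensity at their last event.

```python
import math

def events_from_frames(prev, curr, threshold=0.2):
    """Per-pixel event generation in the DVS style: a pixel fires an
    ON (+1) or OFF (-1) event when its log-intensity changes by more
    than `threshold` (simplified here to frame-to-frame changes)."""
    out = []
    for i, (a, b) in enumerate(zip(prev, curr)):
        delta = math.log(b) - math.log(a)
        if delta > threshold:
            out.append((i, +1))
        elif delta < -threshold:
            out.append((i, -1))
    return out

# A bright object appears at pixel 2 while pixel 4 dims:
ev = events_from_frames([10, 10, 10, 10, 40], [10, 10, 30, 10, 10])
```

Each event could then be sonified at the corresponding spatial location, which is essentially the mapping the augmented-reality system performs.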
Aksiuta, E F; Ostashev, A V; Sergeev, E V; Aksiuta, V E
1997-01-01
The methods of the information (entropy) error theory were used to make a metrological analysis of well-known commercial measuring systems for timing an anticipative reaction (AR) to the position of a moving object, based on electromechanical, gas-discharge, and electronic principles. The required measurement accuracy was found to be achieved only by systems based on the electronic principle of moving-object simulation and AR measurement.
Make the First Move: How Infants Learn about Self-Propelled Objects
ERIC Educational Resources Information Center
Rakison, David H.
2006-01-01
In 3 experiments, the author investigated 16- to 20-month-old infants' attention to dynamic and static parts in learning about self-propelled objects. In Experiment 1, infants were habituated to simple noncausal events in which a geometric figure with a single moving part started to move without physical contact from an identical geometric figure…
A method of real-time detection for distant moving obstacles by monocular vision
NASA Astrophysics Data System (ADS)
Jia, Bao-zhi; Zhu, Ming
2013-12-01
In this paper, we propose an approach for the detection of distant moving obstacles, such as cars and bicycles, by a monocular camera cooperating with ultrasonic sensors in a low-cost configuration. We aim at detecting distant obstacles that move toward our autonomous navigation car in order to raise an alarm and keep away from them. A frame-differencing method is applied to find obstacles after compensation of the camera's ego-motion. Meanwhile, each obstacle is separated from the others in an independent area and given a confidence level to indicate whether it is coming closer. The results on an open dataset and on our own autonomous navigation car have proved that the method is effective for real-time detection of distant moving obstacles.
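The frame-differencing core of such a detector is easy to sketch; what is omitted here, and what the paper's method depends on, is the prior warping that compensates the camera's ego-motion before the two frames are compared. All names below are invented for the example.

```python
def moving_pixels(frame_a, frame_b, threshold=20):
    """Binary change mask by absolute frame differencing -- shown
    without the ego-motion compensation step that would first warp
    frame_a into frame_b's viewpoint."""
    return [[1 if abs(p - q) > threshold else 0
             for p, q in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

def bounding_box(mask):
    # Group the changed pixels of one obstacle into a bounding box
    # (assumes the mask contains exactly one connected blob).
    coords = [(r, c) for r, row in enumerate(mask)
              for c, v in enumerate(row) if v]
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

a = [[0] * 5 for _ in range(4)]            # previous frame (empty road)
b = [[0] * 5 for _ in range(4)]
b[1][2] = b[1][3] = b[2][2] = 200          # a small approaching object
box = bounding_box(moving_pixels(a, b))    # → (1, 2, 2, 3)
```

Growth of the box area across successive frames is one simple way to assign the "coming closer" confidence level the abstract mentions.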
Detection of Copy-Rotate-Move Forgery Using Zernike Moments
NASA Astrophysics Data System (ADS)
Ryu, Seung-Jin; Lee, Min-Jeong; Lee, Heung-Kyu
As forgeries have become common, the importance of forgery detection has greatly increased. Copy-move forgery, one of the most commonly used methods, copies a part of an image and pastes it into another part of the same image. In this paper, we propose a detection method for copy-move forgery that localizes duplicated regions using Zernike moments. Since the magnitude of Zernike moments is algebraically invariant to rotation, the proposed method can detect a forged region even if it has been rotated. Our scheme is also resilient to intentional distortions such as additive white Gaussian noise, JPEG compression, and blurring. Experimental results demonstrate that the proposed scheme is well suited to identifying regions forged by copy-rotate-move operations.
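The rotation invariance the method relies on can be checked directly: the magnitude of a Zernike moment computed over an image block is unchanged when the block is rotated. A minimal numpy sketch (the moment order, block size, and normalization are illustrative choices, not the paper's exact feature set):

```python
import numpy as np
from math import factorial

def zernike_magnitude(block, n, m):
    """|Z_nm| of a square block over the unit disk. The magnitude is
    invariant to rotation of the block, which is what makes Zernike
    moments suitable for copy-rotate-move matching."""
    N = block.shape[0]
    c = np.linspace(-1.0, 1.0, N)
    x, y = np.meshgrid(c, c)
    rho = np.sqrt(x**2 + y**2)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0
    # radial polynomial R_nm (n - |m| must be even and non-negative)
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        coef = ((-1)**k * factorial(n - k)
                / (factorial(k)
                   * factorial((n + abs(m)) // 2 - k)
                   * factorial((n - abs(m)) // 2 - k)))
        R += coef * rho**(n - 2 * k)
    V = R * np.exp(-1j * m * theta)   # conjugate basis function
    Z = (n + 1) / np.pi * np.sum(block * V * mask)
    return abs(Z)
```

In a full detector, such magnitudes would be computed for overlapping blocks and near-identical feature vectors at different locations flagged as duplicated regions.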
Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.
Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung
2018-02-03
A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.
Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.
Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P
2016-04-01
Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested whether this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite the lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.
Vision-based control for flight relative to dynamic environments
NASA Astrophysics Data System (ADS)
Causey, Ryan Scott
The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects with a monocular camera configuration. The process consists of several stages of image processing: detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation.
Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature point computations and the effects of uncertainty. The third simulation demonstrates the open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
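The motion-field segmentation step described above, fitting a dominant motion by least squares and flagging vectors that disobey it, can be sketched as follows (the affine motion model and residual threshold are assumptions for illustration, not the dissertation's exact formulation):

```python
import numpy as np

def segment_independent_motion(pts, flow, tau=6.0):
    """Fit a dominant affine motion field to sparse flow vectors by least
    squares and flag vectors that deviate from it as independently moving."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y])
    # solve A @ coef ~= flow for both flow components at once
    coef, *_ = np.linalg.lstsq(A, flow, rcond=None)
    residual = np.linalg.norm(flow - A @ coef, axis=1)
    return residual > tau   # True = independently moving point
```

A single least-squares fit is biased by the outliers themselves, so practical systems often iterate: fit, discard the flagged points, and refit on the remainder.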
Capaciflector-guided mechanisms
NASA Technical Reports Server (NTRS)
Vranish, John M. (Inventor)
1996-01-01
A plurality of capaciflector proximity sensors, one or more of which may be overlaid on each other, and at least one shield are mounted on a device guided by a robot so as to see a designated surface, hole or raised portion of an object, for example, in three dimensions. Individual current-measuring voltage follower circuits interface the sensors and shield to a common AC signal source. As the device approaches the object, the sensors respond by a change in the currents therethrough. The currents are detected by the respective current-measuring voltage follower circuits with the outputs thereof being fed to a robot controller. The device is caused to move under robot control in a predetermined pattern over the object while directly referencing each other without any offsets, whereupon by a process of minimization of the sensed currents, the device is dithered or wiggled into position for a soft touchdown or contact without any prior contact with the object.
Object Tracking and Target Reacquisition Based on 3-D Range Data for Moving Vehicles
Lee, Jehoon; Lankton, Shawn; Tannenbaum, Allen
2013-01-01
In this paper, we propose an approach for tracking an object of interest based on 3-D range data. We employ particle filtering and active contours to simultaneously estimate the global motion of the object and its local deformations. The proposed algorithm takes advantage of range information to deal with the challenging (but common) situation in which the tracked object disappears from the image domain entirely and reappears later. To cope with this problem, a method based on principal component analysis (PCA) of shape information is proposed. In the proposed method, if the target disappears out of frame, a shape similarity energy is used to detect target candidates that match a template shape learned online from previously observed frames. Thus, we require no a priori knowledge of the target’s shape. Experimental results show the practical applicability and robustness of the proposed algorithm in realistic tracking scenarios. PMID:21486717
NASA Astrophysics Data System (ADS)
Hartung, Christine; Spraul, Raphael; Schuchert, Tobias
2017-10-01
Wide area motion imagery (WAMI) acquired by an airborne multicamera sensor enables continuous monitoring of large urban areas. Each image can cover regions of several square kilometers and contain thousands of vehicles. Reliable vehicle tracking in this imagery is an important prerequisite for surveillance tasks, but remains challenging due to low frame rate and small object size. Most WAMI tracking approaches rely on moving object detections generated by frame differencing or background subtraction. These detection methods fail when objects slow down or stop. Recent approaches for persistent tracking compensate for missing motion detections by combining a detection-based tracker with a second tracker based on appearance or local context. In order to avoid the additional complexity introduced by combining two trackers, we employ an alternative single-tracker framework that is based on multiple hypothesis tracking and recovers missing motion detections with a classifier-based detector. We integrate an appearance-based similarity measure, merge handling, vehicle-collision tests, and clutter handling to adapt the approach to the specific context of WAMI tracking. We apply the tracking framework on a region of interest of the publicly available WPAFB 2009 dataset for quantitative evaluation; a comparison to other persistent WAMI trackers demonstrates state-of-the-art performance of the proposed approach. Furthermore, we analyze in detail the impact of different object detection methods and detector settings on the quality of the output tracking results. For this purpose, we choose four different motion-based detection methods that vary in detection performance and computation time to generate the input detections.
As detector parameters can be adjusted to achieve different precision and recall performance, we combine each detection method with different detector settings that yield (1) high precision and low recall, (2) high recall and low precision, and (3) best f-score. Comparing the tracking performance achieved with all generated sets of input detections allows us to quantify the sensitivity of the tracker to different types of detector errors and to derive recommendations for detector and parameter choice.
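The "best f-score" operating point mentioned above balances the two error types; for a given detector setting it is computed from true positives, false positives, and false negatives (this is the standard F1 definition, not anything specific to this paper):

```python
def f_score(tp, fp, fn):
    """F1 combines detector precision and recall; sweeping a detector
    threshold and keeping the best F1 yields the 'best f-score' setting."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

High-precision/low-recall and high-recall/low-precision settings trade one term against the other; the best-F1 setting sits between them.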
Design of remote control alarm system by microwave detection
NASA Astrophysics Data System (ADS)
Wang, Junli
2018-04-01
A microwave-detection remote-control alarm system is designed, composed of a microwave detector, a radio receiving/transmitting module, and a digital encoding/decoding IC. When an object moves into the surveillance area, the microwave detector generates a control signal that starts the transmitting system. A radio control signal is broadcast by the transmitting module; once the signal is received, it is processed by the receiver circuits, which sound an alarm to alert the monitoring personnel. The whole device has a modular configuration; it offers stable operating frequency, reliability, and adjustment-free operation, and it is suitable for many kinds of applications within a range of 100 m.
The technology of grating laser Doppler velocimeter for measuring transverse velocity of objects
NASA Astrophysics Data System (ADS)
Zhang, Shu; Lu, Guangfeng; Fan, Zhenfang; Luo, Hui
2014-12-01
In order to lower the production cost of a laser Doppler velocimeter (LDV) and simplify the system structure, a grating Doppler detection system has been designed. The LDV operates in a differential measurement mode: two beams of light diffracted by the grating are mixed, and the beat frequency is detected by a detector when the grating is moving. The fundamentals are introduced and partial experimental results of the system are given. The results indicate that the experimental values agree with the theoretical values. Errors are analyzed and the main factors affecting accuracy are discussed. Upon inspection, this inexpensive and simple LDV proves efficient and feasible.
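The velocity retrieval behind such a system reduces to scaling the measured beat frequency by the grating period. The sketch below assumes the common difference-mode relation f = 2v/d between beat frequency f, transverse velocity v, and grating period d; the exact constant depends on the optical layout and diffraction orders used, and is an assumption here rather than a detail taken from the paper:

```python
def transverse_velocity(beat_freq_hz, grating_period_m):
    """Transverse velocity from the measured beat frequency, assuming a
    difference-mode grating LDV mixing the +1 and -1 diffraction orders,
    for which f = 2*v/d (layout-dependent assumption)."""
    return beat_freq_hz * grating_period_m / 2.0
```

For example, a 10 um grating and a 200 kHz beat would correspond to 1 m/s under this model.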
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data are crucial steps towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide an accurate reference for monitoring purposes; (ii) to develop a fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method based on the combination of the generalized likelihood ratio test (GLRT) and the exponentially weighted moving average (EWMA) will be developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose to develop a GLRT-based EWMA fault detection method that will be able to detect changes in the values of certain air quality variables; (iii) to develop a fault isolation and identification method that allows the fault source(s) to be identified so that appropriate corrective actions can be applied. In this paper, a reconstruction approach based on a Midpoint-Radii Principal Component Analysis (MRPCA) model will be developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper will be validated using real air quality data (such as particulate matter, ozone, and nitrogen and carbon oxide measurements). Copyright © 2017 Elsevier Inc. All rights reserved.
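The EWMA part of the proposed GLRT-EWMA detector can be sketched as a control chart on the residual sequence (the baseline window length, smoothing factor, and control-limit width below are illustrative choices, not the paper's):

```python
import numpy as np

def ewma_alarm(x, lam=0.2, L=3.0, baseline=20):
    """EWMA control chart on a residual sequence: smooth the measurements
    and raise an alarm whenever the statistic leaves the +/- L*sigma_z
    control limits estimated from an initial fault-free window."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x[:baseline].mean(), x[:baseline].std()
    sz = sigma * np.sqrt(lam / (2.0 - lam))   # asymptotic EWMA std
    z, alarms = mu, []
    for xt in x:
        z = lam * xt + (1.0 - lam) * z
        alarms.append(abs(z - mu) > L * sz)
    return np.array(alarms)
```

Because the statistic accumulates evidence over time, small persistent shifts that a raw threshold would miss eventually push the EWMA past its limits.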
On the Detectability of Planet X with LSST
NASA Astrophysics Data System (ADS)
Trilling, David E.; Bellm, Eric C.; Malhotra, Renu
2018-06-01
Two planetary mass objects in the far outer solar system—collectively referred to here as Planet X— have recently been hypothesized to explain the orbital distribution of distant Kuiper Belt Objects. Neither planet is thought to be exceptionally faint, but the sky locations of these putative planets are poorly constrained. Therefore, a wide area survey is needed to detect these possible planets. The Large Synoptic Survey Telescope (LSST) will carry out an unbiased, large area (around 18000 deg2), deep (limiting magnitude of individual frames of 24.5) survey (the “wide-fast-deep (WFD)” survey) of the southern sky beginning in 2022, and it will therefore be an important tool in searching for these hypothesized planets. Here, we explore the effectiveness of LSST as a search platform for these possible planets. Assuming the current baseline cadence (which includes the WFD survey plus additional coverage), we estimate that LSST will confidently detect or rule out the existence of Planet X in 61% of the entire sky. At orbital distances up to ∼75 au, Planet X could simply be found in the normal nightly moving object processing; at larger distances, it will require custom data processing. We also discuss the implications of a nondetection of Planet X in LSST data.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
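The reduction to linear programming works because, under the L-infinity norm, minimizing the distance between points constrained to two convex hulls has a linear objective and linear constraints. A sketch using scipy's LP solver, with each polyhedron given by its vertex list (an illustrative formulation, not the paper's exact one):

```python
import numpy as np
from scipy.optimize import linprog

def linf_distance(V, W):
    """Minimum L-infinity distance between conv(V) and conv(W) as an LP:
    points p = V^T a and q = W^T b are convex combinations of vertices,
    and we minimize t subject to |p_d - q_d| <= t in every dimension d."""
    m, dim = V.shape
    n = W.shape[0]
    c = np.zeros(m + n + 1)
    c[-1] = 1.0                       # objective: minimize t
    A_ub, b_ub = [], []
    for d in range(dim):              # p_d - q_d <= t  and  q_d - p_d <= t
        A_ub.append(np.concatenate([V[:, d], -W[:, d], [-1.0]]))
        A_ub.append(np.concatenate([-V[:, d], W[:, d], [-1.0]]))
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, m + n + 1))   # convex-combination weights sum to 1
    A_eq[0, :m] = 1.0
    A_eq[1, m:m + n] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0, 1.0], bounds=(0, None))
    return res.fun
```

A distance of zero indicates the hulls intersect, which is exactly the collision condition the planner checks.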
Murray-Moraleda, Jessica R.; Lohman, Rowena
2010-01-01
The Southern California Earthquake Center (SCEC) is a community of researchers at institutions worldwide working to improve understanding of earthquakes and mitigate earthquake risk. One of SCEC's priority objectives is to “develop a geodetic network processing system that will detect anomalous strain transients.” Given the growing number of continuously recording geodetic networks consisting of hundreds of stations, an automated means for systematically searching data for transient signals, especially in near real time, is critical for network operations, hazard monitoring, and event response. The SCEC Transient Detection Test Exercise began in 2008 to foster an active community of researchers working on this problem, explore promising methods, and combine effective approaches in novel ways. A workshop was held in California to assess what has been learned thus far and discuss areas of focus as the project moves forward.
Meteor studies in the framework of the JEM-EUSO program
NASA Astrophysics Data System (ADS)
Abdellaoui, G.; Abe, S.; Acheli, A.; Adams, J. H.; Ahmad, S.; Ahriche, A.; Albert, J.-N.; Allard, D.; Alonso, G.; Anchordoqui, L.; Andreev, V.; Anzalone, A.; Aouimeur, W.; Arai, Y.; Arsene, N.; Asano, K.; Attallah, R.; Attoui, H.; Ave Pernas, M.; Bacholle, S.; Bakiri, M.; Baragatti, P.; Barrillon, P.; Bartocci, S.; Batsch, T.; Bayer, J.; Bechini, R.; Belenguer, T.; Bellotti, R.; Belov, A.; Belov, K.; Benadda, B.; Benmessai, K.; Berlind, A. A.; Bertaina, M.; Biermann, P. L.; Biktemerova, S.; Bisconti, F.; Blanc, N.; Błȩcki, J.; Blin-Bondil, S.; Bobik, P.; Bogomilov, M.; Bonamente, M.; Boudaoud, R.; Bozzo, E.; Briggs, M. S.; Bruno, A.; Caballero, K. S.; Cafagna, F.; Campana, D.; Capdevielle, J.-N.; Capel, F.; Caramete, A.; Caramete, L.; Carlson, P.; Caruso, R.; Casolino, M.; Cassardo, C.; Castellina, A.; Castellini, G.; Catalano, C.; Catalano, O.; Cellino, A.; Chikawa, M.; Chiritoi, G.; Christl, M. J.; Connaughton, V.; Conti, L.; Cordero, G.; Crawford, H. J.; Cremonini, R.; Csorna, S.; Dagoret-Campagne, S.; De Donato, C.; de la Taille, C.; De Santis, C.; del Peral, L.; Di Martino, M.; Djemil, T.; Djenas, S. A.; Dulucq, F.; Dupieux, M.; Dutan, I.; Ebersoldt, A.; Ebisuzaki, T.; Engel, R.; Eser, J.; Fang, K.; Fenu, F.; Fernández-González, S.; Fernández-Soriano, J.; Ferrarese, S.; Finco, D.; Flamini, M.; Fornaro, C.; Fouka, M.; Franceschi, A.; Franchini, S.; Fuglesang, C.; Fujimoto, J.; Fukushima, M.; Galeotti, P.; García-Ortega, E.; Garipov, G.; Gascón, E.; Geary, J.; Gelmini, G.; Genci, J.; Giraudo, G.; Gonchar, M.; González Alvarado, C.; Gorodetzky, P.; Guarino, F.; Guehaz, R.; Guzmán, A.; Hachisu, Y.; Haiduc, M.; Harlov, B.; Haungs, A.; Hernández Carretero, J.; Hidber, W.; Higashide, K.; Ikeda, D.; Ikeda, H.; Inoue, N.; Inoue, S.; Isgrò, F.; Itow, Y.; Jammer, T.; Joven, E.; Judd, E. 
G.; Jung, A.; Jochum, J.; Kajino, F.; Kajino, T.; Kalli, S.; Kaneko, I.; Kang, D.; Kanouni, F.; Karadzhov, Y.; Karczmarczyk, J.; Karus, M.; Katahira, K.; Kawai, K.; Kawasaki, Y.; Kedadra, A.; Khales, H.; Khrenov, B. A.; Kim, Jeong-Sook; Kim, Soon-Wook; Kim, Sug-Whan; Kleifges, M.; Klimov, P. A.; Kolev, D.; Kreykenbohm, I.; Kudela, K.; Kurihara, Y.; Kusenko, A.; Kuznetsov, E.; Lacombe, M.; Lachaud, C.; Lahmar, H.; Lakhdari, F.; Larsson, O.; Lee, J.; Licandro, J.; Lim, H.; López Campano, L.; Maccarone, M. C.; Mackovjak, S.; Mahdi, M.; Maravilla, D.; Marcelli, L.; Marcos, J. L.; Marini, A.; Martens, K.; Martín, Y.; Martinez, O.; Masciantonio, G.; Mase, K.; Matev, R.; Matthews, J. N.; Mebarki, N.; Medina-Tanco, G.; Mehrad, L.; Mendoza, M. A.; Merino, A.; Mernik, T.; Meseguer, J.; Messaoud, S.; Micu, O.; Mimouni, J.; Miyamoto, H.; Miyazaki, Y.; Mizumoto, Y.; Modestino, G.; Monaco, A.; Monnier-Ragaigne, D.; Morales de los Ríos, J. A.; Moretto, C.; Morozenko, V. S.; Mot, B.; Murakami, T.; Nadji, B.; Nagano, M.; Nagata, M.; Nagataki, S.; Nakamura, T.; Napolitano, T.; Nardelli, A.; Naumov, D.; Nava, R.; Neronov, A.; Nomoto, K.; Nonaka, T.; Ogawa, T.; Ogio, S.; Ohmori, H.; Olinto, A. V.; Orleański, P.; Osteria, G.; Painter, W.; Panasyuk, M. I.; Panico, B.; Parizot, E.; Park, I. H.; Park, H. W.; Pastircak, B.; Patzak, T.; Paul, T.; Pennypacker, C.; Perdichizzi, M.; Pérez-Grande, I.; Perfetto, F.; Peter, T.; Picozza, P.; Pierog, T.; Pindado, S.; Piotrowski, L. W.; Piraino, S.; Placidi, L.; Plebaniak, Z.; Pliego, S.; Pollini, A.; Popescu, E. M.; Prat, P.; Prévôt, G.; Prieto, H.; Putis, M.; Rabanal, J.; Radu, A. A.; Rahmani, M.; Reardon, P.; Reyes, M.; Rezazadeh, M.; Ricci, M.; Rodríguez Frías, M. D.; Ronga, F.; Roth, M.; Rothkaehl, H.; Roudil, G.; Rusinov, I.; Rybczyński, M.; Sabau, M. D.; Sáez Cano, G.; Sagawa, H.; Sahnoune, Z.; Saito, A.; Sakaki, N.; Sakata, M.; Salazar, H.; Sanchez, J. C.; Sánchez, J. 
L.; Santangelo, A.; Santiago Crúz, L.; Sanz-Andrés, A.; Sanz Palomino, M.; Saprykin, O.; Sarazin, F.; Sato, H.; Sato, M.; Schanz, T.; Schieler, H.; Scotti, V.; Segreto, A.; Selmane, S.; Semikoz, D.; Serra, M.; Sharakin, S.; Shibata, T.; Shimizu, H. M.; Shinozaki, K.; Shirahama, T.; Siemieniec-Oziȩbło, G.; Sledd, J.; Słomińska, K.; Sobey, A.; Stan, I.; Sugiyama, T.; Supanitsky, D.; Suzuki, M.; Szabelska, B.; Szabelski, J.; Tahi, H.; Tajima, F.; Tajima, N.; Tajima, T.; Takahashi, Y.; Takami, H.; Takeda, M.; Takizawa, Y.; Talai, M. C.; Tenzer, C.; Tibolla, O.; Tkachev, L.; Tokuno, H.; Tomida, T.; Tone, N.; Toscano, S.; Traïche, M.; Tsenov, R.; Tsunesada, Y.; Tsuno, K.; Tymieniecka, T.; Uchihori, Y.; Unger, M.; Vaduvescu, O.; Valdés-Galicia, J. F.; Vallania, P.; Vankova, G.; Vigorito, C.; Villaseñor, L.; Vlcek, B.; von Ballmoos, P.; Vrabel, M.; Wada, S.; Watanabe, J.; Watanabe, S.; Watts, J., Jr.; Weber, M.; Weigand Muñoz, R.; Weindl, A.; Weiler, T. J.; Wibig, T.; Wiencke, L.; Wille, M.; Wilms, J.; Włodarczyk, Z.; Yamamoto, T.; Yamamoto, Y.; Yang, J.; Yano, H.; Yashin, I. V.; Yonetoku, D.; Yoshida, S.; Young, R.; Zgura, I. S.; Zotov, M. Yu.; Zuccaro Marchi, A.
2017-09-01
We summarize the state of the art of a program of UV observations from space of meteor phenomena, a secondary objective of the JEM-EUSO international collaboration. Our preliminary analysis indicates that JEM-EUSO, taking advantage of its large FOV and good sensitivity, should be able to detect meteors down to absolute magnitude close to 7. This means that JEM-EUSO should be able to record a statistically significant flux of meteors, including both sporadic ones, and events produced by different meteor streams. Being unaffected by adverse weather conditions, JEM-EUSO can also be a very important facility for the detection of bright meteors and fireballs, as these events can be detected even in conditions of very high sky background. In the case of bright events, moreover, exhibiting some persistence of the meteor train, preliminary simulations show that it should be possible to exploit the motion of the ISS itself and derive at least a rough 3D reconstruction of the meteor trajectory. Moreover, the observing strategy developed to detect meteors may also be applied to the detection of nuclearites, exotic particles whose existence has been suggested by some theoretical investigations. Nuclearites are expected to move at higher velocities than meteoroids, and to exhibit a wider range of possible trajectories, including particles moving upward after crossing the Earth. Some pilot studies, including the approved Mini-EUSO mission, a precursor of JEM-EUSO, are currently operational or in preparation. We are doing simulations to assess the performance of Mini-EUSO for meteor studies, while a few meteor events have been already detected using the ground-based facility EUSO-TA.
Pursuit tracks chase: exploring the role of eye movements in the detection of chasing
Träuble, Birgit
2015-01-01
We explore the role of eye movements in a chase detection task. Unlike previous studies, which focused on overall performance as indicated by response speed and chase detection accuracy, we decompose the search process into gaze events, such as smooth eye movements, and use a data-driven approach to describe these gaze events separately. We measured the eye movements of four human subjects engaged in a chase detection task displayed on a computer screen. The subjects were asked to detect two chasing rings among twelve other randomly moving rings. Using principal component analysis and support vector machines, we examined the template and classification images that describe various stages of the detection process. We showed that the subjects mostly search for pairs of rings that move one after another in the same direction at a distance of 3.5–3.8 degrees. To find such pairs, the subjects first looked for regions with a high ring density and then pursued the rings in that region. Most of these groups consisted of two rings. Three subjects preferred to pursue the pair as a single object, while the remaining subject pursued the group by alternating gaze between the two individual rings. In the discussion, we argue that subjects do not compare the movement of the pursued pair to a single preformed template that describes a chasing motion. Rather, subjects bring certain hypotheses about what motion may qualify as a chase and then, through feedback, learn to look for a motion pattern that maximizes their performance. PMID:26401454
A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP
Balduzzi, David; Tononi, Giulio
2012-01-01
In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
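The LIF-with-binary-synapses building block that the network relies on can be sketched in a few lines (the leak factor, threshold, and reset rule below are illustrative, not the paper's exact parameters):

```python
import numpy as np

def lif_step(v, spikes_in, w, v_th=1.0, leak=0.9):
    """One step of a leaky integrate-and-fire neuron with binary synapses:
    the membrane leaks, integrates the weighted input spikes, and
    fires/resets when it crosses threshold. w is a 0/1 weight vector."""
    v = leak * v + np.dot(w, spikes_in)
    fired = v >= v_th
    if fired:
        v = 0.0   # reset after the spike
    return v, fired
```

With binary weights, learning rules such as burst-STDP act by switching individual synapses on or off rather than by fine-grained weight updates, which is what makes the model compatible with digital neuromorphic hardware.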
Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
2013-01-01
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking the moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed nearby the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changed or the occlusion recovered and outperforms the traditional particle filter-based tracking methods. PMID:23843739
NASA Astrophysics Data System (ADS)
Dong, Xiabin; Huang, Xinsheng; Zheng, Yongbin; Bai, Shengjian; Xu, Wanying
2014-07-01
Infrared moving target detection is an important part of infrared technology. We introduce a novel method for detecting small moving infrared targets against complicated backgrounds, based on tracking interest points. First, Difference-of-Gaussians (DoG) filters are used to detect a group of interest points (including the moving targets). Second, a small-target tracking method inspired by the Human Visual System (HVS) tracks these interest points over several frames, yielding the correlations between interest points in the first and last frames. Last, a new clustering method, named R-means, is proposed to divide these interest points into two groups according to the correlations: target points and background points. In the experiments, the target-to-clutter ratio (TCR) and receiver operating characteristic (ROC) curves are computed to compare the performance of the proposed method with that of five sophisticated methods. The results show that the proposed method discriminates targets from clutter better and has a lower false-alarm rate than the existing moving target detection methods.
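A minimal sketch of the DoG interest-point stage described above; the filter scales, window size, and threshold here are illustrative assumptions, and the paper's HVS-inspired tracking and R-means clustering are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_interest_points(img, s1=1.0, s2=2.0, thresh=0.05):
    """Difference-of-Gaussians response; local maxima above thresh become
    interest points (candidate small targets plus background structure)."""
    dog = gaussian_filter(img, s1) - gaussian_filter(img, s2)
    peaks = (dog == maximum_filter(dog, size=5)) & (dog > thresh)
    return np.argwhere(peaks)  # (row, col) coordinates

# A single bright point target on a dark background should be recovered.
img = np.zeros((64, 64))
img[30, 40] = 1.0
pts = dog_interest_points(img)
```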
Depth-Based Detection of Standing-Pigs in Moving Noise Environments.
Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae
2017-11-29
In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm, but have not been reported yet. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
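The spatiotemporal interpolation step can be sketched as a temporal median fill of undefined depth values; this is a hedged illustration of the idea, not the authors' exact filter (the window size and the zero-means-undefined convention are assumptions):

```python
import numpy as np

def interpolate_moving_noise(frames, t, win=2):
    """Fill undefined (zero) depth pixels in frame t with the median of the
    valid values at the same pixel across a temporal window: a simple
    spatiotemporal interpolation in the spirit of the paper."""
    f = frames[t].astype(float).copy()
    stack = frames[max(0, t - win): t + win + 1].astype(float)
    stack[stack == 0] = np.nan            # treat 0 as 'undefined depth'
    med = np.nanmedian(stack, axis=0)
    holes = f == 0
    f[holes] = med[holes]
    return np.nan_to_num(f)

frames = np.full((5, 4, 4), 100.0)        # constant 100 mm depth scene
frames[2, 1, 1] = 0.0                     # one moving-noise pixel in frame 2
clean = interpolate_moving_noise(frames, 2)
```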
NASA Technical Reports Server (NTRS)
Arndt, G. Dickey (Inventor); Carl, James R. (Inventor)
2000-01-01
A portable system is provided that can determine, with three-dimensional resolution, the position of a buried object or an approximately positioned object that may move in space, air, or gas. The system has a plurality of receivers for detecting the signal from a target antenna and measuring its phase with respect to a reference signal. The relative permittivity and conductivity of the medium in which the object is located are used along with the measured phase signal to determine a distance between the object and each of the plurality of receivers. Knowing these distances, an iteration technique is provided for solving equations simultaneously to provide position coordinates. The system may also be used for tracking movement of an object within close range of the system by sampling and recording subsequent positions of the object. A dipole target antenna, when positioned adjacent to a buried object, may be energized using a separate transmitter which couples energy to the target antenna through the medium. The target antenna then preferably resonates at a different frequency, such as a second harmonic of the transmitter frequency.
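Once phase-derived distances to each receiver are known, the position can be solved for. The patent describes an iteration technique; a linearized least-squares solve, sketched below, illustrates the same geometry (the receiver layout and values are hypothetical):

```python
import numpy as np

def locate(receivers, dists):
    """Solve ||x - r_i|| = d_i for x by differencing each equation against
    the first receiver, which cancels the quadratic term and leaves a
    linear system 2(r_i - r_0) . x = b_i solvable by least squares."""
    r0, d0 = receivers[0], dists[0]
    A = 2.0 * (receivers[1:] - r0)
    b = d0**2 - dists[1:]**2 + np.sum(receivers[1:]**2, axis=1) - np.sum(r0**2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

rx = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.],
               [0., 0., 10.], [10., 10., 0.]])   # hypothetical receiver positions
true = np.array([3., 4., 5.])                    # buried object position
d = np.linalg.norm(rx - true, axis=1)            # distances from measured phase
est = locate(rx, d)
```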
Method and apparatus for hybrid position/force control of multi-arm cooperating robots
NASA Technical Reports Server (NTRS)
Hayati, Samad A. (Inventor)
1989-01-01
Two or more robotic arms having end effectors rigidly attached to an object to be moved are disclosed. A hybrid position/force control system is provided for driving each of the robotic arms. The object to be moved is represented as having a total mass that consists of the actual mass of the object plus the mass of the moveable arms that are rigidly attached to it. The arms are driven in a positive way by the hybrid control system to assure that each arm shares in the position/force applied to the object. The burden of actuation is shared by each arm in a non-conflicting way as the arms independently control the position of, and force upon, a designated point on the object.
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets, or apparent moving targets, creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide corresponding tests of MTF (resolution), SNR and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development, as well as to compare various systems by presenting identical scenes to the cameras in a repeatable way.
Detecting Moving Sources in Astronomical Images (Abstract)
NASA Astrophysics Data System (ADS)
Block, A.
2018-06-01
(Abstract only) Source detection in images is an important part of analyzing astronomical data. This project discusses an implementation of source detection in Python, as well as processes for performing photometry in Python. The application of these tools to searching for moving sources is also discussed.
NASA Technical Reports Server (NTRS)
Muller, Richard E. (Inventor); Mouroulis, Pantazis Z. (Inventor); Maker, Paul D. (Inventor); Wilson, Daniel W. (Inventor)
2003-01-01
The optical system of this invention is a unique type of imaging spectrometer, i.e., an instrument that can determine the spectra of all points in a two-dimensional scene. The general type of imaging spectrometer under which this invention falls has been termed a computed-tomography imaging spectrometer (CTIS). CTISs have the ability to perform spectral imaging of scenes containing rapidly moving objects or evolving features, hereafter referred to as transient scenes. This invention, a reflective CTIS with a unique two-dimensional reflective grating, can operate in any wavelength band from the ultraviolet through the long-wave infrared. Although this spectrometer is especially useful for rapidly occurring events, it is also useful for investigation of some slow-moving phenomena, as in the life sciences.
Early Knowledge of Object Motion: Continuity and Inertia.
ERIC Educational Resources Information Center
Spelke, Elizabeth; And Others
1994-01-01
Investigated whether infants infer that a hidden, freely moving object will move continuously and smoothly. Six- to 10- month olds inferred that the object's path would be connected and unobstructed, in accord with continuity. Younger infants did not infer this, in accord with inertia. At 8 and 10 months, knowledge of inertia emerged but remained…
Real time automated inspection
Fant, Karl M.; Fundakowski, Richard A.; Levitt, Tod S.; Overland, John E.; Suresh, Bindinganavle R.; Ulrich, Franz W.
1985-01-01
A method and apparatus relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges are segmented out by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections.
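The edge-enhance, threshold, and segment pipeline above can be sketched as follows; Sobel gradients and connected-component labeling stand in for the patent's interval matching and bin tracking, which are not reproduced here:

```python
import numpy as np
from scipy.ndimage import sobel, label

def segment_defects(img, thresh):
    """Edge-enhance with Sobel gradients, threshold to a binary edge map,
    then label connected components as candidate surface objects."""
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    mask = grad > thresh
    labels, n = label(mask)
    return labels, n

img = np.zeros((32, 32))
img[10:20, 10:20] = 1.0                 # one bright 'imperfection' on the slab
labels, n = segment_defects(img, thresh=1.0)
```

Feature extraction and classification of each labeled object would follow as further stages.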
Method and apparatus for predicting the direction of movement in machine vision
NASA Technical Reports Server (NTRS)
Lawton, Teri B. (Inventor)
1992-01-01
A computer-simulated cortical network is presented. The network is capable of computing the visibility of shifts in the direction of movement. Additionally, the network can compute the following: (1) the magnitude of the position difference between the test and background patterns; (2) localized contrast differences at different spatial scales analyzed by computing temporal gradients of the difference and sum of the outputs of paired even- and odd-symmetric bandpass filters convolved with the input pattern; and (3) the direction of a test pattern moved relative to a textured background. The direction of movement of an object in the field of view of a robotic vision system is detected in accordance with nonlinear Gabor function algorithms. The movement of objects relative to their background is used to infer the 3-dimensional structure and motion of object surfaces.
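The paired even- and odd-symmetric bandpass filters can be illustrated with a quadrature Gabor pair. The minimal 1D sketch below infers the sign of motion direction from the phase change between two frames; the filter frequency, size, and the two-frame comparison are illustrative assumptions, not the patent's algorithm:

```python
import numpy as np

def gabor_pair(n=21, f=0.1, sigma=4.0):
    """Even (cosine) and odd (sine) bandpass filters sharing one Gaussian envelope."""
    x = np.arange(n) - n // 2
    env = np.exp(-x**2 / (2 * sigma**2))
    return env * np.cos(2 * np.pi * f * x), env * np.sin(2 * np.pi * f * x)

def direction_sign(frame0, frame1, f=0.1):
    """Sign of the phase change between the quadrature responses of two
    frames: +1 when the pattern moved rightward, -1 when leftward."""
    even, odd = gabor_pair(f=f)
    e0, o0 = np.convolve(frame0, even, 'same'), np.convolve(frame0, odd, 'same')
    e1, o1 = np.convolve(frame1, even, 'same'), np.convolve(frame1, odd, 'same')
    return np.sign(np.sum(e1 * o0 - e0 * o1))

x = np.arange(256)
frame0 = np.sin(2 * np.pi * 0.1 * x)
frame1 = np.sin(2 * np.pi * 0.1 * (x - 2))   # pattern shifted right by 2 samples
```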
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
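The track-before-detect idea of basing detection on multiple frames can be illustrated by shift-and-sum integration along a hypothesized constant velocity; this is a generic sketch of the principle, not Space Computer Corporation's algorithm:

```python
import numpy as np

def shift_and_sum(frames, vy, vx):
    """Integrate frames along a hypothesized velocity so a weak target adds
    coherently while frame-to-frame noise averages down by sqrt(T)."""
    T, H, W = frames.shape
    acc = np.zeros((H, W))
    for t, f in enumerate(frames):
        acc += np.roll(f, (-t * vy, -t * vx), axis=(0, 1))
    return acc / T

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32
frames = rng.normal(0.0, 1.0, (T, H, W))     # sensor noise, sigma = 1
for t in range(T):                           # target moving at (1, 1) px/frame
    frames[t, 5 + t, 5 + t] += 3.0
acc = shift_and_sum(frames, 1, 1)            # test the matching velocity
```

In practice the accumulation is repeated over a bank of candidate velocities and the maximum response is thresholded.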
Object acquisition and tracking for space-based surveillance. Final report, Dec 88-May 90
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1991-11-27
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase I) and N00014-89-C-0015 (Phase II). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time-dependent, object-dependent, and data-dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
Examining the nature of very-high-energy gamma-ray emission from the AGN PKS 1222+216 and 3C 279
NASA Astrophysics Data System (ADS)
Price, Sharleen; Brill, Ari; Mukherjee, Reshmi; VERITAS
2018-01-01
Blazars are a type of active galactic nuclei (AGN) that emit jets of ionized matter which move towards the Earth at relativistic speeds. In this research we carried out a study of two objects, 3C 279 and PKS 1222+216, which belong to the subset of blazars known as FSRQs (flat spectrum radio quasars), the most powerful TeV-detected sources at gamma-ray energies with bolometric luminosities exceeding 1048 erg/s. The high-energy emission of quasars peaks in the MeV-GeV band, making these sources very rarely detectable in the TeV energy range. In fact, only six FSRQs have ever been detected in this range by very-high-energy gamma-ray telescopes. We will present results from observing campaigns on 3C 279 in 2014 and 2016, when the object was detected in high flux states by Fermi-LAT. Observations include simultaneous coverage with the Fermi-LAT satellite and the VERITAS ground-based array spanning four decades in energy from 100 MeV to 1 TeV. We will also report VERITAS observations of PKS 1222+216 between 2008 and 2017. The detection/non-detection of TeV emission during flaring episodes at MeV energies will further contribute to our understanding of particle acceleration and gamma-ray emission mechanisms in blazar jets.
Acoustic system for material transport
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Trinh, E. H.; Wang, T. G.; Elleman, D. D.; Jacobi, N. (Inventor)
1983-01-01
An object within a chamber is acoustically moved by applying wavelengths of different modes to the chamber to move the object between pressure wells formed by the modes. In one system, the object is placed in one end of the chamber while a resonant mode, applied along the length of the chamber, produces a pressure well at that location. The frequency is then switched to a second mode that produces a pressure well at the center of the chamber, to draw the object. When the object reaches the second pressure well and is still traveling towards the second end of the chamber, the acoustic frequency is again shifted to a third mode (which may equal the first mode) that has a pressure well in the second end portion of the chamber, to draw the object. A heat source may be located near the second end of the chamber to heat the sample, and after the sample is heated it can be cooled by moving it in a corresponding manner back to the first end of the chamber. The transducers for levitating and moving the object may all be located at the cool first end of the chamber.
Schoenemann, Brigitte; Castellani, Christopher; Clarkson, Euan N. K.; Haug, Joachim T.; Maas, Andreas; Haug, Carolin; Waloszek, Dieter
2012-01-01
Fossilized compound eyes from the Cambrian, isolated and three-dimensionally preserved, provide remarkable insights into the lifestyle and habitat of their owners. The tiny stalked compound eyes described here probably possessed too few facets to form a proper image, but they represent a sophisticated system for detecting moving objects. The eyes are preserved as almost solid, mace-shaped blocks of phosphate, in which the original positions of the rhabdoms in one specimen are retained as deep cavities. Analysis of the optical axes reveals four visual areas, each with different properties in acuity of vision. They are surveyed by lenses directed forwards, laterally, backwards and inwards, respectively. The most intriguing of these is the putatively inwardly orientated zone, where the optical axes, like those orientated to the front, interfere with axes of the other eye of the contralateral side. The result is a three-dimensional visual net that covers not only the front, but extends also far laterally to either side. Thus, a moving object could be perceived by a two-dimensional coordinate (which is formed by two axes of those facets, one of the left and one of the right eye, which are orientated towards the moving object) in a wide three-dimensional space. This compound eye system enables small arthropods equipped with an eye of low acuity to estimate velocity, size or distance of possible food items efficiently. The eyes are interpreted as having been derived from individuals of the early crustacean Henningsmoenicaris scutula pointing to the existence of highly efficiently developed eyes in the early evolutionary lineage leading towards the modern Crustacea. PMID:22048954
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Visual context modulates potentiation of grasp types during semantic object categorization
Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.
2013-01-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270
Warren, Paul A; Rushton, Simon K
2009-05-01
We have recently suggested that the brain uses its sensitivity to optic flow in order to parse retinal motion into components arising due to self and object movement (e.g. Rushton, S. K., & Warren, P. A. (2005). Moving observers, 3D relative motion and the detection of object movement. Current Biology, 15, R542-R543). Here, we explore whether stereo disparity is necessary for flow parsing or whether other sources of depth information, which could theoretically constrain flow-field interpretation, are sufficient. Stationary observers viewed large field of view stimuli containing textured cubes, moving in a manner that was consistent with a complex observer movement through a stationary scene. Observers made speeded responses to report the perceived direction of movement of a probe object presented at different depths in the scene. Across conditions we varied the presence or absence of different binocular and monocular cues to depth order. In line with previous studies, results consistent with flow parsing (in terms of both perceived direction and response time) were found in the condition in which motion parallax and stereoscopic disparity were present. Observers were poorer at judging object movement when depth order was specified by parallax alone. However, as more monocular depth cues were added to the stimulus the results approached those found when the scene contained stereoscopic cues. We conclude that both monocular and binocular static depth information contribute to flow parsing. These findings are discussed in the context of potential architectures for a model of the flow parsing mechanism.
Algorithm architecture co-design for ultra low-power image sensor
NASA Astrophysics Data System (ADS)
Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.
2012-03-01
In a context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with very low power consumption. Using a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects to be detected, we propose an original mixed-mode architecture developed through an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized in order to implement motion detection algorithms with a dedicated set of 42 instructions. The definition of delta modulation as a calculation primitive has allowed algorithms to be implemented in a very compact way. Thereby, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed with an estimated power consumption of 1.8 mW.
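Background estimation on macropixels with delta modulation as the calculation primitive might look like this in scalar form; the step size, block size, and threshold are illustrative assumptions:

```python
import numpy as np

def macropixels(img, k=8):
    """Average k x k blocks: the down-sampled image used for background estimation."""
    H, W = img.shape
    return img.reshape(H // k, k, W // k, k).mean(axis=(1, 3))

def update_background(bg, frame, step=1.0):
    """Delta modulation: nudge the background one fixed step toward each new
    frame, tracking slow illumination drift at minimal arithmetic cost."""
    return bg + step * np.sign(frame - bg)

def detect(bg, frame, thresh=10.0):
    """Flag macropixels that deviate strongly from the background."""
    return np.abs(frame - bg) > thresh

bg = np.zeros((4, 4))
frame = macropixels(np.full((32, 32), 50.0))   # a static gray scene
for _ in range(60):                            # background converges to the scene
    bg = update_background(bg, frame)
moving = detect(bg, frame)
```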
Near-infrared imaging spectroscopy for counterfeit drug detection
NASA Astrophysics Data System (ADS)
Arnold, Thomas; De Biasio, Martin; Leitner, Raimund
2011-06-01
Pharmaceutical counterfeiting is a significant issue in the healthcare community as well as for the pharmaceutical industry worldwide. The use of counterfeit medicines can result in treatment failure or even death. A rapid screening technique such as near-infrared (NIR) spectroscopy could aid in the search for and identification of counterfeit drugs. This work presents a comparison of two laboratory NIR imaging systems and the chemometric analysis of the acquired spectroscopic image data. The first imaging system utilizes a NIR liquid crystal tuneable filter and is designed for the investigation of stationary objects. The second imaging system utilizes a NIR imaging spectrograph and is designed for the fast analysis of moving objects on a conveyor belt. Several drugs in the form of tablets and capsules were analyzed. Spectral unmixing techniques were applied to the mixed reflectance spectra to identify the constituent parts of the investigated drugs. The results show that NIR spectroscopic imaging can be used for contactless detection and identification of a variety of counterfeit drugs.
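Spectral unmixing of mixed reflectance spectra is commonly posed as a non-negative least-squares problem. A toy sketch with hypothetical endmember spectra (the band count and constituent spectra are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra as columns: 3 bands x 2 constituents.
E = np.array([[0.9, 0.1],
              [0.5, 0.4],
              [0.1, 0.8]])
mix = 0.7 * E[:, 0] + 0.3 * E[:, 1]   # measured reflectance of a 70/30 blend

# Non-negative abundances recover the constituent fractions.
abund, residual = nnls(E, mix)
```

A counterfeit would show up as a pixel whose abundances or residual deviate from the reference formulation.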
Towards photometry pipeline of the Indonesian space surveillance system
NASA Astrophysics Data System (ADS)
Priyatikanto, Rhorom; Religia, Bahar; Rachman, Abdul; Dani, Tiar
2015-09-01
Optical observation through a sub-meter telescope equipped with a CCD camera has become an alternative method for increasing orbital debris detection and surveillance. This observational mode is expected to cover medium-sized objects in higher orbits (e.g. MEO, GTO, GSO & GEO), beyond the reach of the usual radar systems. However, such observation of fast-moving objects demands special treatment and analysis techniques. In this study, we performed photometric analysis of satellite track images photographed using the rehabilitated Schmidt Bima Sakti telescope at Bosscha Observatory. The Hough transformation was implemented to automatically detect linear streaks in the images. From this analysis and a comparison to the USSPACECOM catalog, two objects were identified and associated with the inactive Thuraya-3 satellite and Satcom-3 debris, which are located in geostationary orbit. Further aperture photometry analysis revealed the periodicity of the tumbling Satcom-3 debris. In the near future, a similar scheme could be applied to establish an analysis pipeline for an optical space surveillance system hosted in Indonesia.
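Automatic linear-streak detection via the Hough transformation can be sketched as follows; the accumulator resolution and peak-picking rule are illustrative choices, not the pipeline's actual parameters:

```python
import numpy as np

def hough_peak(mask, n_theta=180):
    """Vote each 'on' pixel into a (theta, rho) accumulator using the normal
    form x*cos(theta) + y*sin(theta) = rho; the accumulator peak gives the
    dominant straight streak in a thresholded image."""
    ys, xs = np.nonzero(mask)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = xs[:, None] * np.cos(thetas) + ys[:, None] * np.sin(thetas)
    diag = int(np.hypot(*mask.shape)) + 1
    acc = np.zeros((n_theta, 2 * diag))
    for i in range(n_theta):
        hist, _ = np.histogram(rhos[:, i], bins=2 * diag, range=(-diag, diag))
        acc[i] = hist
    ti, ri = np.unravel_index(acc.argmax(), acc.shape)
    return thetas[ti], ri - diag   # (angle, signed distance from origin)

mask = np.zeros((64, 64), bool)
mask[np.arange(64), np.arange(64)] = True   # a 45-degree satellite streak
theta, rho = hough_peak(mask)
```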
ERIC Educational Resources Information Center
Young, Timothy; Guy, Mark
2011-01-01
Students have a difficult time understanding force, especially when dealing with a moving object. Many forces can be acting on an object at the same time, causing it to stay in one place or move. By directly observing these forces, students can better understand the effect these forces have on an object. With a simple, student-built device called…
Meghdadi, Amir H; Irani, Pourang
2013-12-01
We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube, and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activity while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely work with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks, which generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed, and a total of 120 min of surveillance video was recorded (indoor and outdoor locations, recording movements of people and vehicles). The time-to-completion of these tasks was compared against manual fast-forward video browsing guided by movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach, which we summarize in our recommendations for future video visual analytics systems.
The Large Synoptic Survey Telescope as a Near-Earth Object discovery machine
NASA Astrophysics Data System (ADS)
Jones, R. Lynne; Slater, Colin T.; Moeyens, Joachim; Allen, Lori; Axelrod, Tim; Cook, Kem; Ivezić, Željko; Jurić, Mario; Myers, Jonathan; Petry, Catherine E.
2018-03-01
Using the most recent prototypes, design, and as-built system information, we test and quantify the capability of the Large Synoptic Survey Telescope (LSST) to discover Potentially Hazardous Asteroids (PHAs) and Near-Earth Objects (NEOs). We empirically estimate an expected upper limit to the false detection rate in LSST image differencing, using measurements on DECam data and prototype LSST software, and find it to be about 450 deg^-2. We show that this rate is already tractable with the current prototype of the LSST Moving Object Processing System (MOPS) by processing a 30-day simulation consistent with measured false detection rates. We proceed to evaluate the performance of the LSST baseline survey strategy for PHAs and NEOs using a high-fidelity simulated survey pointing history. We find that LSST alone, using its baseline survey strategy, will detect 66% of the PHA and 61% of the NEO population objects brighter than H = 22, with an uncertainty in the estimate of ±5 percentage points. By generating and examining variations on the baseline survey strategy, we show it is possible to further improve the discovery yields. In particular, we find that extending the LSST survey by two additional years and doubling the MOPS search window increases the completeness for PHAs to 86% (including those discovered by contemporaneous surveys) without jeopardizing other LSST science goals (77% for NEOs). This equates to reducing the undiscovered population of PHAs by an additional 26% (15% for NEOs), relative to the baseline survey.
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance from multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
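FFT-based spatial matching for displacement estimation is commonly realized as phase correlation. A 2D sketch of the idea follows (the paper devises a low-complexity 1D variant, which is not reproduced here):

```python
import numpy as np

def phase_correlation(shifted, ref):
    """Return the integer (dy, dx) such that shifted ~= roll(ref, (dy, dx)),
    from the peak of the normalized cross-power spectrum."""
    F = np.fft.fft2(shifted) * np.conj(np.fft.fft2(ref))
    F /= np.abs(F) + 1e-12               # keep only the phase difference
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    H, W = ref.shape                     # unwrap to signed displacements
    return (dy if dy <= H // 2 else dy - H), (dx if dx <= W // 2 else dx - W)

rng = np.random.default_rng(1)
a = rng.random((64, 64))                     # textured reference frame
b = np.roll(a, (3, -5), axis=(0, 1))         # object translated by (3, -5) px
dy, dx = phase_correlation(b, a)
```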
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence of each pixel, the number of Gaussian distributions used to learn and update the background model is chosen adaptively. A morphological reconstruction method is adopted to eliminate shadows. Experiments show that the proposed method not only has good robustness and detection performance but also good adaptability; even in special cases, such as large grayscale changes, it performs well.
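The per-pixel mixture model that such methods build on can be sketched as follows. This is a minimal illustration of the classic Stauffer-Grimson update (fixed number of components, illustrative parameter values), not the paper's improved adaptive variant:

```python
import numpy as np

class PixelGMM:
    """Minimal per-pixel Gaussian mixture background model (sketch)."""

    def __init__(self, k=3, alpha=0.05, var0=36.0, t_bg=0.7):
        self.w = np.ones(k) / k               # mixture weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means (gray levels)
        self.var = np.full(k, var0)           # component variances
        self.alpha, self.var0, self.t_bg = alpha, var0, t_bg

    def update(self, x):
        """Fold gray value x into the model; return True if x is background."""
        d = np.abs(x - self.mu)
        matched = d < 2.5 * np.sqrt(self.var)  # within 2.5 sigma of a component
        if matched.any():
            k = int(np.argmin(np.where(matched, d, np.inf)))
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
            self.w = (1 - self.alpha) * self.w
            self.w[k] += self.alpha
        else:
            k = int(np.argmin(self.w))        # replace least probable component
            self.mu[k], self.var[k], self.w[k] = x, self.var0, self.alpha
        self.w /= self.w.sum()
        # background = highest-ranked components (by w/sigma) covering t_bg weight
        order = np.argsort(-self.w / np.sqrt(self.var))
        n_bg = int(np.searchsorted(np.cumsum(self.w[order]), self.t_bg)) + 1
        return matched.any() and k in set(order[:n_bg].tolist())
```

A full detector runs one such model per pixel (vectorized in practice); the paper's improvement additionally adapts the number of components per pixel from the gray-level convergence.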
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available on such methods, and studies on error modelling or analysis of any of these motion correction methods are lacking. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of tracking devices. It then uses this information along with the laser scanner data to apply a correction to the laser data, resulting in correct geometry despite the object being mobile during scanning. The major application of this method lies in the shipping industry, to scan ships either moving or parked at sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of available components for achieving the best results.
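The geometric core of such a correction, mapping each laser return into the moving object's own frame using a time-synchronised pose from the POS system, can be sketched as below. The function and variable names are illustrative, not the paper's formulation, and sensor boresight/lever-arm terms are omitted:

```python
import numpy as np

def motion_correct(points, poses):
    """Map world-frame laser returns on a moving object into the object's
    own frame. `points` is a sequence of 3D points; `poses` gives, for each
    point, the object's pose (R, T) at the measurement instant, where a
    world point p relates to an object point q by p = R @ q + T."""
    corrected = []
    for p, (R, T) in zip(points, poses):
        corrected.append(R.T @ (np.asarray(p) - np.asarray(T)))
    return np.array(corrected)
```

Because each return gets its own pose, the reconstructed geometry is consistent even though the object moves between returns; pose errors propagate directly into the corrected points, which is what the paper's error budget analyses.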
Some characteristics of optokinetic eye-movement patterns : a comparative study.
DOT National Transportation Integrated Search
1970-07-01
Long associated with transportation ('railroad nystagmus'), optokinetic (OPK) nystagmus is an eye-movement reaction that occurs when a series of moving objects crosses the visual field or when an observer moves past a series of objects. Similar cont...
Tracking Object Existence From an Autonomous Patrol Vehicle
NASA Technical Reports Server (NTRS)
Wolf, Michael; Scharenbroich, Lucas
2011-01-01
An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. 
A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object had been detected in the most recent time step. This value then feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
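The Bayesian existence update with a variable probability of detection can be sketched as a likelihood-ratio update on a single scalar. This is an illustrative reconstruction under simple assumptions (independent looks, fixed false-alarm probability), not the paper's re-derived hypothesis equations:

```python
def update_existence(p, detected, pd, pfa):
    """One Bayesian update of an object's probability of existence.
    pd is the probability of detecting a real object on this look
    (set pd = pfa = 0 when the object is outside sensor range, so the
    probability is left unchanged); pfa is the false-alarm probability."""
    if detected:
        num, den = pd * p, pd * p + pfa * (1 - p)
    else:
        num, den = (1 - pd) * p, (1 - pd) * p + (1 - pfa) * (1 - p)
    return num / den

def sprt_status(p, lo=0.05, hi=0.95):
    """Map the existence probability to a track status via two thresholds,
    in the spirit of an SPRT (threshold values here are illustrative)."""
    if p >= hi:
        return "confirmed"
    if p <= lo:
        return "deleted"
    return "suspected"
```

Repeated detections drive the probability toward 1 and confirm the track; repeated missed detections while the object is in range drive it toward 0 and eventually delete it, triggering a "disappeared object" alert.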
Effects of sport expertise on representational momentum during timing control.
Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu
2015-04-01
Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
Behavior Knowledge Space-Based Fusion for Copy-Move Forgery Detection.
Ferreira, Anselmo; Felipussi, Siovani C; Alfaro, Carlos; Fonseca, Pablo; Vargas-Munoz, John E; Dos Santos, Jefersson A; Rocha, Anderson
2016-07-20
The detection of copy-move image tampering is of paramount importance nowadays, mainly due to its potential use for misleading the opinion-forming process of the general public. In this paper, we go beyond traditional forgery detectors and aim at combining different properties of copy-move detection approaches by modeling the problem on a multiscale behavior knowledge space, which encodes the output combinations of different techniques as a priori probabilities considering multiple scales of the training data. Afterwards, the missing entries of the conditional probabilities are properly estimated through generative models applied to the existing training data. Finally, we propose different techniques that exploit the multi-directionality of the data to generate the final detection map in a machine learning decision-making fashion. Experimental results on complex datasets, comparing the proposed techniques with a gamut of copy-move detection approaches and other fusion methodologies in the literature, show the effectiveness of the proposed method and its suitability for real-world applications.
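The behavior-knowledge-space idea, indexing a table by the joint decisions of several detectors and reading off a posterior estimated from training counts, can be sketched in a few lines. This is a simplified single-scale illustration; the paper additionally works at multiple scales and fills missing table entries with generative models rather than a flat default:

```python
from collections import defaultdict

def train_bks(detector_outputs, labels):
    """Build a behavior-knowledge-space table: for every observed
    combination of binary detector decisions, estimate P(forged | combo)
    from training counts. labels are 0 (genuine) or 1 (forged)."""
    counts = defaultdict(lambda: [0, 0])   # combo -> [n_genuine, n_forged]
    for outs, y in zip(detector_outputs, labels):
        counts[tuple(outs)][y] += 1
    return {combo: c[1] / (c[0] + c[1]) for combo, c in counts.items()}

def fuse_bks(table, outs, default=0.5):
    """Fuse a new sample's detector outputs via table lookup; combinations
    never seen in training fall back to a default prior."""
    return table.get(tuple(outs), default)
```

Thresholding the fused probability (e.g., at 0.5) yields the final forged/genuine decision per sample or per image region.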
Copy-move forgery detection utilizing Fourier-Mellin transform log-polar features
NASA Astrophysics Data System (ADS)
Dixit, Rahul; Naskar, Ruchira
2018-03-01
In this work, we address the problem of region duplication or copy-move forgery detection in digital images, along with detection of geometric transforms (rotation and rescale) and postprocessing-based attacks (noise, blur, and brightness adjustment). Detection of region duplication, following conventional techniques, becomes more challenging when an intelligent adversary brings about such additional transforms on the duplicated regions. In this work, we utilize Fourier-Mellin transform with log-polar mapping and a color-based segmentation technique using K-means clustering, which help us to achieve invariance to all the above forms of attacks in copy-move forgery detection of digital images. Our experimental results prove the efficiency of the proposed method and its superiority to the current state of the art.
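The log-polar mapping that underlies Fourier-Mellin-style invariance can be sketched directly: resampled about the image centre, a rotation becomes a cyclic shift along the angle axis and an isotropic rescale becomes a shift along the log-radius axis. This nearest-neighbour sketch is for illustration only and omits the Fourier-magnitude step the full transform uses:

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=64):
    """Resample a grayscale image onto a log-polar grid about its centre,
    so rotation/scaling of the input become shifts along theta/log-rho."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    max_r = min(cy, cx)
    rho = np.exp(np.linspace(0.0, np.log(max_r), n_rho))   # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(rho, theta, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]                                     # nearest neighbour
```

Matching two log-polar maps with a shift-invariant comparison (e.g., cross-correlation) therefore detects duplicated regions despite rotation and rescale, which is the invariance the paper exploits.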
2010-01-01
target kinematics for multiple sensor detections is referred to as the track-before-detect strategy, and is commonly adopted in multi-sensor surveillance...of moving targets. Wettergren [4] presented an application of track-before-detect strategies to undersea distributed sensor networks. In designing...the deployment of a distributed passive sensor network that employs this track-before-detect procedure, it is imperative that the placement of
Image-based tracking: a new emerging standard
NASA Astrophysics Data System (ADS)
Antonisse, Jim; Randall, Scott
2012-06-01
Automated moving object detection and tracking are increasingly viewed as solutions to the enormous data volumes resulting from emerging wide-area persistent surveillance systems. In a previous paper we described a Motion Imagery Standards Board (MISB) initiative to help address this problem: the specification of a micro-architecture for the automatic extraction of motion indicators and tracks. This paper reports on the development of an extended specification of the plug-and-play tracking micro-architecture, on its status as an emerging standard across DoD, the Intelligence Community, and NATO.
Observation of GEO Satellite Above Thailand’s Sky
NASA Astrophysics Data System (ADS)
Kasonsuwan, K.; Wannawichian, S.; Kirdkao, T.
2017-09-01
Direct observations of Geostationary Orbit (GEO) satellites above Thailand's sky with a 0.7-meter telescope were conducted at Mt. Inthanon, Chiang Mai, Thailand. The observations took place at night in Sidereal Stare Mode (SSM). In this observing mode, a moving object appears as a streak. The star identification for image calibration is based on (1) a star catalogue, (2) streak detection of the satellite using the software and (3) extraction of the celestial coordinates of the satellite as a predicted position. Finally, the orbital elements of the GEO satellites were calculated.
NASA Astrophysics Data System (ADS)
Giblin, Jay P.; Dixon, John; Dupuis, Julia R.; Cosofret, Bogdan R.; Marinelli, William J.
2017-05-01
Sensor technologies capable of detecting low vapor pressure liquid surface contaminants, as well as solids, in a noncontact fashion while on the move continue to be an important need for the U.S. Army. In this paper, we discuss the development of a long-wave infrared (LWIR, 8-10.5 μm) spatial heterodyne spectrometer coupled with an LWIR illuminator and an automated detection algorithm for detection of surface contaminants from a moving vehicle. The system is designed to detect surface contaminants by repetitively collecting LWIR reflectance spectra of the ground. Detection and identification of surface contaminants is based on spectral correlation of the measured LWIR ground reflectance spectra with high-fidelity library spectra and the system's cumulative binary detection response from the sampled ground. We present the concepts of the detection algorithm through a discussion of the system signal model. In addition, we present reflectance spectra of surfaces contaminated with a liquid CWA simulant, triethyl phosphate (TEP), and a solid simulant, acetaminophen, acquired while the sensor was stationary and on the move. Surfaces included CARC-painted steel, asphalt, concrete, and sand. The data collected were analyzed to determine the probability of detecting 800 μm diameter contaminant particles at a 0.5 g/m2 areal density with the SHSCAD traversing a surface.
Static latching arrangement and method
Morrison, Larry
1988-01-01
A latching assembly for use in latching a cable to and unlatching it from a given object in order to move an object from one location to another is disclosed herein. This assembly includes a weighted sphere mounted to one end of a cable so as to rotate about a specific diameter of the sphere. The assembly also includes a static latch adapted for connection with the object to be moved. This latch includes an internal latching cavity for containing the sphere in a latching condition and a series of surfaces and openings which cooperate with the sphere in order to move the sphere into and out of the latching cavity and thereby connect the cable to and disconnect it from the latch without using any moving parts on the latch itself.
Diehl, Robert H.; Valdez, Ernest W.; Preston, Todd M.; Wellik, Mike J.; Cryan, Paul
2016-01-01
Solar power towers produce electrical energy from sunlight at an industrial scale. Little is known about the effects of this technology on flying animals and few methods exist for automatically detecting or observing wildlife at solar towers and other tall anthropogenic structures. Smoking objects are sometimes observed co-occurring with reflected, concentrated light (“solar flux”) in the airspace around solar towers, but the identity and origins of such objects can be difficult to determine. In this observational pilot study at the world’s largest solar tower facility, we assessed the efficacy of using radar, surveillance video, and insect trapping to detect and observe animals flying near the towers. During site visits in May and September 2014, we monitored the airspace surrounding towers and observed insects, birds, and bats under a variety of environmental and operational conditions. We detected and broadly differentiated animals or objects moving through the airspace generally using radar and near solar towers using several video imaging methods. Video revealed what appeared to be mostly small insects burning in the solar flux. Also, we occasionally detected birds flying in the solar flux but could not accurately identify birds to species or the types of insects and small objects composing the vast majority of smoking targets. Insect trapping on the ground was somewhat effective at sampling smaller insects around the tower, and presence and abundance of insects in the traps generally trended with radar and video observations. Traps did not tend to sample the larger insects we sometimes observed flying in the solar flux or found dead on the ground beneath the towers. Some of the methods we tested (e.g., video surveillance) could be further assessed and potentially used to automatically detect and observe flying animals in the vicinity of solar towers to advance understanding about their effects on wildlife.
Upside-down: Perceived space affects object-based attention.
Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus
2017-07-01
Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked.
Schema generation in recurrent neural nets for intercepting a moving target.
Fleischer, Andreas G
2010-06-01
The grasping of a moving object requires the development of a motor strategy to anticipate the trajectory of the target and to compute an optimal course of interception. During the performance of perception-action cycles, a preprogrammed prototypical movement trajectory, a motor schema, may highly reduce the control load. Subjects were asked to hit a target that was moving along a circular path by means of a cursor. Randomized initial target positions and velocities were detected in the periphery of the eyes, resulting in a saccade toward the target. Even when the target disappeared, the eyes followed the target's anticipated course. The Gestalt of the trajectories was dependent on target velocity. The prediction capability of the motor schema was investigated by varying the visibility range of cursor and target. Motor schemata were determined to be of limited precision, and therefore visual feedback was continuously required to intercept the moving target. To intercept a target, the motor schema caused the hand to aim ahead and to adapt to the target trajectory. The control of cursor velocity determined the point of interception. From a modeling point of view, a neural network was developed that allowed the implementation of a motor schema interacting with feedback control in an iterative manner. The neural net of the Wilson type consists of an excitation-diffusion layer allowing the generation of a moving bubble. This activation bubble runs down an eye-centered motor schema and causes a planar arm model to move toward the target. A bubble provides local integration and straightening of the trajectory during repetitive moves. The schema adapts to task demands by learning and serves as forward controller. On the basis of these model considerations the principal problem of embedding motor schemata in generalized control strategies is discussed.
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a moving object. We developed a linear-array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear-array CCD binocular vision imaging system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and gives accurate results. This research mainly introduces the composition and principle of the linear-array CCD binocular vision imaging system, including the calibration, capture, matching, and reconstruction stages. The system consists of two linear-array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and 3-D reconstructed. The linear-array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects; this work is of significance for measuring the 3-D morphology of moving objects.
A Motion-Based Feature for Event-Based Pattern Recognition
Clady, Xavier; Maro, Jean-Matthieu; Barré, Sébastien; Benosman, Ryad B.
2017-01-01
This paper introduces an event-based luminance-free feature derived from the output of asynchronous event-based neuromorphic retinas. The feature consists of mapping the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in the pixels' illumination at high temporal resolution. The optical flow is computed at each event and is integrated locally or globally in a grid based on a speed-and-direction coordinate frame, using speed-tuned temporal kernels. The latter ensure that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and the generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition. PMID:28101001
Pop-out in visual search of moving targets in the archer fish.
Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen
2015-03-10
Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.
NASA Technical Reports Server (NTRS)
Berthoz, A.; Pavard, B.; Young, L. R.
1975-01-01
The basic characteristics of the sensation of linear horizontal motion have been studied. Objective linear motion was induced by means of a moving cart. Visually induced linear motion perception (linearvection) was obtained by projection of moving images at the periphery of the visual field. Image velocity and luminance thresholds for the appearance of linearvection have been measured and are in the range of those for image motion detection (without sensation of self-motion) by the visual system. Latencies of onset are around 1 sec, and short-term adaptation has been shown. The dynamic range of the visual analyzer, as judged by frequency analysis, is lower than that of the vestibular analyzer. Conflicting situations in which visual cues contradict vestibular and other proprioceptive cues show, in the case of linearvection, a dominance of vision, which supports the idea of an essential, although not independent, role of vision in self-motion perception.
NASA Astrophysics Data System (ADS)
Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum
2017-04-01
We analyze the relations among the parameters of the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier, and a fiber Bragg grating filter, and a light-receiving part comprising a photo-detector and a high-speed data acquisition system. The moving average method is operated with the control parameters: the total number of raw traces, M; the number of averaged traces, N; and the step size of the moving window, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation of the control parameters is analyzed. As a result, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
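The interplay of the three control parameters can be made concrete with a small sketch: from M raw traces, average N consecutive traces and slide the window by n, then difference successive averages to expose time-varying events. This is an illustrative reconstruction of the scheme as described, not the authors' code:

```python
import numpy as np

def moving_average_traces(traces, N, n):
    """Average N consecutive OTDR traces, sliding the window by n traces
    per step. `traces` has shape (M, num_samples); the result stacks
    floor((M - N) / n) + 1 averaged traces."""
    M = traces.shape[0]
    starts = range(0, M - N + 1, n)
    return np.stack([traces[s:s + N].mean(axis=0) for s in starts])

def difference_traces(avg):
    """Differences between successive averaged traces highlight
    time-varying (vibration) events along the fibre."""
    return np.diff(avg, axis=0)
```

Larger N suppresses more noise but also averages out vibrations whose period is shorter than N trace intervals, which is why, for a single-frequency event, particular N and n values maximize detectability.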
NASA Astrophysics Data System (ADS)
Feliu-Talegon, D.; Feliu-Batlle, V.
2017-06-01
Flexible links combined with force and torque sensors can be used to detect obstacles in mobile robotics, as well as for surface and object recognition. These devices, called sensing antennae, perform an active sensing strategy in which a servomotor system moves the link back and forth until it hits an object. At that instant, the motor angles combined with the force and torque measurements allow calculating the positions of the hitting points, which provide valuable information about the object surface. In order to move the antenna fast and accurately, this article proposes a new closed-loop control for driving this flexible-link-based sensor. The control strategy combines a feedforward term and a feedback phase-lag compensator of fractional order. We demonstrate that some drawbacks of the control of these sensing devices, such as the appearance of spillover effects when very fast positioning of the antenna tip is desired, and actuator saturation caused by high-frequency sensor noise, can be significantly reduced by using our newly proposed fractional-order controllers. We have applied these controllers to the position control of a prototype sensing antenna, and experiments have shown the improvements attained with this technique in the accurate and vibration-free motion of its tip (the fractional-order controller reduced the residual vibration to one tenth of that obtained with the integer-order controller).
Radio observations of globulettes in the Carina nebula
NASA Astrophysics Data System (ADS)
Haikala, L. K.; Gahm, G. F.; Grenman, T.; Mäkelä, M. M.; Persson, C. M.
2017-06-01
Context. The Carina nebula hosts a large number of globulettes. An optical study of these tiny molecular clouds shows that the majority are of planetary mass, but there are also those with masses of several tens up to a few hundred Jupiter masses. Aims: We seek to search for, and hopefully detect, molecular line emission from some of the more massive objects; in case of successful detection we aim to map their motion in the Carina nebula complex and derive certain physical properties. Methods: We carried out radio observations of molecular line emission in 12CO and 13CO (2-1) and (3-2) of 12 globulettes, in addition to positions in adjacent shell structures, using APEX. Results: All selected objects were detected, with radial velocities shifted relative to the emission from related shell structures and background molecular clouds. Globulettes along the western part of an extended dust shell show a small spread in velocity, with small velocity shifts relative to the shell. This system of globulettes and shell structures in the foreground of the bright nebulosity surrounding the cluster Trumpler 14 is expanding at a few km s-1 relative to the cluster. A couple of isolated globulettes in the area move at similar speed. Compared to similar studies of the molecular line emission from globulettes in the Rosette nebula, we find that the integrated line intensity ratios and line widths are very different. The results show that the Carina objects have a different density/temperature structure than those in the Rosette nebula. In comparison, the apparent size of the Carina globulettes is smaller, owing to the larger distance, and the corresponding beam filling factors are small. For this reason we were unable to carry out a more detailed modelling of the structure of the Carina objects in the way performed for the Rosette objects. Conclusions: The Carina globulettes observed are compact and denser than objects of similar mass in the Rosette nebula. 
The distribution and velocities of these globulettes suggest that they originated from eroding shells and elephant trunks. Some globulettes in the Trumpler 14 region are quite isolated and located far from any shell structures. These objects move at a similar speed to the globulettes along the shell, suggesting that they once formed from cloud fragments related to the same foreground shell. Based on observations collected with the Atacama Pathfinder Experiment (APEX), Llano Chajnantor, Chile (O-091.F-9316A and O-094.F-9312A). The final reduced radio data (FITS format) are available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/602/A61
Automated multiple target detection and tracking in UAV videos
NASA Astrophysics Data System (ADS)
Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie
2010-04-01
In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
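The per-target state estimation described above can be sketched as a constant-velocity Kalman filter over blob centroids. The state layout, time step and noise covariances below are assumptions of this sketch, not values from the paper:

```python
import numpy as np

DT = 1.0  # frame interval (assumed unit time step)
# Constant-velocity model: state = [x, y, vx, vy]; observation = blob centroid [x, y].
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 0.01   # process noise (assumed)
R = np.eye(2) * 1.0    # measurement noise (assumed)

def predict(x, P):
    """Propagate the target state one frame ahead."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with an associated blob observation z."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```

In the paper's pipeline, the `update` step runs only for observations matched to a track by the overlap-rate data association; unmatched tracks coast on `predict` alone.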
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-09-18
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values match given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve system performance in terms of wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over existing methods.
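The two-part matching condition of a CM range monitoring query can be sketched as a simple predicate; the dictionary layout below is an illustrative assumption, and the GQR-tree index itself (which decides which objects ever evaluate this predicate) is not shown:

```python
def matches_cm_query(obj, query):
    """True iff obj satisfies both conditions of a CM range monitoring query:
    (i) its non-spatial attributes match the query's attribute values, and
    (ii) its position lies inside the rectangular query range."""
    if any(obj["attrs"].get(k) != v for k, v in query["attrs"].items()):
        return False
    (xmin, ymin), (xmax, ymax) = query["range"]
    x, y = obj["pos"]
    return xmin <= x <= xmax and ymin <= y <= ymax
```

Pushing this check to the moving objects themselves, guided by the index, is what reduces the wireless communication cost and server workload reported in the paper.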
Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences
Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong
2016-01-01
Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes of pedestrian posture and scale, moving backgrounds, mutual occlusion, and the presence of other pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values of the object and the background by extracting a priori knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to get better observation results and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with the particle state to discriminate object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves the tracking performance and has better tracking results. PMID:27847514
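Two core steps of such a particle filter, multinomial resampling and the adaptive fusion of color and texture likelihoods, can be sketched as follows. The fusion rule and the names are illustrative assumptions; the paper's actual feature scoring and weight-adaptation formulas are not reproduced here:

```python
import random

def resample(particles, weights):
    """Multinomial resampling: draw particles in proportion to their weights,
    concentrating the particle set on likely pedestrian states."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))

def fused_likelihood(color_score, texture_score, alpha):
    """Fuse two feature likelihoods for one particle. alpha (0..1) is adapted
    online to favor whichever cue is more discriminative in the current
    scene -- the adaptation rule itself is an assumption of this sketch."""
    return alpha * color_score + (1.0 - alpha) * texture_score
```

Raising alpha when the color histogram separates target from background better than texture (and lowering it otherwise) is one plausible realization of the paper's automatic weight adjustment.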
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriev, A K; Konovalov, A N; Ul'yanov, V A
2014-04-28
We report an experimental study of the self-mixing effect in a single-mode multifrequency erbium fibre laser when radiation backscattered from an external moving object arrives at its cavity. To eliminate the resulting chaotic pulsations in the laser, we have proposed a technique for suppressing backscattered radiation through the use of multimode fibre for radiation delivery. The multifrequency operation of the laser has been shown to lead to strong fluctuations of the amplitude of the Doppler signal and a nonmonotonic variation of the amplitude with distance to the scattering object. In spite of these features, the self-mixing signal was detected with a high signal-to-noise ratio (above 10^2) when the radiation was scattered by a rotating disc, and the Doppler frequency shift, evaluated as the centroid of its spectrum, had high stability (0.15%) and linearity relative to the rotation rate. We conclude that the self-mixing effect in this type of fibre laser can be used for measuring the velocity of scattering objects and in Doppler spectroscopy for monitoring the laser evaporation of materials and biological tissues.
An optical search for small comets
NASA Astrophysics Data System (ADS)
Mutel, R. L.; Fix, J. D.
2000-11-01
We have conducted an extensive optical search for small comets with the characteristics proposed by Frank et al. [1986] and Frank and Sigwarth [1993, 1997]. The observations were made using the 0.5-m reflector of the Iowa Robotic Observatory between September 1998 and June 1999. The search technique consisted of tracking a fixed point in the ecliptic plane at +/-9° geocentric solar phase angle. The telescope scan rate was chosen to track objects moving prograde at 10 km s-1 relative to the Earth at a distance of 55,000 km. The camera was multiply shuttered to discriminate against trails caused by cosmic rays and sensor imperfections. Of 6143 total images, we selected 2713 which were suitable for detection of objects with a magnitude 16.5 or brighter with 120 pixel trails. The sensitivity and reliability of the visual detection scheme were determined by extensive double-blind tests using synthetic trails added to over 500 search images. After careful visual inspection of all images, we found no trails consistent with small comets. This result strongly disagrees with the previous optical searches of Yeates [1989] and Frank et al. [1990], whose detection rates and magnitudes, when converted to the present search, predict 65+/-22 detections. We conclude, at 99% confidence, that any prograde objects in the ecliptic plane brighter than magnitude 16.5 with speeds near 10 km s-1 have a number density less than 5% of the small-comet density derived by Frank et al. [1990]. Any object fainter than this magnitude limit with a mass corresponding to the small-comet hypothesis (M>20,000kg) must have either an implausibly low geometric albedo (p<0.01) or a density larger than that of water.
Bayes filter modification for drivability map estimation with observations from stereo vision
NASA Astrophysics Data System (ADS)
Panchenko, Aleksei; Prun, Viktor; Turchenkov, Dmitri
2017-02-01
Reconstruction of a drivability map for a moving vehicle is a well-known research topic in applied robotics. Here, creating such a map for an autonomous truck on a generally planar surface containing separate obstacles is considered. The source of measurements for the truck is a calibrated pair of cameras. The stereo system detects and reconstructs several types of objects, such as road borders, other vehicles, pedestrians, and general tall or highly saturated objects (e.g. road cones). For creating a robust mapping module we use a modification of Bayes filtering that introduces some novel techniques for the occupancy-map update step. Specifically, our modified version remains applicable in the presence of false-positive measurement errors, stereo shading and obstacle occlusion. We implemented the technique and achieved real-time computation at 15 FPS on an industrial shake-proof PC. Our real-world experiments show the positive effect of the filtering step.
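A standard Bayes-filter occupancy update, the baseline the paper modifies, can be sketched in log-odds form. The sensor-model probabilities and clamping bounds below are assumptions of this sketch; discounting the "hit" increment and clamping the log-odds are generic ways to tolerate false-positive detections, not the paper's specific techniques:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

# Assumed inverse sensor model: stereo detections carry a nontrivial
# false-positive rate, so a "hit" raises a cell's log-odds by less than a
# perfect sensor would, and a "miss" lowers it gently.
L_HIT, L_MISS = logit(0.7), logit(0.4)
L_MIN, L_MAX = -4.0, 4.0  # clamping keeps cells responsive to later evidence

def update_cell(l, hit):
    """One Bayes-filter update of a cell's occupancy log-odds l."""
    l += L_HIT if hit else L_MISS
    return max(L_MIN, min(L_MAX, l))

def occupancy(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))
```

Cells occluded by an obstacle or shaded from one camera would simply be skipped in the update, which is one reason the update step, rather than the map representation, is where such robustness is added.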
Real time automated inspection
Fant, K.M.; Fundakowski, R.A.; Levitt, T.S.; Overland, J.E.; Suresh, B.R.; Ulrich, F.W.
1985-05-21
A method and apparatus are described relating to the real time automatic detection and classification of characteristic type surface imperfections occurring on the surfaces of material of interest such as moving hot metal slabs produced by a continuous steel caster. A data camera transversely scans continuous lines of such a surface to sense light intensities of scanned pixels and generates corresponding voltage values. The voltage values are converted to corresponding digital values to form a digital image of the surface which is subsequently processed to form an edge-enhanced image having scan lines characterized by intervals corresponding to the edges of the image. The edge-enhanced image is thresholded to segment out the edges and objects formed by the edges by interval matching and bin tracking. Features of the objects are derived and such features are utilized to classify the objects into characteristic type surface imperfections. 43 figs.
Perceptual integration of motion and form information: evidence of parallel-continuous processing.
von Mühlenen, A; Müller, H J
2000-04-01
In three visual search experiments, the processes involved in the efficient detection of motion-form conjunction targets were investigated. Experiment 1 was designed to estimate the relative contributions of stationary and moving nontargets to the search rate. Search rates were primarily determined by the number of moving nontargets; stationary nontargets sharing the target form also exerted a significant effect, but this was only about half as strong as that of moving nontargets; stationary nontargets not sharing the target form had little influence. In Experiments 2 and 3, the effects of display factors influencing the visual (form) quality of moving items (movement speed and item size) were examined. Increasing the speed of the moving items (> 1.5 degrees/sec) facilitated target detection when the task required segregation of the moving from the stationary items. When no segregation was necessary, increasing the movement speed impaired performance: With large display items, motion speed had little effect on target detection, but with small items, search efficiency declined when items moved faster than 1.5 degrees/sec. This pattern indicates that moving nontargets exert a strong effect on the search rate (Experiment 1) because of the loss of visual quality for moving items above a certain movement speed. A parallel-continuous processing account of motion-form conjunction search is proposed, which combines aspects of Guided Search (Wolfe, 1994) and attentional engagement theory (Duncan & Humphreys, 1989).
Using evolutionary computation to optimize an SVM used in detecting buried objects in FLIR imagery
NASA Astrophysics Data System (ADS)
Paino, Alex; Popescu, Mihail; Keller, James M.; Stone, Kevin
2013-06-01
In this paper we describe an approach for optimizing the parameters of a Support Vector Machine (SVM) as part of an algorithm used to detect buried objects in forward looking infrared (FLIR) imagery captured by a camera installed on a moving vehicle. The overall algorithm consists of a spot-finding procedure (to look for potential targets) followed by the extraction of several features from the neighborhood of each spot. The features include local binary patterns (LBP) and histograms of oriented gradients (HOG), as these are good at detecting texture classes. Finally, we project and sum each hit into UTM space along with its confidence value (obtained from the SVM), producing a confidence map for ROC analysis. In this work, we use an Evolutionary Computation Algorithm (ECA) to optimize various parameters involved in the system, such as the combination of features used, the parameters of the Canny edge detector, the SVM kernel, and various HOG and LBP parameters. To validate our approach, we compare results obtained from an SVM using parameters obtained through our ECA technique with those previously selected by hand through several iterations of "guess and check".
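A minimal evolutionary search of the kind used here can be sketched as follows. The selection/mutation scheme, population sizes and the toy objective are all assumptions of this sketch; in the paper the fitness would be a detection-performance score of the full SVM pipeline, not the stand-in function below:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=30, seed=1):
    """Minimal elitist evolutionary search over real-valued parameter
    vectors (e.g. SVM C and gamma on a log scale). Keeps the best half of
    each generation and fills the rest with Gaussian mutations of it."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = [[min(max(x + rng.gauss(0, 0.1 * (hi - lo)), lo), hi)
                     for x, (lo, hi) in zip(p, bounds)]
                    for p in parents]
        pop = parents + children  # elitism: parents survive unchanged
    return max(pop, key=fitness)

# Stand-in objective with a single peak at (1.0, -2.0); a real run would
# plug in a cross-validated SVM score over, say, (log C, log gamma).
def toy_score(v):
    return -((v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2)
```

Because the parameter vector can mix continuous values (Canny thresholds, kernel parameters) with discrete choices (feature combinations), a practical run would add a discrete mutation operator alongside the Gaussian one.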
Three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor
NASA Astrophysics Data System (ADS)
Kibata, Hiroki; Ishii, Katsuhiro
2018-03-01
We developed a three-dimensional microscope tracking system using the astigmatic lens method and a profile sensor, which provides three-dimensional position detection over a wide range at the rate of 3.2 kHz. First, we confirmed the range of target detection of the developed system, where the range of target detection was shown to be ± 90 µm in the horizontal plane and ± 9 µm in the vertical plane for a 10× objective lens. Next, we attempted to track a motion-controlled target. The developed system kept the target at the center of the field of view and in focus up to a target speed of 50 µm/s for a 20× objective lens. Finally, we tracked a freely moving target. We successfully demonstrated the tracking of a 10-µm-diameter polystyrene bead suspended in water for 40 min. The target was kept in the range of approximately 4.9 µm around the center of the field of view. In addition, the vertical direction was maintained in the range of ± 0.84 µm, which was sufficiently within the depth of focus.
Wang, Su-hua; Baillargeon, Renée; Paterson, Sarah
2005-03-01
Recent research on infants' responses to occlusion and containment events indicates that, although some violations of the continuity principle are detected at an early age e.g. Aguiar, A., & Baillargeon, R. (1999). 2.5-month-old infants' reasoning about when objects should and should not be occluded. Cognitive Psychology 39, 116-157; Hespos, S. J., & Baillargeon, R. (2001). Knowledge about containment events in very young infants. Cognition 78, 207-245; Luo, Y., & Baillargeon, R. (in press). When the ordinary seems unexpected: Evidence for rule-based reasoning in young infants. Cognition; Wilcox, T., Nadel, L., & Rosser, R. (1996). Location memory in healthy preterm and full-term infants. Infant Behavior & Development 19, 309-323, others are not detected until much later e.g. Baillargeon, R., & DeVos, J. (1991). Object permanence in young infants: Further evidence. Child Development 62, 1227-1246; Hespos, S. J., & Baillargeon, R. (2001). Infants' knowledge about occlusion and containment events: A surprising discrepancy. Psychological Science 12, 140-147; Luo, Y., & Baillargeon, R. (2004). Infants' reasoning about events involving transparent occluders and containers. Manuscript in preparation; Wilcox, T. (1999). Object individuation: Infants' use of shape, size, pattern, and color. Cognition 72, 125-166. The present research focused on events involving covers or tubes, and brought to light additional examples of early and late successes in infants' ability to detect continuity violations. In Experiment 1, 2.5- to 3-month-old infants were surprised (1) when a cover was lowered over an object, slid to the right, and lifted to reveal no object; and (2) when a cover was lowered over an object, slid behind the left half of a screen, lifted above the screen, moved to the right, lowered behind the right half of the screen, slid past the screen, and finally lifted to reveal the object. 
In Experiments 2 and 3, 9- and 11-month-old infants were not surprised when a short cover was lowered over a tall object until it became fully hidden; only 12-month-old infants detected this violation. Finally, in Experiment 4, 9-, 12-, and 13-month-old infants were not surprised when a tall object was lowered inside a short tube until it became fully hidden; only 14-month-old infants detected this violation. A new account of infants' physical reasoning attempts to make sense of all of these results. New research directions suggested by the account are also discussed.
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Detection of dim targets in multiple environments
NASA Astrophysics Data System (ADS)
Mirsky, Grace M.; Woods, Matthew; Grasso, Robert J.
2013-10-01
The proliferation of a wide variety of weapons including Anti-Aircraft Artillery (AAA), rockets, and small arms presents a substantial threat to both military and civilian aircraft. To address this ever-present threat, Northrop Grumman has assessed unguided threat phenomenology to understand the underlying physical principles for detection. These principles, based upon threat transit through the atmosphere, exploit a simple phenomenon universal to all objects moving through an atmosphere comprised of gaseous media to detect and track the threat in the presence of background and clutter. Threat detection has rapidly become a crucial component of aircraft survivability systems that provide situational awareness to the crew. It is particularly important to platforms which may spend a majority of their time at low altitudes and within the effective range of a large variety of weapons. Detection of these threats presents a unique challenge as this class of threat typically has a dim signature coupled with a short duration. Correct identification of each of the threat components (muzzle flash and projectile) is important to determine trajectory and intent while minimizing false alarms and maintaining a high detection probability in all environments.
Optimal Attack Strategies Subject to Detection Constraints Against Cyber-Physical Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Yuan; Kar, Soummya; Moura, Jose M. F.
2017-03-31
This paper studies an attacker against a cyber-physical system (CPS) whose goal is to move the state of the CPS to a target state while ensuring that his or her probability of being detected does not exceed a given bound. The attacker's probability of being detected is related to the nonnegative bias induced by his or her attack on the CPS's detection statistic. We formulate a linear quadratic cost function that captures the attacker's control goal and establish constraints on the induced bias that reflect the attacker's detection-avoidance objectives. When the attacker is constrained to be detected at the false-alarm rate of the detector, we show that the optimal attack strategy reduces to a linear feedback of the attacker's state estimate. In the case that the attacker's bias is upper bounded by a positive constant, we provide two algorithms, an optimal algorithm and a sub-optimal, less computationally intensive algorithm, to find suitable attack sequences. Lastly, we illustrate our attack strategies in numerical examples based on a remotely-controlled helicopter under attack.
Cancer diagnosis using a conventional x-ray fluorescence camera with a cadmium-telluride detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Enomoto, Toshiyuki; Hagiwara, Osahiko; Abudurexiti, Abulajiang; Sato, Koetsu; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-10-01
X-ray fluorescence (XRF) analysis is useful for mapping various atoms in objects. Bremsstrahlung X-rays are selected using a 3.0 mm-thick aluminum filter, and these rays are absorbed by indium, cerium and gadolinium atoms in objects. Then XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by atomic mapping are shown on a personal computer monitor. The scan steps of the x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out atomic mapping using the X-ray camera, and Kα photons from cerium and gadolinium atoms were produced from cancerous regions in nude mice.
NASA Astrophysics Data System (ADS)
Enomoto, Toshiyuki; Sato, Eiichi; Abderyim, Purkhet; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Watanabe, Manabu; Nagao, Jiro; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-04-01
X-ray fluorescence (XRF) analysis is useful for mapping various molecules in objects. Bremsstrahlung X-rays are selected using a 3.0-mm-thick aluminum filter, and these rays are absorbed by iodine, cerium, and gadolinium molecules in objects. Next, XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x- y stage in conjunction with a two-stage controller, and X-ray images obtained by molecular mapping are shown on a personal computer monitor. The scan steps of x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out molecular mapping using the X-ray camera, and Kα photons from cerium and gadolinium molecules were produced from cancerous regions in nude mice.
ERIC Educational Resources Information Center
Saneyoshi, Ayako; Michimata, Chikashi
2009-01-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…
Parallel Proximity Detection for Computer Simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1997-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
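The fuzzy-grid check-in idea can be sketched as follows: each mover or sensor coverage registers in every grid cell within its radius plus a fuzz margin, so exact grid-crossing times never need to be computed. The cell size, fuzz value and function names are illustrative assumptions of this sketch, not the patented implementation:

```python
def grid_cells(pos, radius, cell_size, fuzz):
    """Cells an entity 'checks into': every cell overlapping a square of
    half-width radius + fuzz around pos. The fuzz margin avoids computing
    exact grid crossings, at the cost of registering in a few extra cells."""
    r = radius + fuzz
    x, y = pos
    lo_i, hi_i = int((x - r) // cell_size), int((x + r) // cell_size)
    lo_j, hi_j = int((y - r) // cell_size), int((y + r) // cell_size)
    return {(i, j) for i in range(lo_i, hi_i + 1)
                   for j in range(lo_j, hi_j + 1)}

def may_interact(mover_pos, sensor_pos, sensor_radius, cell_size, fuzz):
    """A sensor need only test movers that share at least one grid cell
    with its coverage; disjoint cell sets rule out detection cheaply."""
    return bool(grid_cells(mover_pos, 0.0, cell_size, fuzz) &
                grid_cells(sensor_pos, sensor_radius, cell_size, fuzz))
```

Only pairs passing `may_interact` ever reach the exact proximity test, which is what lets the grids distribute the detection workload across nodes.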
Parallel Proximity Detection for Computer Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1998-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
Moving target parameter estimation of SAR after two looks cancellation
NASA Astrophysics Data System (ADS)
Gan, Rongbing; Wang, Jianguo; Gao, Xiang
2005-11-01
Moving target detection in synthetic aperture radar (SAR) by two-look cancellation is studied. First, two looks are obtained from the first and second halves of the synthetic aperture. After two-look cancellation, moving targets are retained while stationary targets are removed. A Constant False Alarm Rate (CFAR) detector then detects the moving targets. The ground-range velocity and cross-range velocity of a moving target can be obtained from the position shift between the two looks. We developed a method to estimate the cross-range shift due to slant-range motion: the cross-range shift is estimated from the Doppler frequency center (DFC), which is in turn estimated with the Wigner-Ville Distribution (WVD). Because the range position and the cross-range position before correction are known, estimation of the DFC is much easier and more efficient. Finally, experimental results show that our algorithms perform well; with them, the moving target parameters can be estimated accurately.
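The cancellation-then-CFAR stage can be sketched on magnitude images. The cell-averaging CFAR below, with its guard/training sizes and threshold scale, is a generic textbook detector standing in for whichever CFAR variant the paper uses:

```python
import numpy as np

def two_look_cancel(look1, look2):
    """Stationary clutter is (nearly) identical in the two half-aperture
    looks, so the magnitude difference leaves mainly moving targets."""
    return np.abs(look1 - look2)

def cfar_detect(diff, guard=1, train=4, scale=5.0):
    """1-D cell-averaging CFAR along range: flag a cell whose value exceeds
    scale times the mean of its training cells (guard cells excluded)."""
    n = len(diff)
    hits = []
    for i in range(n):
        lo, hi = max(0, i - guard - train), min(n, i + guard + train + 1)
        cells = [diff[j] for j in range(lo, hi) if abs(j - i) > guard]
        if cells and diff[i] > scale * (sum(cells) / len(cells)):
            hits.append(i)
    return hits
```

Each CFAR hit would then be paired between the two looks; the position shift of the pair gives the velocity estimates described in the abstract.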
Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J
2018-03-21
Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOA (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained to the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. 
This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.
The Impact Imperative: A Space Infrastructure Enabling a Multi-Tiered Earth Defense
NASA Technical Reports Server (NTRS)
Campbell, Jonathan W.; Phipps, Claude; Smalley, Larry; Reilly, James; Boccio, Dona
2003-01-01
Impacting at hypervelocity, an asteroid struck the Earth approximately 65 million years ago in the Yucatan Peninsula. This triggered the extinction of almost 70% of the species of life on Earth, including the dinosaurs. Other impacts prior to this one have caused even greater extinctions. Preventing collisions with the Earth by hypervelocity asteroids, meteoroids, and comets is the most important immediate space challenge facing human civilization. This is the Impact Imperative. We now believe that while there are about 2000 Earth-orbit-crossing rocks greater than 1 kilometer in diameter, there may be as many as 200,000 or more objects in the 100 m size range. Can anything be done about this fundamental existence question facing our civilization? The answer is a resounding yes! By using an intelligent combination of Earth- and space-based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket, incrementally changing the shape of the rock's orbit around the Sun. One-kilometer rocks can be moved sufficiently in about a month, while smaller rocks may be moved in a shorter time span. We recommend that space objectives be immediately reprioritized to start moving quickly towards an infrastructure that will support a multiple-option defense capability. Planning and development for a lunar laser facility should be initiated immediately, in parallel with other options. All mitigation options are greatly enhanced by robust early warning, detection, and tracking resources that find objects sufficiently far ahead of Earth orbit passage to allow significant intervention.
Infrastructure options should include ground, LEO, GEO, Lunar, and libration point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.
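The value of early warning in this abstract can be illustrated with a back-of-the-envelope calculation: a small ablation-induced velocity change accumulates into displacement over the available lead time. The sketch below deliberately ignores orbital dynamics and uses an assumed 1 mm/s impulse; all numbers are illustrative, not from the paper.

```python
def along_track_displacement(delta_v_mm_s, lead_time_days):
    """Rough displacement s = dv * t from a small, early velocity change.
    Ignores orbital mechanics entirely; order-of-magnitude illustration only."""
    seconds = lead_time_days * 86400  # days -> seconds
    return delta_v_mm_s * 1e-3 * seconds  # meters

# An assumed 1 mm/s change applied one month before encounter.
print(along_track_displacement(1.0, 30))  # 2592.0 meters, i.e. a few kilometers
```

Even under this crude model, doubling the warning time doubles the miss distance, which is why the abstract stresses detection and tracking resources.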
Dissipation function and adaptive gradient reconstruction based smoke detection in video
NASA Astrophysics Data System (ADS)
Li, Bin; Zhang, Qiang; Shi, Chunlei
2017-11-01
A method for smoke detection in video is proposed. The camera monitoring the scene is assumed to be stationary. In the atmospheric scattering model, the dissipation function describes the transmissivity between the background objects in the scene and the camera. The dark channel prior and a fast bilateral filter are used to estimate the dissipation function, which is a function only of the depth of field. Based on the dissipation function, the visual background extractor (ViBe) can be used to detect smoke through its motion characteristics, alongside other moving targets. Since smoke has semi-transparent parts, the regions covered by these parts can be recovered adaptively by solving the Poisson equation. The similarity between a recovered region and the original background at the same position is computed by Normalized Cross Correlation (NCC), with the original background value taken from the frame nearest to the current frame. Regions with high similarity are considered smoke regions.
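The NCC similarity step this abstract relies on can be sketched as follows (a minimal illustration with illustrative names; not the authors' code):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized Cross Correlation between two equally sized image patches.
    Mean-removed, so uniform brightness offsets do not affect the score."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # flat patch: no structure to correlate
    return float((a * b).sum() / denom)

# A recovered patch identical to the stored background scores 1.0,
# which under the paper's logic marks the region as semi-transparent smoke.
bg = np.array([[10, 20], [30, 40]])
print(ncc(bg, bg))  # 1.0
```

Because the means are removed, a patch that differs from the background only by a constant brightness shift still scores 1.0, which is why NCC is a common choice for this kind of comparison.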
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position sensitive detector such as an array photodetector is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate data provided by the analog-to-digital converter on the position of the spot and to compute the linear displacement of the moving object based upon the data from the analog-to-digital converter.
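The displacement computation the microprocessor performs reduces to scaling the spot's pixel shift by the detector pitch. A minimal sketch (parameter names and values are assumptions, not from the patent):

```python
def spot_displacement(pixel_index, ref_index, pixel_pitch_um):
    """Linear displacement of the moving object, inferred from the shift of
    the light spot on the array photodetector relative to a reference pixel.
    Returns micrometers; assumes the beam is normal to the array."""
    return (pixel_index - ref_index) * pixel_pitch_um

# Spot moved 52 pixels on a 10 um pitch array -> 520 um of object motion.
print(spot_displacement(152, 100, 10.0))  # 520.0
```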
How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking
Thomas, Laura E.; Seiffert, Adriane E.
2011-01-01
Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259
Clustering analysis of moving target signatures
NASA Astrophysics Data System (ADS)
Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto
2010-04-01
Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
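The "well-known heuristic" behind knee-point selection is usually some variant of the elbow method: pick the cluster count where the cost curve bends most sharply. The sketch below uses the maximum-distance-to-chord formulation; the paper's adaptive KP algorithm may differ in detail.

```python
def knee_point(costs):
    """Pick the number of clusters at the 'knee' of a decreasing cost curve:
    the point farthest from the straight line joining the first and last
    values (elbow heuristic; illustrative, not the paper's exact algorithm)."""
    n = len(costs)
    x0, y0, x1, y1 = 1, costs[0], n, costs[-1]
    best_k, best_d = 1, -1.0
    for i, y in enumerate(costs, start=1):
        # Unnormalized perpendicular distance from (i, y) to the chord.
        d = abs((y1 - y0) * i - (x1 - x0) * y + x1 * y0 - y1 * x0)
        if d > best_d:
            best_k, best_d = i, d
    return best_k

# Cost drops steeply until k = 3, then flattens -> knee at 3 clusters.
print(knee_point([100, 40, 20, 15, 13, 12]))  # 3
```

This removes the person-in-the-loop: the clustering cost is evaluated for a range of k and the knee is selected automatically.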
ERIC Educational Resources Information Center
Preston, Christine
2018-01-01
If you think physics is only for older children, think again. Much of the playtime of young children is filled with exploring--and wondering about and informally investigating--the way objects, especially toys, move. How forces affect objects, including: change in position, motion, and shape are fundamental to the big ideas in physics. This…
ERIC Educational Resources Information Center
Trundle, Kathy Cabe; Smith, Mandy McCormick
2011-01-01
Some of children's earliest explorations focus on movement of their own bodies. Quickly, children learn to further explore movement by using objects like a ball or car. They recognize that a ball moves differently than a pushed block. As they grow, children enjoy their experiences with motion and movement, including making objects move, changing…
An elementary research on wireless transmission of holographic 3D moving pictures
NASA Astrophysics Data System (ADS)
Takano, Kunihiko; Sato, Koki; Endo, Takaya; Asano, Hiroaki; Fukuzawa, Atsuo; Asai, Kikuo
2009-05-01
In this paper, a process for transmitting a sequence of holograms describing 3D moving objects over a wireless network system is presented. The hologram sequence is transformed into a bit stream, which is then transmitted over wireless LAN and Bluetooth. It is shown that, by applying this technique, holographic data of 3D moving objects can be transmitted in high quality and a relatively good reconstruction of the holographic images can be performed.
ATLAS: Finding the Nearest Asteroids
NASA Astrophysics Data System (ADS)
Heinze, Aren; Tonry, John L.; Denneau, Larry; Stalder, Brian
2017-10-01
The Asteroid Terrestrial-impact Last Alert System (ATLAS) became fully operational in June 2017. Our two robotic, 0.5 meter telescopes survey the whole accessible sky every two nights from the Hawaiian mountains of Haleakala and Mauna Loa. With sensitivity to magnitude 19.5 over a field of 30 square degrees, we discover several bright near-Earth objects every month - particularly fast moving asteroids, which can slip by other surveys that scan the sky more slowly. Several important developments in 2017 have enhanced our sensitivity to small, nearby asteroids and potential impactors. We report on these developments - including optical adjustments, automated screening of detections, closer temporal spacing of images, and tolerance for large deviations from Great Circle motion on the sky - and we describe their effect in terms of measuring and discovering real objects.
Perceived shifts of flashed stimuli by visible and invisible object motion.
Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke
2003-01-01
Perceived positions of flashed stimuli can be altered by motion signals in the visual field-position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.
NASA Astrophysics Data System (ADS)
Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.
2013-09-01
We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. 
We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
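The planar homography the alignment algorithm assumes relates image coordinates through a 3x3 projective transform. A minimal sketch of applying one to points (illustrative only; the paper estimates the parameters iteratively, which is not shown here):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points through a 3x3 planar homography (projective transform):
    lift to homogeneous coordinates, multiply, then divide by the last row."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # (x, y) -> (x, y, 1)
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # perspective division

# A pure translation by (2, 3) expressed as a homography.
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])
print(apply_homography(H, np.array([[0.0, 0.0], [1.0, 1.0]])))
```

The alignment residual between a warped frame and the reference frame is what the iterative parameter-space search in the paper drives toward zero.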
Egelhaaf, Martin; Kern, Roland; Lindemann, Jens Peter
2014-01-01
Despite their miniature brains, insects such as flies, bees and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around (“optic flow”) to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases where the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, motion detectors of the kind widespread in biological systems do not veridically represent the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of a biological motion detection mechanism. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates in a computationally parsimonious way the environment into behaviorally relevant nearby objects and—in many behavioral contexts—less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism. PMID:25389392
The Rapidly Moving Telescope: an Instrument for the Precise Study of Optical Transients
NASA Technical Reports Server (NTRS)
Teegarden, B. J.; Vonrosenvinge, T. T.; Cline, T. L.; Kaipa, R.
1983-01-01
The development of a small telescope with a very rapid pointing capability is described, whose purpose is to search for and study fast optical transients that may be associated with gamma-ray bursts and other phenomena. The primary motivation for this search is the discovery of a transient optical event from the known location of a gamma-ray burst. The telescope has the capability of rapidly acquiring any target in the night sky within 0.7 second and locating the object's position with + or - 1 arcsec accuracy. The initial detection of the event is accomplished by the MIT explosive transient camera, or ETC. This provides rough pointing coordinates to the Rapidly Moving Telescope (RMT) on average within approximately 1 second after the detection of the event.
Incidents Prediction in Road Junctions Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed
2018-05-01
The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, the IDS may in no case replace the classical monitoring system controlled by the human eye. The aim of this work is to increase the detection and prediction probability of incidents in camera-monitored areas, given that these areas are monitored by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then develop a learning database of valid and invalid trajectories, and finally carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.
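Before a trajectory can feed a neural network, it must be summarized into numeric features. The sketch below shows one plausible featurization; the specific features and names are illustrative assumptions, not the paper's model.

```python
import math

def trajectory_features(points):
    """Summarize a trajectory (list of (x, y) samples) into simple features
    that could feed a small classifier network: total path length, net
    displacement, and straightness (net / total). Feature choice is
    illustrative only."""
    total = sum(math.dist(points[i], points[i + 1])
                for i in range(len(points) - 1))
    net = math.dist(points[0], points[-1])
    straightness = net / total if total else 1.0
    return total, net, straightness

# A straight three-sample trajectory: straightness of exactly 1.0.
print(trajectory_features([(0, 0), (1, 0), (2, 0)]))  # (2.0, 2.0, 1.0)
```

A wandering or looping trajectory yields straightness well below 1, the kind of separation a small ANN can learn to label as invalid.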
CoLiTec software - detection of the near-zero apparent motion
NASA Astrophysics Data System (ADS)
Khlamov, Sergii V.; Savanevych, Vadym E.; Briukhovetskyi, Olexandr B.; Pohorelov, Artem V.
2017-06-01
In this article we describe the CoLiTec software for fully automated frame processing. CoLiTec allows processing of the Big Data of observation results as well as of data that is continuously formed during observation. The tasks it solves include frame brightness equalization, moving object detection, astrometry, and photometry. Along with high efficiency of Big Data processing, CoLiTec also ensures high accuracy of data measurements. A comparative analysis of the functional characteristics and positional accuracy was performed between the CoLiTec and Astrometrica software, showing the benefits of CoLiTec on wide-field and low-quality frames. The efficiency of the CoLiTec software has been proved by about 700,000 observations and over 1,500 preliminary discoveries.
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Thus, because of the involvement of mechanisms of extrapolation and visual prediction, a moving object is perceived ahead of a simultaneously flashed static object objectively aligned with it. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus, approaching it before the flash, does not diminish the flash-lag effect but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
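The default estimator described above is a plain windowed mean over annual values. A minimal sketch (the paper's estimator details may differ):

```python
def moving_average(series, window=5):
    """Default 5-year moving average over annual inventory values:
    the mean of each run of `window` consecutive years."""
    if len(series) < window:
        raise ValueError("need at least one full window of annual data")
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

# Seven annual values -> three overlapping 5-year means.
print(moving_average([10, 12, 11, 13, 14, 15, 16]))  # [12.0, 13.0, 13.8]
```

Each estimate lags the most recent year by roughly half a window, which is one of the characteristics that motivates the alternative estimators the paper discusses.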
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. This paper describes a control design model for a flying vehicle searching for an object, focusing on the optimal path used in the search. An optimal control model is used to make the vehicle move along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here, the cost functional makes the air vehicle reach the object as quickly as possible. The axis reference of the flying vehicle uses the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. We also show that the cost functional used is convex; this convexity guarantees the existence of the optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
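The role convexity plays can be illustrated with a generic quadratic control-effort cost (not the paper's actual functional): a convex discretized cost satisfies the midpoint inequality, which underpins existence of a minimizing control.

```python
import numpy as np

def quadratic_cost(u, dt=0.1):
    """Discretized quadratic control-effort cost J = sum |u_k|^2 * dt.
    A generic convex functional, used here only to illustrate the convexity
    property the paper proves for its own cost functional."""
    u = np.asarray(u, dtype=float)
    return float((u * u).sum() * dt)

# Midpoint convexity check: J((a+b)/2) <= (J(a) + J(b)) / 2.
a, b = np.array([1.0, 2.0]), np.array([3.0, 0.0])
print(quadratic_cost((a + b) / 2) <= 0.5 * (quadratic_cost(a) + quadratic_cost(b)))  # True
```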
Exploring Dance Movement Data Using Sequence Alignment Methods
Chavoshi, Seyed Hossein; De Baets, Bernard; Neutens, Tijs; De Tré, Guy; Van de Weghe, Nico
2015-01-01
Despite the abundance of research on knowledge discovery from moving object databases, only a limited number of studies have examined the interaction between moving point objects in space over time. This paper describes a novel approach for measuring similarity in the interaction between moving objects. The proposed approach consists of three steps. First, we transform movement data into sequences of successive qualitative relations based on the Qualitative Trajectory Calculus (QTC). Second, sequence alignment methods are applied to measure the similarity between movement sequences. Finally, movement sequences are grouped based on similarity by means of an agglomerative hierarchical clustering method. The applicability of this approach is tested using movement data from samba and tango dancers. PMID:26181435
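Once movement is encoded as sequences of qualitative relation symbols, any standard sequence-alignment score can compare them. The sketch below uses plain Levenshtein distance as one such score; the QTC symbols shown are made up for illustration and the paper's alignment method may differ.

```python
def edit_distance(seq_a, seq_b):
    """Levenshtein distance between two sequences of qualitative relation
    symbols: the minimum number of insertions, deletions and substitutions
    turning one sequence into the other (row-by-row dynamic programming)."""
    prev = list(range(len(seq_b) + 1))
    for i, a in enumerate(seq_a, start=1):
        cur = [i]
        for j, b in enumerate(seq_b, start=1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (a != b)))  # substitution / match
        prev = cur
    return prev[-1]

# Two dancers' relation sequences differing in one step.
print(edit_distance(["-0", "++", "0-"], ["-0", "+-", "0-"]))  # 1
```

Low distances between sequences then drive the agglomerative clustering step into grouping dancers with similar interaction patterns.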
Wireless inertial measurement of head kinematics in freely-moving rats
Pasquet, Matthieu O.; Tihy, Matthieu; Gourgeon, Aurélie; Pompili, Marco N.; Godsil, Bill P.; Léna, Clément; Dugué, Guillaume P.
2016-01-01
While miniature inertial sensors offer a promising means for precisely detecting, quantifying and classifying animal behaviors, versatile inertial sensing devices adapted for small, freely-moving laboratory animals are still lacking. We developed a standalone and cost-effective platform for performing high-rate wireless inertial measurements of head movements in rats. Our system is designed to enable real-time bidirectional communication between the headborne inertial sensing device and third party systems, which can be used for precise data timestamping and low-latency motion-triggered applications. We illustrate the usefulness of our system in diverse experimental situations. We show that our system can be used for precisely quantifying motor responses evoked by external stimuli, for characterizing head kinematics during normal behavior and for monitoring head posture under normal and pathological conditions obtained using unilateral vestibular lesions. We also introduce and validate a novel method for automatically quantifying behavioral freezing during Pavlovian fear conditioning experiments, which offers superior performance in terms of precision, temporal resolution and efficiency. Thus, this system precisely acquires movement information in freely-moving animals, and can enable objective and quantitative behavioral scoring methods in a wide variety of experimental situations. PMID:27767085
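Freezing detection from inertial data is, at its core, finding sustained runs of low motion power. The sketch below shows that thresholding logic with assumed parameter names; the paper's actual classifier and thresholds may differ.

```python
import numpy as np

def freezing_episodes(motion_power, threshold, min_samples):
    """Flag freezing as runs where per-frame head-motion power (e.g. summed
    gyroscope magnitude) stays below `threshold` for at least `min_samples`
    consecutive frames. Returns (start, end) index pairs, end exclusive."""
    below = np.asarray(motion_power) < threshold
    episodes, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                      # run of low motion begins
        elif not b and start is not None:
            if i - start >= min_samples:   # long enough to count as freezing
                episodes.append((start, i))
            start = None
    if start is not None and len(below) - start >= min_samples:
        episodes.append((start, len(below)))
    return episodes

# One sustained low-motion run (frames 1-3); the lone low frame 5 is too short.
print(freezing_episodes([5, 0.1, 0.2, 0.1, 4, 0.1], threshold=1.0, min_samples=2))  # [(1, 4)]
```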
Evidence against a speed limit in multiple-object tracking.
Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T
2008-08-01
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.
Reinhardt-Rutland, A H
2003-07-01
Induced motion is the illusory motion of a static stimulus in the opposite direction to a moving stimulus. Two types of induced motion have been distinguished: (a) when the moving stimulus is distant from the static stimulus and undergoes overall displacement, and (b) when the moving stimulus is a pattern viewed within fixed boundaries that abut the static stimulus. Explanations of the 1st type of induced motion refer to mediating phenomena, such as vection, whereas the 2nd type is attributed to local processing by motion-sensitive neurons. The present research was directed to a display that elicited induced rotational motion with the characteristics of both types of induced motion: the moving stimulus lay within fixed boundaries, but the inducing and induced stimuli were distant from each other. The author investigated the properties that distinguished the two types of induced motion. In 3 experiments, induced motion persisted indefinitely, interocular transfer of the aftereffect of induced motion was limited to about 20%, and the time-course of the aftereffect of induced motion could not be attributed to vection. Those results were consistent with fixed-boundary induced motion. However, they could not be explained by local processing. Instead, the results might reflect the detection of object motion within a complex flow-field that resulted from the observer's motion.
ERIC Educational Resources Information Center
McCloskey, Michael; And Others
Through everyday experience people acquire knowledge about how moving objects behave. For example, if a rock is thrown up into the air, it will fall back to earth. Research has shown that people's ideas about why moving objects behave as they do are often quite inconsistent with the principles of classical mechanics. In fact, many people hold a…
ERIC Educational Resources Information Center
Hecht, Eugene
2015-01-01
Anyone who has taught introductory physics should know that roughly a third of the students initially believe that any object at rest will remain at rest, whereas any moving body not propelled by applied forces will promptly come to rest. Likewise, about half of those uninitiated students believe that any object moving at a constant speed must be…
Role of condenser iris in optical tweezer detection system.
Samadi, Akbar; Reihani, S Nader S
2011-10-15
Optical tweezers have proven to be very useful in various scientific fields, from biology to nanotechnology. In this Letter we show, both by theory and experiment, that the interference intensity pattern at the back focal plane of the condenser consists of two distinguishable areas with anticorrelated intensity changes when the bead is moved in the axial direction. We show that the space angle defining the border of two areas linearly depends on the NA of the objective. We also propose a new octant photodiode, which could significantly improve the axial resolution compared to the commonly used quadrant photodiode technique.
Radiation detector having a multiplicity of individual detecting elements
Whetten, Nathan R.; Kelley, John E.
1985-01-01
A radiation detector has a plurality of detector collection element arrays immersed in a radiation-to-electron conversion medium. Each array contains a multiplicity of coplanar detector elements radially disposed with respect to one of a plurality of positions which at least one radiation source can assume. Each detector collector array is utilized only when a source is operative at the associated source position, negating the necessity for a multi-element detector to be moved with respect to an object to be examined. A novel housing provides the required containment of a high-pressure gas conversion medium.
Needham, Amy; Cantlon, Jessica F; Ormsbee Holley, Susan M
2006-12-01
The current research investigates infants' perception of a novel object from a category that is familiar to young infants: key rings. We ask whether experiences obtained outside the lab would allow young infants to parse the visible portions of a partly occluded key ring display into one single unit, presumably as a result of having categorized it as a key ring. This categorization was marked by infants' perception of the keys and ring as a single unit that should move together, despite their attribute differences. We showed infants a novel key ring display in which the keys and ring moved together as one rigid unit (Move-together event) or the ring moved but the keys remained stationary throughout the event (Move-apart event). Our results showed that 8.5-month-old infants perceived the keys and ring as connected despite their attribute differences, and that their perception of object unity was eliminated as the distinctive attributes of the key ring were removed. When all of the distinctive attributes of the key ring were removed, the 8.5-month-old infants perceived the display as two separate units, which is how younger infants (7-month-old) perceived the key ring display with all its distinctive attributes unaltered. These results suggest that on the basis of extensive experience with an object category, infants come to identify novel members of that category and expect them to possess the attributes typical of that category.
NASA Astrophysics Data System (ADS)
Aulenbacher, Uwe; Rech, Klaus; Sedlmeier, Johannes; Pratisto, Hans; Wellig, Peter
2014-10-01
Ground-based millimeter wave radar sensors offer the potential for weather-independent automatic ground surveillance by day and night, e.g. for camp protection applications. The basic principle and the experimental verification of a radar system concept are described which, by means of an extreme off-axis positioning of the antenna(s), combines azimuthal mechanical beam steering with the formation of a circular-arc shaped synthetic aperture (SA). In automatic ground surveillance, the function of searching for and detecting moving ground targets is performed by the conventional mechanical scan mode. The rotated antenna structure, designed as a small array with two or more RX antenna elements and simultaneous receiver chains, allows multiple moving targets to be tracked instantaneously (monopulse principle). The simultaneously operated SAR mode yields areal images of the distribution of stationary scatterers. For ground surveillance applications this SAR mode is best suited to identifying possible threats by means of change detection. The feasibility of this concept was tested with an experimental radar system comprising a 94 GHz (W band) FM-CW module with 1 GHz bandwidth and two RX antennas with parallel receiver channels, placed off-axis on a rotating platform. SAR mode and search/track mode were tested during an outdoor measurement campaign. The scenery of two persons walking along a road and partially through forest served as a test of the capability to track multiple moving targets. For SAR mode verification, an image of the area, composed of roads, grassland, woodland and several man-made objects, was reconstructed from the measured data.
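For an FM-CW radar such as the one described above, the achievable range resolution follows directly from the sweep bandwidth via the standard relation ΔR = c / (2B). The sketch below is a back-of-the-envelope check using the 1 GHz bandwidth quoted for the experimental system; it is illustrative only and not code from the paper.

```python
# FM-CW range resolution: delta_R = c / (2 * B).
# The 1 GHz sweep bandwidth matches the experimental 94 GHz system above.
C = 299_792_458.0  # speed of light, m/s


def range_resolution(bandwidth_hz: float) -> float:
    """Theoretical FM-CW range resolution in metres for a given sweep bandwidth."""
    return C / (2.0 * bandwidth_hz)


if __name__ == "__main__":
    # A 1 GHz sweep resolves targets roughly 15 cm apart in range.
    print(f"1 GHz sweep -> {range_resolution(1e9):.3f} m resolution")
```

This is why wide-bandwidth W-band modules are attractive for change detection: sub-20 cm range cells make small man-made objects separable from clutter.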
Proactive Alleviation Procedure to Handle Black Hole Attack and Its Version
Babu, M. Rajesh; Dian, S. Moses; Chelladurai, Siva; Palaniappan, Mathiyalagan
2015-01-01
The world is moving towards a new realm of computing such as the Internet of Things. The Internet of Things envisions connecting almost all objects in the world to the Internet by recognizing them as smart objects. In doing so, the existing networks, which include wired, wireless, and ad hoc networks, should be utilized. Among these, the ad hoc network is particularly full of security challenges. For instance, the MANET (mobile ad hoc network) is susceptible to various attacks, among which the black hole attack and its variants do serious damage to the entire MANET infrastructure. The severity of this attack increases when the compromised MANET nodes work in cooperation with each other to mount a cooperative black hole attack. Therefore, this paper proposes an alleviation procedure consisting of a timely mandate procedure, a hole detection algorithm, and a sensitive guard procedure to detect maliciously behaving nodes. It has been observed that the proposed procedure is cost-effective and ensures a QoS guarantee by assuring resource availability, thus making the MANET appropriate for the Internet of Things. PMID:26495430
Visual acuity of the honey bee retina and the limits for feature detection.
Rigosi, Elisa; Wiederman, Steven D; O'Carroll, David C
2017-04-06
Visual abilities of the honey bee have been studied for more than 100 years, recently revealing unexpectedly sophisticated cognitive skills rivalling those of vertebrates. However, the physiological limits of the honey bee eye have been largely unaddressed and only studied in an unnatural, dark state. Using a bright display and intracellular recordings, we here systematically investigated angular sensitivity across the light-adapted eye of honey bee foragers. Angular sensitivity is a measure of photoreceptor receptive field size; thus small values indicate higher visual acuity. Our recordings reveal a fronto-ventral acute zone in which angular sensitivity falls below 1.9°, some 30% smaller than previously reported. By measuring receptor noise and responses to moving dark objects, we also obtained direct measures of the smallest features detectable by the retina. In the frontal eye, single photoreceptors respond to objects as small as 0.6° × 0.6°, with >99% reliability. This indicates that honey bee foragers possess significantly better resolution than previously reported or estimated behaviourally, and than commonly assumed in modelling of bee acuity.
Superluminal Motion Found In Milky Way
NASA Astrophysics Data System (ADS)
1994-08-01
Researchers using the Very Large Array (VLA) have discovered that a small, powerful object in our own cosmic neighborhood is shooting out material at nearly the speed of light -- a feat previously known to be performed only by the massive cores of entire galaxies. In fact, because of the direction in which the material is moving, it appears to be traveling faster than the speed of light -- a phenomenon called "superluminal motion." This is the first superluminal motion ever detected within our Galaxy. During March and April of this year, Dr. Felix Mirabel of the Astrophysics Section of the Center for Studies at Saclay, France, and Dr. Luis Rodriguez of the Institute of Astronomy at the National Autonomous University in Mexico City and NRAO, observed "a remarkable ejection event" in which the object shot out material in opposite directions at 92 percent of the speed of light, or more than 171,000 miles per second. This event ejected a mass equal to one-third that of the moon with the power of 100 million suns. Such powerful ejections are well known in distant galaxies and quasars, millions and billions of light-years away, but the object Mirabel and Rodriguez observed is within our own Milky Way Galaxy, only 40,000 light-years away. The object also is much smaller and less massive than the core of a galaxy, so the scientists were quite surprised to find it capable of accelerating material to such speeds. Mirabel and Rodriguez believe that the object is likely a double-star system, with one of the stars either an extremely dense neutron star or a black hole. The neutron star or black hole is the central object of the system, with great mass and strong gravitational pull. It is surrounded by a disk of material orbiting closely and being drawn into it. Such a disk is known as an accretion disk. The central object's powerful gravity, they believe, is pulling material from a more-normal companion star into the accretion disk. 
The central object is emitting jets of subatomic particles from its poles, and it is in these jets that the rapidly-moving material was tracked. The object, known as GRS 1915+105, also is a strong emitter of X-rays, sometimes becoming the strongest source of X-rays in the Milky Way. The X-rays, they think, are emitted from the system's accretion disk. The VLA observations, along with other evidence the researchers have uncovered, lead them to believe that, despite being much less massive than galactic cores, other double-star systems may be capable of ejecting material at speeds near that of light. The researchers reported their discovery in the September 1 issue of the journal Nature. "This discovery is one of the most valuable results of more than a decade and a half of observations at the VLA," said Dr. Miller Goss, assistant director of NRAO for VLA/VLBA operations. "We see these fast-moving jets of material throughout the universe, and they represent an important physical process. However, they're usually so far away that it's difficult to study them. This object, relatively nearby, offers the best opportunity yet to build a good understanding of how such jets actually work," Goss added. GRS 1915+105 was discovered in 1992 by an orbiting French-Russian X-ray observatory called SIGMA-GRANAT. It had not been found before because its X-rays are highly energetic "hard" X-rays not regularly observed by satellites before then. Since its discovery, it has repeatedly been seen as a source of "hard" X-rays. Despite searching, the scientists have been unable to observe the object in visible light. Observations with the VLA in 1992 and 1993 showed that the object changed both its radio "brightness" and its apparent position in the sky, but it was then too faint at radio wavelengths for precise measurements.
In March of 1994, the object began an outburst of strong radio emission just as the VLA had entered a configuration capable of its most precise positional measurements. Through March and April of 1994, Mirabel and Rodriguez were able to track the movement of the two condensations in the jets of material moving away from the object's core. They found that the core remained stationary, while the approaching condensation was apparently moving at 125 percent of the speed of light. After correcting for relativistic effects, they conclude that the ejected material actually is moving at 92 percent of light speed. Their calculations indicate that the pair of "blobs" they tracked were ejected from the core on March 19, during a period when the object was emitting more X-rays than usual. GRS 1915+105 somewhat resembles a famous astronomical object that was intensively studied in the late 1970s and early 1980s, called SS433. The VLA was used for many observations of SS433, which, astronomers believe, is also a double-star system with a dense, massive star as its centerpiece. SS433 has jets similar to those of GRS 1915+105, but the fastest motions detected in SS433's jets are only 26 percent of the speed of light. Comparing it to quasars, which are believed to be phenomena associated with supermassive black holes at the centers of galaxies -- objects much larger and more massive than stars -- astronomers have called SS433 a "stellar microquasar." With kinetic energies 40 times those of SS433, GRS 1915+105 "appears to be a scaled up version" of the other object, Mirabel and Rodriguez say.
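The "faster than light" appearance described above is the standard relativistic projection effect: a blob moving at true speed βc at angle θ to the line of sight has apparent transverse speed β_app = β sin θ / (1 − β cos θ), which can exceed 1 for fast motion directed nearly toward the observer. The sketch below evaluates this textbook formula; the 70° viewing angle is an illustrative value, not a figure from the article.

```python
import math


def beta_apparent(beta: float, theta_deg: float) -> float:
    """Apparent transverse speed (in units of c) of a jet blob moving at
    true speed beta*c at angle theta (degrees) to the line of sight:
    beta_app = beta * sin(theta) / (1 - beta * cos(theta))."""
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))


if __name__ == "__main__":
    # A true speed of 0.92c can look superluminal to a distant observer.
    print(f"beta_app = {beta_apparent(0.92, 70.0):.2f} c")  # > 1
```

This matches the article's numbers qualitatively: a 0.92c ejection can be observed to move across the sky at an apparent speed above 1c, as the 125-percent-of-light-speed measurement illustrates.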
Brandes, Susanne; Mokhtari, Zeinab; Essig, Fabian; Hünniger, Kerstin; Kurzai, Oliver; Figge, Marc Thilo
2015-02-01
Time-lapse microscopy is an important technique to study the dynamics of various biological processes. The labor-intensive manual analysis of microscopy videos is increasingly replaced by automated segmentation and tracking methods. These methods are often limited to certain cell morphologies and/or cell stainings. In this paper, we present an automated segmentation and tracking framework that does not have these restrictions. In particular, our framework handles highly variable cell shapes and does not rely on any cell stainings. Our segmentation approach is based on a combination of spatial and temporal image variations to detect moving cells in microscopy videos. This method yields a sensitivity of 99% and a precision of 95% in object detection. The tracking of cells consists of several steps, starting from single-cell tracking based on a nearest-neighbor approach, followed by detection of cell-cell interactions and splitting of cell clusters, and finally combining tracklets using methods from graph theory. The segmentation and tracking framework was applied to synthetic as well as experimental datasets with varying cell densities, implying different numbers of cell-cell interactions. We established a validation framework to measure the performance of our tracking technique. The cell tracking accuracy was found to be >99% for all datasets, indicating a high accuracy for connecting the detected cells between different time points. Copyright © 2014 Elsevier B.V. All rights reserved.
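The pipeline in the abstract (detect moving cells from temporal image variation, then link detections between frames with a nearest-neighbour approach) can be sketched in miniature as follows. This is a toy illustration under our own simplifying assumptions, not the authors' implementation: detection is reduced to plain frame differencing, and linking to greedy nearest-neighbour matching with a distance gate.

```python
import math


def detect_moving(prev_frame, frame, threshold=20):
    """Toy temporal detection: flag pixels whose grey value changed by more
    than `threshold` between consecutive frames. Frames are 2-D lists of
    intensities; a real pipeline would add spatial filtering and
    connected-component labelling on top of this."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, v in enumerate(row)
        if abs(v - prev_frame[r][c]) > threshold
    ]


def link_nearest(prev_centroids, centroids, max_dist=15.0):
    """Greedy nearest-neighbour linking of cell centroids between two frames.
    Returns (index_in_prev, index_in_current) pairs for links within max_dist."""
    links = []
    for i, (x0, y0) in enumerate(prev_centroids):
        best = min(
            range(len(centroids)),
            key=lambda j: math.hypot(centroids[j][0] - x0, centroids[j][1] - y0),
            default=None,
        )
        if best is not None and math.hypot(
            centroids[best][0] - x0, centroids[best][1] - y0
        ) <= max_dist:
            links.append((i, best))
    return links
```

The distance gate (`max_dist`) plays the same role as the interaction handling described above: when no detection lies within plausible travel range, the track is broken rather than linked to the wrong cell, and the resulting tracklets would then be stitched by a later graph-based step.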
Filament Advance Detection Sensor for Fused Deposition Modelling 3D Printers
Islán Marcos, Manuel
2018-01-01
The main purpose of this paper is to present a system that detects extrusion failures in fused deposition modelling (FDM) 3D printers by sensing whether the filament is advancing properly. After several years of using these machines, the authors observed that no existing system detects this, the main failure mode of FDM machines. The authors considered several candidate sensors and applied the weighted objectives method, one of the most common evaluation methods, to compare design concepts based on an overall value per concept. Given the scores obtained for each specification, the best choice for this work is the optical encoder. Once the sensor was chosen, it was necessary to design the part in which it is installed without interfering with the normal function of the machine; a photogrammetry scanning methodology was employed for this. The developed device reliably detects the advance of the filament without affecting the normal operation of the machine. It also achieves the primary objective of the system, avoiding loss of material, energy, and mechanical wear, while keeping the premise of a low-cost product that does not significantly increase the cost of the machine. This development has made it possible to use the printer with leftover coil filaments that would otherwise have been discarded as insufficient to complete a print, and has also enabled printing two-colour models with a single extruder. PMID:29747458
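The weighted objectives method mentioned above can be sketched as a simple scoring matrix: each design concept is rated against weighted criteria, and the overall value of a concept is the weighted sum of its ratings. The criteria, weights, and ratings below are purely hypothetical placeholders for illustration; the paper's actual evaluation matrix is not reproduced in the abstract.

```python
# Hypothetical criteria and weights (must sum to 1.0) and 0-10 ratings.
WEIGHTS = {"cost": 0.3, "reliability": 0.4, "ease_of_integration": 0.3}

CANDIDATES = {
    "optical encoder":    {"cost": 7, "reliability": 9, "ease_of_integration": 8},
    "mechanical switch":  {"cost": 9, "reliability": 5, "ease_of_integration": 6},
    "hall-effect sensor": {"cost": 6, "reliability": 7, "ease_of_integration": 7},
}


def weighted_score(ratings, weights=WEIGHTS):
    """Overall value of one design concept: sum of weight * rating."""
    return sum(weights[c] * ratings[c] for c in weights)


best = max(CANDIDATES, key=lambda name: weighted_score(CANDIDATES[name]))
```

With these placeholder numbers the optical encoder scores highest, mirroring the paper's conclusion, but the method itself is agnostic: changing the weights or ratings changes the ranking, which is why the criteria weighting is the critical design decision.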