Sample records for multiple moving objects

  1. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism both to track the changing locations of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.

  2. Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking

    ERIC Educational Resources Information Center

    Oksama, Lauri; Hyona, Jukka

    2008-01-01

    Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…

  3. Real-time object detection, tracking and occlusion reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  4. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision, the detection and tracking of multiple objects in real time is an important research field that has attracted considerable attention in recent years, particularly for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object through a video, and object representation is the basis for tracking. Recognizing multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for the detection of multiple moving objects: registration finds correspondences between consecutive frame pairs based on image appearance under rigid and affine transformations. Registration alone, however, is not well suited to handling events that can result in missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using region adjacency graphs of visual appearance and geometric properties; correspondences between graph sequences are then established by multigraph matching, and matched regions are labeled by a proposed graph-coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and shows significant improvement over existing work on the detection of multiple moving objects in real time.

  5. How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking

    PubMed Central

    Thomas, Laura E.; Seiffert, Adriane E.

    2011-01-01

    Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar to, but perhaps slightly easier than, updating the locations of objects. PMID:21991259
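
    A rough illustration of how a capacity estimate of this kind can be derived from probe accuracy is sketched below. It assumes a yes/no probe with 50% guessing whenever the probed item was not tracked (a standard high-threshold correction); the formula and the accuracy values are illustrative assumptions, not the authors' exact analysis.

    ```python
    # Hedged sketch: effective tracking capacity from probe accuracy.
    # Assumes 50% guessing when the probed item was not tracked (high-threshold model).

    def tracking_capacity(accuracy: float, n_targets: int) -> float:
        """Effective number of tracked targets: k = n * (2p - 1)."""
        return n_targets * (2.0 * accuracy - 1.0)

    # Hypothetical accuracies for 4 targets while staying put versus while moving.
    stay = tracking_capacity(0.85, 4)   # 2.8 objects
    move = tracking_capacity(0.75, 4)   # 2.0 objects
    print(f"capacity cost of self-motion ~ {stay - move:.1f} objects")
    ```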

  6. Evidence against a speed limit in multiple-object tracking.

    PubMed

    Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T

    2008-08-01

    Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.

  7. Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets

    ERIC Educational Resources Information Center

    Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus

    2012-01-01

    Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…

  8. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  9. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that identifies multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, producing partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions and then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams, and it can directly process video streamed over the internet or acquired directly from a hardware device (camera).

  10. Coordination of multiple robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Soloway, D.

    1987-01-01

    Kinematic resolved-rate control from one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any pre-disposed positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.
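
    As a rough sketch of the resolved-rate idea (not the NASA implementation), each arm converts the same commanded object velocity, expressed in the shared control frame, into its own joint rates through its Jacobian. The Jacobians, arm dimensions, and damping value below are illustrative placeholders.

    ```python
    import numpy as np

    def joint_rates(jacobian: np.ndarray, ee_velocity: np.ndarray, damping: float = 1e-3) -> np.ndarray:
        """Damped least-squares resolved-rate step: q_dot = J^T (J J^T + lambda^2 I)^-1 v."""
        J = jacobian
        JJt = J @ J.T
        return J.T @ np.linalg.solve(JJt + damping**2 * np.eye(JJt.shape[0]), ee_velocity)

    # Hypothetical 6x7 Jacobians for two redundant arms gripping the same object.
    J_arm1 = np.random.rand(6, 7)
    J_arm2 = np.random.rand(6, 7)
    object_twist = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.01])  # commanded object motion (m/s, rad/s)

    # Each arm receives the same commanded twist, so the grasped object moves coherently.
    qdot_arm1 = joint_rates(J_arm1, object_twist)
    qdot_arm2 = joint_rates(J_arm2, object_twist)
    ```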

  11. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  12. Attentional Signatures of Perception: Multiple Object Tracking Reveals the Automaticity of Contour Interpolation

    ERIC Educational Resources Information Center

    Keane, Brian P.; Mettler, Everett; Tsoi, Vicky; Kellman, Philip J.

    2011-01-01

    Multiple object tracking (MOT) is an attentional task wherein observers attempt to track multiple targets among moving distractors. Contour interpolation is a perceptual process that fills-in nonvisible edges on the basis of how surrounding edges (inducers) are spatiotemporally related. In five experiments, we explored the automaticity of…

  13. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
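
    A minimal sketch of the per-camera detection step, assuming time-synchronized grayscale frames and simple frame differencing; the OpenCV calls are standard, but the threshold and minimum-area values are arbitrary placeholders rather than the authors' parameters.

    ```python
    import cv2

    def moving_object_centroids(prev_gray, gray, min_area=200):
        """Detect moving blobs between two consecutive frames and return their centroids."""
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        mask = cv2.dilate(mask, None, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centroids

    # Running this on each synchronized stream gives per-view detections; an object
    # occluded in camera A can still be carried forward from camera B's detections.
    ```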

  14. Multiple-Object Tracking in Children: The "Catch the Spies" Task

    ERIC Educational Resources Information Center

    Trick, L.M.; Jaspers-Fayer, F.; Sethi, N.

    2005-01-01

    Multiple-object tracking involves simultaneously tracking positions of a number of target-items as they move among distractors. The standard version of the task poses special challenges for children, demanding extended concentration and the ability to distinguish targets from identical-looking distractors, and may thus underestimate children's…

  15. Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.

    PubMed

    Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro

    2018-03-01

    In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented real (AR) objects can control human interpretation and reasoning about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.

  16. Exhausting Attentional Tracking Resources with a Single Fast-Moving Object

    ERIC Educational Resources Information Center

    Holcombe, Alex O.; Chen, Wei-Ying

    2012-01-01

    Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…

  17. 3D shape measurement of moving object with FFT-based spatial matching

    NASA Astrophysics Data System (ADS)

    Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun

    2018-03-01

    This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
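
    The displacement estimation underlying such FFT-based spatial matching can be illustrated with 1D phase correlation; the snippet below is a generic NumPy sketch, not the authors' implementation.

    ```python
    import numpy as np

    def displacement_1d(ref_line: np.ndarray, shifted_line: np.ndarray) -> int:
        """Estimate the integer shift between two 1D intensity profiles via FFT phase correlation."""
        F1 = np.fft.fft(ref_line)
        F2 = np.fft.fft(shifted_line)
        cross_power = np.conj(F1) * F2
        cross_power /= np.abs(cross_power) + 1e-12      # normalize to keep only phase
        corr = np.fft.ifft(cross_power).real
        shift = int(np.argmax(corr))
        if shift > ref_line.size // 2:                   # wrap large indices to negative shifts
            shift -= ref_line.size
        return shift

    # A profile rolled by 7 samples is recovered as a +7 shift.
    x = np.random.rand(512)
    print(displacement_1d(x, np.roll(x, 7)))
    ```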

  18. Eye Movements during Multiple Object Tracking: Where Do Participants Look?

    ERIC Educational Resources Information Center

    Fehd, Hilda M.; Seiffert, Adriane E.

    2008-01-01

    Similar to the eye movements you might make when viewing a sports game, this experiment investigated where participants tend to look while keeping track of multiple objects. While eye movements were recorded, participants tracked either 1 or 3 of 8 red dots that moved randomly within a square box on a black background. Results indicated that…

  19. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

    Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
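
    Delay (and hence range) estimation from an LFM echo is commonly done by matched filtering; the sketch below is a plain-NumPy stand-in for the Gaussian-chirplet model described above, with placeholder chirp, delay, and noise parameters rather than actual bat call or experimental values.

    ```python
    import numpy as np

    fs = 500e3                        # sampling rate (Hz), placeholder
    t = np.arange(0, 2e-3, 1 / fs)    # 2 ms emission
    f0, f1 = 100e3, 40e3              # downward LFM sweep, placeholder values
    chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * t[-1]) * t**2))

    true_delay_s = 3.2e-3             # echo from an object ~0.55 m away (two-way travel)
    echo = np.concatenate([np.zeros(int(true_delay_s * fs)), 0.3 * chirp])
    echo += 0.05 * np.random.randn(echo.size)        # additive noise

    # Matched filtering: cross-correlate the echo with the emitted chirp and take the peak lag.
    corr = np.correlate(echo, chirp, mode="full")
    lag = np.argmax(corr) - (chirp.size - 1)
    print(f"estimated delay: {lag / fs * 1e6:.1f} microseconds")
    ```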

  20. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
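
    The cross-view correspondence step rests on a homography between overlapping views; the sketch below (OpenCV, with hypothetical hand-picked ground-plane correspondences in place of the paper's automatically derived ones) illustrates how a point is mapped from one view into the other.

    ```python
    import cv2
    import numpy as np

    # Hypothetical ground-plane correspondences between two overlapping views.
    pts_view1 = np.array([[100, 400], [520, 390], [530, 120], [110, 130]], dtype=np.float32)
    pts_view2 = np.array([[80, 420], [500, 410], [515, 140], [95, 150]], dtype=np.float32)

    H, _ = cv2.findHomography(pts_view1, pts_view2)

    def map_to_other_view(point_xy, H):
        """Project an object's ground-contact point from view 1 into view 2."""
        p = np.array([[point_xy]], dtype=np.float32)          # shape (1, 1, 2)
        return cv2.perspectiveTransform(p, H)[0, 0]

    # A track's foot point in view 1 is mapped into view 2 to find the corresponding object.
    print(map_to_other_view((300.0, 380.0), H))
    ```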

  1. Normal aging delays and compromises early multifocal visual attention during object tracking.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2013-02-01

    Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.

  2. Dynamic Dependence Analysis : Modeling and Inference of Changing Dependence Among Multiple Time-Series

    DTIC Science & Technology

    2009-06-01

    isolation. In addition to being inherently multi-modal, human perception takes advantage of multiple sources of information within a single modality...restriction was reasonable for the applications we looked at. However, consider using a TIM to model a teacher-student relationship among moving objects...That is, imagine one teacher object demonstrating a behavior for a student object. The student can observe the teacher and then recreate the behavior

  3. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system for measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  4. Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.

    PubMed

    Suganuma, Mutsumi; Yokosawa, Kazuhiko

    2006-01-01

    In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.

  5. Cortical Circuit for Binding Object Identity and Location During Multiple-Object Tracking

    PubMed Central

    Nummenmaa, Lauri; Oksama, Lauri; Glerean, Erico; Hyönä, Jukka

    2017-01-01

    Sustained multifocal attention for moving targets requires binding object identities with their locations. The brain mechanisms of identity-location binding during attentive tracking have remained unresolved. In 2 functional magnetic resonance imaging experiments, we measured participants’ hemodynamic activity during attentive tracking of multiple objects with equivalent (multiple-object tracking) versus distinct (multiple identity tracking, MIT) identities. Task load was manipulated parametrically. Both tasks activated large frontoparietal circuits. MIT led to significantly increased activity in frontoparietal and temporal systems subserving object recognition and working memory. These effects were replicated when eye movements were prohibited. MIT was associated with significantly increased functional connectivity between lateral temporal and frontal and parietal regions. We propose that coordinated activity of this network subserves identity-location binding during attentive tracking. PMID:27913430

  6. Image analysis of multiple moving wood pieces in real time

    NASA Astrophysics Data System (ADS)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for the automatic detection of wood pieces on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace their contours in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, consisting mainly of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and at high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters, which can also be used for constructing experimental models directly in the system.

  7. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors-the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  8. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    2015-01-01

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  9. Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging.

    PubMed

    Deng, Junjing; Nashed, Youssef S G; Chen, Si; Phillips, Nicholas W; Peterka, Tom; Ross, Rob; Vogt, Stefan; Jacobsen, Chris; Vine, David J

    2015-03-09

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  10. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  11. Distributed proximity sensor system having embedded light emitters and detectors

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan (Inventor)

    1990-01-01

    A distributed proximity sensor system is provided with multiple photosensitive devices and light emitters embedded on the surface of a robot hand or other moving member in a geometric pattern. By distributing sensors and emitters capable of detecting distances and angles to points on the surface of an object from known points in the geometric pattern, information is obtained for achieving noncontacting shape and distance perception, i.e., for automatic determination of the object's shape, direction and distance, as well as the orientation of the object relative to the robot hand or other moving member.

  12. Interactive Multiple Object Tracking (iMOT)

    PubMed Central

    Thornton, Ian M.; Bülthoff, Heinrich H.; Horowitz, Todd S.; Rynning, Aksel; Lee, Seong-Whan

    2014-01-01

    We introduce a new task for exploring the relationship between action and attention. In this interactive multiple object tracking (iMOT) task, implemented as an iPad app, participants were presented with a display of multiple, visually identical disks which moved independently. The task was to prevent any collisions during a fixed duration. Participants could perturb object trajectories via the touchscreen. In Experiment 1, we used a staircase procedure to measure the ability to control moving objects. Object speed was set to 1°/s. On average participants could control 8.4 items without collision. Individual control strategies were quite variable, but did not predict overall performance. In Experiment 2, we compared iMOT with standard MOT performance using identical displays. Object speed was set to 2°/s. Participants could reliably control more objects (M = 6.6) than they could track (M = 4.0), but performance in the two tasks was positively correlated. In Experiment 3, we used a dual-task design. Compared to single-task baseline, iMOT performance decreased and MOT performance increased when the two tasks had to be completed together. Overall, these findings suggest: 1) There is a clear limit to the number of items that can be simultaneously controlled, for a given speed and display density; 2) participants can control more items than they can track; 3) task-relevant action appears not to disrupt MOT performance in the current experimental context. PMID:24498288

  13. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  14. Position And Force Control For Multiple-Arm Robots

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A.

    1988-01-01

    Number of arms increased without introducing undue complexity. Strategy and computer architecture developed for simultaneous control of positions of number of robot arms manipulating same object and of forces and torques that arms exert on object. Scheme enables coordinated manipulation of object, causing it to move along assigned trajectory and be subjected to assigned internal forces and torques.

  15. Visual context modulates potentiation of grasp types during semantic object categorization.

    PubMed

    Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J

    2014-06-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

  16. Visual context modulates potentiation of grasp types during semantic object categorization

    PubMed Central

    Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.

    2013-01-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270

  17. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  18. Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging

    DOE PAGES

    Deng, Junjing; Nashed, Youssef S. G.; Chen, Si; ...

    2015-02-23

    Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.

  19. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.

  20. Developmental Profiles for Multiple Object Tracking and Spatial Memory: Typically Developing Preschoolers and People with Williams Syndrome

    ERIC Educational Resources Information Center

    O'Hearn, Kirsten; Hoffman, James E.; Landau, Barbara

    2010-01-01

    The ability to track moving objects, a crucial skill for mature performance on everyday spatial tasks, has been hypothesized to require a specialized mechanism that may be available in infancy (i.e. indexes). Consistent with the idea of specialization, our previous work showed that object tracking was more impaired than a matched spatial memory…

  1. Multiple targets detection method in detection of UWB through-wall radar

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong

    2017-11-01

    In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed, and the experimental environment and the penetrating radar system are established. An adaptive threshold method based on the local area is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between the targets, a target matching algorithm is proposed to improve the detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiment.
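
    The local-area adaptive threshold can be sketched as a sliding-window detector over a range profile; the snippet below is a generic NumPy approximation with arbitrary window and scale parameters, not the authors' exact detector.

    ```python
    import numpy as np

    def local_adaptive_threshold(profile: np.ndarray, window: int = 64, k: float = 3.0) -> np.ndarray:
        """Flag range cells whose amplitude exceeds the local mean by k local standard deviations."""
        pad = window // 2
        padded = np.pad(profile, pad, mode="edge")
        detections = np.zeros(profile.size, dtype=bool)
        for i in range(profile.size):
            local = padded[i:i + window]
            detections[i] = profile[i] > local.mean() + k * local.std()
        return detections

    # Synthetic example: a noisy range profile with two weak target returns.
    rng = np.random.default_rng(0)
    profile = rng.rayleigh(1.0, 2048)
    profile[500] += 8.0
    profile[1400] += 6.0
    print(np.flatnonzero(local_adaptive_threshold(profile)))
    ```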

  2. Some recent developments of the immersed interface method for flow simulation

    NASA Astrophysics Data System (ADS)

    Xu, Sheng

    2017-11-01

    The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.

  3. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

    Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. This results from enforcing the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, a measure of the maximum distance between a pair of feature point tracks.
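
    The trajectory-similarity grouping can be approximated by clustering feature-point tracks on their mean pairwise distance over time; the sketch below (NumPy plus SciPy hierarchical clustering, with an arbitrary distance threshold) illustrates the idea and is not the patented algorithm itself.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def group_tracks(tracks: np.ndarray, distance_threshold: float = 30.0) -> np.ndarray:
        """Cluster point tracks (n_tracks x n_frames x 2) by mean inter-track distance over time."""
        n = tracks.shape[0]
        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(tracks[i] - tracks[j], axis=1).mean()
                dist[i, j] = dist[j, i] = d
        labels = fcluster(linkage(squareform(dist), method="average"),
                          t=distance_threshold, criterion="distance")
        return labels  # tracks sharing a label form one coherent motion region

    # Two synthetic objects moving apart: their feature tracks fall into two groups.
    t = np.arange(50)
    obj_a = np.stack([np.stack([t + dx, 0.5 * t], axis=1) for dx in (0, 3, 6)])
    obj_b = np.stack([np.stack([200 - t + dx, 100 + t], axis=1) for dx in (0, 4)])
    print(group_tracks(np.concatenate([obj_a, obj_b]).astype(float)))
    ```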

  4. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera results from two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
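
    The ego-motion compensation and difference-image likelihood described above can be sketched with standard OpenCV primitives; the snippet covers only that front end (sparse flow, homography warp, normalized difference), not the rival-penalized particle filter, and its parameter values are placeholders.

    ```python
    import cv2
    import numpy as np

    def independent_motion_likelihood(prev_gray, gray):
        """Compensate camera ego-motion with a sparse-flow homography, then score residual motion."""
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400, qualityLevel=0.01, minDistance=8)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good_prev = pts[status.ravel() == 1]
        good_next = nxt[status.ravel() == 1]
        H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)
        warped = cv2.warpPerspective(prev_gray, H, gray.shape[::-1])
        diff = cv2.absdiff(gray, warped).astype(np.float32)
        return diff / (diff.sum() + 1e-9)   # high where motion is not explained by camera motion

    # This normalized map would then serve as the measurement likelihood weighting the particles.
    ```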

  5. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.

    PubMed

    Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
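
    One way to realize a convex dynamic AOI with an AOI gap tolerance is to expand the element's convex hull by the AGT margin before testing fixations. The sketch below uses SciPy/Matplotlib geometry utilities and expands the hull radially from its centroid, a simplifying assumption rather than the authors' construction; the coordinates and AGT value are hypothetical.

    ```python
    import numpy as np
    from matplotlib.path import Path
    from scipy.spatial import ConvexHull

    def fixation_in_dynamic_aoi(fixation_xy, element_points, agt: float = 15.0) -> bool:
        """True if a fixation falls inside the element's convex hull expanded by the AOI gap tolerance."""
        hull = ConvexHull(element_points)
        verts = element_points[hull.vertices]
        centroid = verts.mean(axis=0)
        directions = verts - centroid
        norms = np.linalg.norm(directions, axis=1, keepdims=True)
        expanded = verts + agt * directions / np.maximum(norms, 1e-9)   # push outward by the AGT margin
        return Path(expanded).contains_point(fixation_xy)

    # Hypothetical aircraft symbol plus data block corners at one frame, and one fixation sample.
    element = np.array([[500, 300], [540, 300], [540, 330], [500, 330], [520, 350]], dtype=float)
    print(fixation_in_dynamic_aoi((548.0, 332.0), element, agt=15.0))
    ```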

  6. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830

  7. Tracking moving targets behind a scattering medium via speckle correlation.

    PubMed

    Guo, Chengfei; Liu, Jietao; Wu, Tengfei; Zhu, Lei; Shao, Xiaopeng

    2018-02-01

    Tracking moving targets behind a scattering medium is a challenge, and it has many important applications in various fields. Owing to multiple scattering, only a random speckle pattern, rather than an image of the object, is received on the camera when light passes through highly scattering layers. Importantly, it has been shown that target information can be derived from the speckle correlation. In this work, inspired by notions used in computer vision and deformation detection, we demonstrate through simulations and experiments a simple object tracking method in which, by using the speckle correlation, the movement of a hidden object can be tracked in both the lateral and the axial direction. In addition, the rotation state of the moving target can be recognized by utilizing the autocorrelation of the speckle pattern. This work will be beneficial for biomedical applications involving quantitative analysis of the working mechanisms of micro-objects and the acquisition of dynamical information about micro-object motion.

  8. An Approach to Extract Moving Objects from Mls Data Using a Volumetric Background Representation

    NASA Astrophysics Data System (ADS)

    Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.

    2017-05-01

    Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
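
    The volumetric separation can be approximated with a voxel occupancy count over repeated passes, labeling points that fall in rarely occupied voxels as mobile. The snippet below uses a dictionary-backed voxel grid in plain NumPy instead of the probabilistic octree used in the paper, and the resolution and ratio values are arbitrary.

    ```python
    import numpy as np
    from collections import defaultdict

    def label_mobile_points(scans, voxel_size=0.2, static_ratio=0.5):
        """For the last scan, mark points lying in voxels occupied in fewer than static_ratio of scans."""
        counts = defaultdict(int)
        for scan in scans:                                    # scans: list of (N_i, 3) point arrays
            for key in {tuple(v) for v in np.floor(scan / voxel_size).astype(int)}:
                counts[key] += 1
        last_keys = np.floor(scans[-1] / voxel_size).astype(int)
        occupancy = np.array([counts[tuple(v)] / len(scans) for v in last_keys])
        return occupancy < static_ratio                       # True -> likely a moving object

    # Points seen in most passes (facades, road surface) are kept as static background,
    # while transient returns from cars and pedestrians fall below the ratio and are flagged.
    ```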

  9. Hybrid foraging search: Searching for multiple instances of multiple types of target.

    PubMed

    Wolfe, Jeremy M; Aizenman, Avigael M; Boettcher, Sage E P; Cain, Matthew S

    2016-02-01

    This paper introduces the "hybrid foraging" paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8-64 target objects in memory. They viewed displays of 60-105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate searching again for the same item is more efficient than searching for any other targets, held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25-33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Hybrid foraging search: Searching for multiple instances of multiple types of target

    PubMed Central

    Wolfe, Jeremy M.; Aizenman, Avigael M.; Boettcher, Sage E.P.; Cain, Matthew S.

    2016-01-01

    This paper introduces the “hybrid foraging” paradigm. In typical visual search tasks, observers search for one instance of one target among distractors. In hybrid search, observers search through visual displays for one instance of any of several types of target held in memory. In foraging search, observers collect multiple instances of a single target type from visual displays. Combining these paradigms, in hybrid foraging tasks observers search visual displays for multiple instances of any of several types of target (as might be the case in searching the kitchen for dinner ingredients or an X-ray for different pathologies). In the present experiment, observers held 8–64 target objects in memory. They viewed displays of 60–105 randomly moving photographs of objects and used the computer mouse to collect multiple targets before choosing to move to the next display. Rather than selecting at random among available targets, observers tended to collect items in runs of one target type. Reaction time (RT) data indicate searching again for the same item is more efficient than searching for any other targets held in memory. Observers were trying to maximize collection rate. As a result, and consistent with optimal foraging theory, they tended to leave 25–33% of targets uncollected when moving to the next screen/patch. The pattern of RTs shows that while observers were collecting a target item, they had already begun searching memory and the visual display for additional targets, making the hybrid foraging task a useful way to investigate the interaction of visual and memory search. PMID:26731644

  11. Attentional enhancement during multiple-object tracking.

    PubMed

    Drew, Trafton; McCollough, Andrew W; Horowitz, Todd S; Vogel, Edward K

    2009-04-01

    What is the role of attention in multiple-object tracking? Does attention enhance target representations, suppress distractor representations, or both? It is difficult to ask this question in a purely behavioral paradigm without altering the very attentional allocation one is trying to measure. In the present study, we used event-related potentials to examine the early visual evoked responses to task-irrelevant probes without requiring an additional detection task. Subjects tracked two targets among four moving distractors and four stationary distractors. Brief probes were flashed on targets, moving distractors, stationary distractors, or empty space. We obtained a significant enhancement of the visually evoked P1 and N1 components (approximately 100-150 msec) for probes on targets, relative to distractors. Furthermore, good trackers showed larger differences between target and distractor probes than did poor trackers. These results provide evidence of early attentional enhancement of tracked target items and also provide a novel approach to measuring attentional allocation during tracking.

  12. 3D Backscatter Imaging System

    NASA Technical Reports Server (NTRS)

    Whitaker, Ross (Inventor); Turner, D. Clark (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  13. Finding Kuiper Belt Objects Below the Detection Limit

    NASA Astrophysics Data System (ADS)

    Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew

    2018-01-01

    We demonstrate a novel approach for uncovering the signatures of moving objects (e.g. Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we will employ a matched filter moving at specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however, it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we will instead integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful, and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitude over CPU implementations.
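
    A minimal sketch of accumulating per-epoch detection evidence along a candidate trajectory, assuming a stack of per-epoch likelihood (or SNR) images, the epoch times, and a constant pixel velocity for the trial orbit; all names and the straight-line motion model are illustrative simplifications.

        import numpy as np

        def trajectory_score(likelihoods, times, x0, y0, vx, vy):
            """Sum per-epoch likelihood values along a straight trial trajectory
            starting at (x0, y0) with pixel velocity (vx, vy)."""
            score = 0.0
            h, w = likelihoods[0].shape
            for img, t in zip(likelihoods, times):
                x = int(round(x0 + vx * (t - times[0])))
                y = int(round(y0 + vy * (t - times[0])))
                if 0 <= x < w and 0 <= y < h:
                    score += img[y, x]
            return score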

  14. Visual attention is required for multiple object tracking.

    PubMed

    Tran, Annie; Hoffman, James E

    2016-12-01

    In the multiple object tracking task, participants attempt to keep track of a moving set of target objects embedded in an identical set of moving distractors. Depending on several display parameters, observers are usually only able to accurately track 3 to 4 objects. Various proposals attribute this limit to a fixed number of discrete indexes (Pylyshyn, 1989), limits in visual attention (Cavanagh & Alvarez, 2005), or "architectural limits" in visual cortical areas (Franconeri, 2013). The present set of experiments examined the specific role of visual attention in tracking using a dual-task methodology in which participants tracked objects while identifying letter probes appearing on the tracked objects and distractors. As predicted by the visual attention model, probe identification was faster and/or more accurate when probes appeared on tracked objects. This was the case even when probes were more than twice as likely to appear on distractors suggesting that some minimum amount of attention is required to maintain accurate tracking performance. When the need to protect tracking accuracy was relaxed, participants were able to allocate more attention to distractors when probes were likely to appear there but only at the expense of large reductions in tracking accuracy. A final experiment showed that people attend to tracked objects even when letters appearing on them are task-irrelevant, suggesting that allocation of attention to tracked objects is an obligatory process. These results support the claim that visual attention is required for tracking objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Controlling the motion of multiple objects on a Chladni plate

    NASA Astrophysics Data System (ADS)

    Zhou, Quan; Sariola, Veikko; Latifi, Kourosh; Liimatainen, Ville

    2016-09-01

    The origin of the idea of moving objects by acoustic vibration can be traced back to 1787, when Ernst Chladni reported the first detailed studies on the aggregation of sand onto nodal lines of a vibrating plate. Since then and to this date, the prevailing view has been that the particle motion out of nodal lines is random, implying uncontrollability. But how random really is the out-of-nodal-lines motion on a Chladni plate? Here we show that the motion is sufficiently regular to be statistically modelled, predicted and controlled. By playing carefully selected musical notes, we can control the position of multiple objects simultaneously and independently using a single acoustic actuator. Our method allows independent trajectory following, pattern transformation and sorting of multiple miniature objects in a wide range of materials, including electronic components, water droplets loaded on solid carriers, plant seeds, candy balls and metal parts.

  16. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures allow the contribution of noisy, erroneous or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. The approach runs in real time and its results are competitive with other tracking algorithms, with minimal (or null) reconfiguration effort between different videos.
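
    One simple reading of reliability-weighted fusion, written as a weighted combination of redundant attribute estimates (for example a noisy 2D width and a 3D model width); the structure and the numbers are illustrative only, not the paper's formulation.

        def fuse_attribute(estimates):
            """estimates: list of (value, reliability) pairs, reliability in [0, 1].
            Returns the reliability-weighted mean, or None if nothing is reliable."""
            total = sum(r for _, r in estimates)
            if total == 0.0:
                return None
            return sum(v * r for v, r in estimates) / total

        # e.g. fuse a noisy 2D width (low reliability) with a 3D model width
        width = fuse_attribute([(1.9, 0.3), (1.7, 0.8)])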

  17. Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control.

    PubMed

    Dai, Yanyan; Kim, YoonGu; Wee, SungGil; Lee, DongHa; Lee, SukGyu

    2016-01-01

    In this paper, the problem of object caging and transporting is considered for multiple mobile robots. With the aim of minimizing the number of robots and decreasing the rotation of the object, the proper points are calculated and assigned to the multiple mobile robots to allow them to form a symmetric caging formation. The caging formation guarantees that all of the Euclidean distances between any two adjacent robots are smaller than the minimal width of the polygonal object so that the object cannot escape. In order to avoid collision among robots, the robot radius parameter is utilized to design the caging formation, and the A* algorithm is used so that mobile robots can move to the proper points. In order to avoid obstacles, the robots and the object are regarded as a single rigid body so that the artificial potential field method can be applied. The fuzzy sliding mode control method is applied for tracking control of the nonholonomic mobile robots. Finally, the simulation and experimental results show that multiple mobile robots are able to cage and transport the polygonal object to the goal position, avoiding obstacles. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
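
    A sketch of the geometric caging test described above: with the robots placed in order around the object, the object cannot slip out if every gap between adjacent robots, reduced by two robot radii, is smaller than the object's minimal width. The variable names and the way the robot radius enters are assumptions made for illustration.

        import numpy as np

        def caging_holds(robot_positions, robot_radius, min_object_width):
            """robot_positions: (N, 2) array ordered around the polygonal object.
            Returns True if no gap between adjacent robots is wide enough for
            the object to escape."""
            pts = np.asarray(robot_positions, dtype=float)
            nxt = np.roll(pts, -1, axis=0)                       # adjacent robot pairs
            gaps = np.linalg.norm(nxt - pts, axis=1) - 2.0 * robot_radius
            return bool(np.all(gaps < min_object_width))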

  18. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
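
    For reference, the Gauss-Newton step alluded to above has the generic form sketched below; this is a textbook iteration over an arbitrary residual and Jacobian, not the FlyCap objective or its derivatives.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, iters=20, tol=1e-9):
            """Minimize 0.5 * ||residual(x)||^2 by repeated linearization."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                r = residual(x)
                J = jacobian(x)
                # Solve the linearized least-squares problem J dx ~= -r
                dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x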

  19. A parametric LQ approach to multiobjective control system design

    NASA Technical Reports Server (NTRS)

    Kyr, Douglas E.; Buchner, Marc

    1988-01-01

    The synthesis of a constant parameter output feedback control law of constrained structure is set in a multiple objective linear quadratic regulator (MOLQR) framework. The use of intuitive objective functions, such as model-following ability and closed-loop trajectory sensitivity, allows multiple-objective decision-making techniques, such as the surrogate worth tradeoff method, to be applied. For the continuous-time deterministic problem with an infinite time horizon, dynamic compensators as well as static output feedback controllers can be synthesized using a descent Anderson-Moore algorithm modified to impose linear equality constraints on the feedback gains by moving in feasible directions. Results of three different examples are presented, including a unique reformulation of the sensitivity reduction problem.

  20. Fan filters, the 3-D Radon transform, and image sequence analysis.

    PubMed

    Marzetta, T L

    1994-01-01

    This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
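
    The slowness constraint stated above can be written compactly; this is a restatement with notation chosen here rather than taken verbatim from the paper. A point object moving with velocity \mathbf{v} contributes only plane-wave components whose slowness \mathbf{p} satisfies

        \mathbf{p} \cdot \mathbf{v} = p_x v_x + p_y v_y = 1,

    and fan filtering multiplies each plane-wave component by a slowness response H(\mathbf{p}) in the Radon-transform domain before inverting the transform,

        \tilde{g}(\mathbf{p}, \tau) = H(\mathbf{p})\, \tilde{f}(\mathbf{p}, \tau),

    where \tilde{f}(\mathbf{p}, \tau) denotes the 3-D Radon transform of the image sequence. Since |\mathbf{p}|\,|\mathbf{v}| \ge \mathbf{p} \cdot \mathbf{v} = 1, every object with speed |\mathbf{v}| < v_{\mathrm{cut}} has all of its components at |\mathbf{p}| > 1/v_{\mathrm{cut}}; a response that vanishes there rejects such objects entirely while only partially attenuating faster ones.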

  1. Action Type and Goal Type Modulate Goal-Directed Gaze Shifts in 14-Month-Old Infants

    ERIC Educational Resources Information Center

    Gredeback, Gustaf; Stasiewicz, Dorota; Falck-Ytter, Terje; von Hofsten, Claes; Rosander, Kerstin

    2009-01-01

    Ten- and 14-month-old infants' gaze was recorded as the infants observed videos of different hand actions directed toward multiple goals. Infants observed an actor who (a) reached for objects and displaced them, (b) reached for objects and placed them inside containers, or (c) moved his fisted hand. Fourteen-month-olds, but not 10-month-olds,…

  2. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm

    PubMed Central

    Tombu, Michael

    2014-01-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target–distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking—one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone. PMID:21264704

  3. Tracking planets and moons: mechanisms of object tracking revealed with a new paradigm.

    PubMed

    Tombu, Michael; Seiffert, Adriane E

    2011-04-01

    People can attend to and track multiple moving objects over time. Cognitive theories of this ability emphasize location information and differ on the importance of motion information. Results from several experiments have shown that increasing object speed impairs performance, although speed was confounded with other properties such as proximity of objects to one another. Here, we introduce a new paradigm to study multiple object tracking in which object speed and object proximity were manipulated independently. Like the motion of a planet and moon, each target-distractor pair rotated about both a common local point as well as the center of the screen. Tracking performance was strongly affected by object speed even when proximity was controlled. Additional results suggest that two different mechanisms are used in object tracking--one sensitive to speed and proximity and the other sensitive to the number of distractors. These observations support models of object tracking that include information about object motion and reject models that use location alone.

  4. A Class of CFAR Detectors Implemented in the SAR-GMTI Processor gmtipro2: Mathematical Formulation of the Algorithms

    DTIC Science & Technology

    2015-02-01


  5. Influence of local objects on hippocampal representations: landmark vectors and memory

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2013-01-01

    The hippocampus is thought to represent nonspatial information in the context of spatial information. An animal can derive both spatial information as well as nonspatial information from the objects (landmarks) it encounters as it moves around in an environment. Here, we demonstrate correlates of both object-derived spatial as well as nonspatial information in the hippocampus of rats foraging in the presence of objects. We describe a new form of CA1 place cells, called landmark-vector cells, that encode spatial locations as a vector relationship to local landmarks. Such landmark vector relationships can be dynamically encoded. Of the 26 CA1 neurons that developed new fields in the course of a day’s recording sessions, in 8 cases the new fields were located at a similar distance and direction from a landmark as the initial field was located relative to a different landmark. We also demonstrate object-location memory in the hippocampus. When objects were removed from an environment or moved to new locations, a small number of neurons in CA1 and CA3 increased firing at the locations where the objects used to be. In some neurons, this increase occurred only in one location, indicating object+place conjunctive memory; in other neurons the increase in firing was seen at multiple locations where an object used to be. Taken together, these results demonstrate that the spatially restricted firing of hippocampal neurons encodes multiple types of information regarding the relationship between an animal’s location and the location of objects in its environment. PMID:23447419

  6. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    PubMed

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge--a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed major influences on the effectiveness of the magic trick of object height and object area, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.

  7. Estimating vehicle height using homographic projections

    DOEpatents

    Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter

    2013-07-16

    Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
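
    A compact sketch of the selection step, assuming a list of candidate 3x3 homographies (one per hypothesized height), the corresponding heights, and salient-point tracks across the video frames; the height whose homography makes the inter-point distances most consistent over time is returned. The data layout and function names are hypothetical.

        import numpy as np

        def apply_h(H, pts):
            """Apply a 3x3 homography to an (N, 2) array of pixel points."""
            homog = np.hstack([pts, np.ones((len(pts), 1))])
            mapped = homog @ H.T
            return mapped[:, :2] / mapped[:, 2:3]

        def best_height(homographies, heights, tracks):
            """tracks: list over frames of (N, 2) salient points in fixed order.
            Returns the height whose homography minimizes the variance of the
            pairwise inter-point distances across frames."""
            scores = []
            for H in homographies:
                per_frame = []
                for pts in tracks:
                    p = apply_h(H, np.asarray(pts, dtype=float))
                    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
                    per_frame.append(d[np.triu_indices(len(p), k=1)])
                scores.append(np.var(np.stack(per_frame), axis=0).sum())
            return heights[int(np.argmin(scores))]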

  8. Examining Multiple Parenting Behaviors on Young Children's Dietary Fat Consumption

    ERIC Educational Resources Information Center

    Eisenberg, Christina M.; Ayala, Guadalupe X.; Crespo, Noe C.; Lopez, Nanette V.; Zive, Michelle Murphy; Corder, Kirsten; Wood, Christine; Elder, John P.

    2012-01-01

    Objective: To understand the association between parenting and children's dietary fat consumption, this study tested a comprehensive model of parenting that included parent household rules, parent modeling of rules, parent mediated behaviors, and parent support. Design: Cross-sectional. Setting: Baseline data from the "MOVE/me Muevo"…

  9. Multiple wavelength interferometry for distance measurements of moving objects with nanometer uncertainty

    NASA Astrophysics Data System (ADS)

    Kuschmierz, R.; Czarske, J.; Fischer, A.

    2014-08-01

    Optical measurement techniques offer great opportunities in diverse applications, such as lathe monitoring and microfluidics. Doppler-based interferometric techniques enable simultaneous measurement of the lateral velocity and axial distance of a moving object. However, there is a complementarity between the unambiguous axial measurement range and the uncertainty of the distance. Therefore, we present an extended sensor setup, which provides an unambiguous axial measurement range of 1 mm while achieving uncertainties below 100 nm. Measurements at a calibration system are performed. When using a pinhole for emulating a single scattering particle, the tumbling motion of the rotating object is resolved with a distance uncertainty of 50 nm. For measurements at the rough surface, the distance uncertainty amounts to 280 nm due to a lower signal-to-noise ratio. Both experimental results are close to the respective Cramér-Rao bound, which is derived analytically for both surface and single particle measurements.

  10. Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR

    NASA Astrophysics Data System (ADS)

    Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.

    2016-05-01

    The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.

  11. Single and multiple object tracking using log-euclidean Riemannian subspace and block-division appearance model.

    PubMed

    Hu, Weiming; Li, Xi; Luo, Wenhan; Zhang, Xiaoqin; Maybank, Stephen; Zhang, Zhongfei

    2012-12-01

    Object appearance modeling is crucial for tracking objects, especially in videos captured by nonstationary cameras and for reasoning about occlusions between multiple moving objects. Based on the log-euclidean Riemannian metric on symmetric positive definite matrices, we propose an incremental log-euclidean Riemannian subspace learning algorithm in which covariance matrices of image features are mapped into a vector space with the log-euclidean Riemannian metric. Based on the subspace learning algorithm, we develop a log-euclidean block-division appearance model which captures both the global and local spatial layout information about object appearances. Single object tracking and multi-object tracking with occlusion reasoning are then achieved by particle filtering-based Bayesian state inference. During tracking, incremental updating of the log-euclidean block-division appearance model captures changes in object appearance. For multi-object tracking, the appearance models of the objects can be updated even in the presence of occlusions. Experimental results demonstrate that the proposed tracking algorithm obtains more accurate results than six state-of-the-art tracking algorithms.
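
    A small sketch of the log-euclidean mapping itself, assuming symmetric positive definite covariance descriptors; SciPy's matrix logarithm is used here for brevity rather than the incremental subspace learning developed in the paper.

        import numpy as np
        from scipy.linalg import logm

        def log_euclidean_vector(cov):
            """Map an SPD covariance matrix into the log-euclidean vector space
            by taking its matrix logarithm and flattening the upper triangle."""
            L = logm(cov).real
            i, j = np.triu_indices(L.shape[0])
            # Off-diagonal entries count twice in the Frobenius norm
            w = np.where(i == j, 1.0, np.sqrt(2.0))
            return w * L[i, j]

        def log_euclidean_distance(cov_a, cov_b):
            """Distance between two SPD matrices under the log-euclidean metric."""
            return np.linalg.norm(log_euclidean_vector(cov_a) - log_euclidean_vector(cov_b))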

  12. To Pass or Not to Pass: Modeling the Movement and Affordance Dynamics of a Pick and Place Task

    PubMed Central

    Lamb, Maurice; Kallen, Rachel W.; Harrison, Steven J.; Di Bernardo, Mario; Minai, Ali; Richardson, Michael J.

    2017-01-01

    Humans commonly engage in tasks that require or are made more efficient by coordinating with other humans. In this paper we introduce a task dynamics approach for modeling multi-agent interaction and decision making in a pick and place task where an agent must move an object from one location to another and decide whether to act alone or with a partner. Our aims were to identify and model (1) the affordance related dynamics that define an actor's choice to move an object alone or to pass it to their co-actor and (2) the trajectory dynamics of an actor's hand movements when moving to grasp, relocate, or pass the object. Using a virtual reality pick and place task, we demonstrate that both the decision to pass or not pass an object and the movement trajectories of the participants can be characterized in terms of a behavioral dynamics model. Simulations suggest that the proposed behavioral dynamics model exhibits features observed in human participants including hysteresis in decision making, non-straight line trajectories, and non-constant velocity profiles. The proposed model highlights how the same low-dimensional behavioral dynamics can operate to constrain multiple (and often nested) levels of human activity and suggests that knowledge of what, when, where and how to move or act during pick and place behavior may be defined by these low dimensional task dynamics and, thus, can emerge spontaneously and in real-time with little a priori planning. PMID:28701975

  13. Visual discrimination in the pigeon (Columba livia): effects of selective lesions of the nucleus rotundus

    NASA Technical Reports Server (NTRS)

    Laverghetta, A. V.; Shimizu, T.

    1999-01-01

    The nucleus rotundus is a large thalamic nucleus in birds and plays a critical role in many visual discrimination tasks. In order to test the hypothesis that there are functionally distinct subdivisions in the nucleus rotundus, effects of selective lesions of the nucleus were studied in pigeons. The birds were trained to discriminate between different types of stationary objects and between different directions of moving objects. Multiple regression analyses revealed that lesions in the anterior, but not posterior, division caused deficits in discrimination of small stationary stimuli. Lesions in neither the anterior nor posterior divisions predicted effects in discrimination of moving stimuli. These results are consistent with a prediction led from the hypothesis that the nucleus is composed of functional subdivisions.

  14. Students’ understanding of forces: Force diagrams on horizontal and inclined plane

    NASA Astrophysics Data System (ADS)

    Sirait, J.; Hamdani; Mursyid, S.

    2018-03-01

    This study aims to analyse students’ difficulties in understanding force diagrams on horizontal surfaces and inclined planes. Physics education students (pre-service physics teachers) of Tanjungpura University, who had completed a Basic Physics course, took a force concept test with six questions covering three concepts: an object at rest, an object moving at constant speed, and an object moving at constant acceleration, both on a horizontal surface and on an inclined plane. The test is in a multiple-choice format. It examines the ability of students to select appropriate force diagrams depending on the context. The results show that 44% of students have difficulties in solving the test (these students could only solve one or two of the six items). About 50% of students faced difficulties finding the correct diagram of an object when it has constant speed and acceleration in both contexts. In general, students could only correctly identify 48% of the force diagrams on the test. The most difficult task for the students was identifying the force diagram representing the forces exerted on an object on an inclined plane.

  15. Robust multiple cue fusion-based high-speed and nonrigid object tracking algorithm for short track speed skating

    NASA Astrophysics Data System (ADS)

    Liu, Chenguang; Cheng, Heng-Da; Zhang, Yingtao; Wang, Yuxuan; Xian, Min

    2016-01-01

    This paper presents a methodology for tracking multiple skaters in short track speed skating competitions. Nonrigid skaters move at high speed, with severe occlusions frequently occurring among them. The camera is panned quickly in order to capture the skaters in a large and dynamic scene. Automatically tracking the skaters and precisely outputting their trajectories is therefore a challenging object tracking task. We employ the global rink information to compensate for camera motion and obtain the global spatial information of skaters, utilize a random forest to fuse multiple cues and predict the blob of each skater, and finally apply a silhouette- and edge-based template-matching and blob-evolving method to label pixels as belonging to each skater. The effectiveness and robustness of the proposed method are verified through thorough experiments.

  16. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed across the globe for moving object detection, but it is very difficult to find one which can work globally in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods which can be implemented in software on a desktop or laptop for real-time object detection. There are several moving object detection methods noted in the literature, but few of them are suitable for real-time moving object detection. Most of the methods which provide for real-time operation are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on evaluation of these four moving object detection methods using two (2) different sets of cameras and two (2) different scenes. The moving object detection methods have been implemented using MATLAB and results are compared based on completeness of detected objects, noise, light change sensitivity, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
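
    For orientation, a minimal OpenCV comparison of the two simplest families mentioned above (frame differencing and Gaussian mixture background subtraction) might look like the sketch below; the video path, thresholds and parameters are placeholders, not values from the paper.

        import cv2

        cap = cv2.VideoCapture("traffic.avi")   # placeholder input video
        mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Method 1: simple frame differencing against the previous frame
            diff_mask = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                                      cv2.THRESH_BINARY)[1]
            # Method 2: Gaussian mixture model background subtraction
            gmm_mask = mog2.apply(frame)
            prev_gray = gray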

  17. Brain activation in response to randomized visual stimulation as obtained from conjunction and differential analysis: an fMRI study

    NASA Astrophysics Data System (ADS)

    Nasaruddin, N. H.; Yusoff, A. N.; Kaur, S.

    2014-11-01

    The objective of this multiple-subjects functional magnetic resonance imaging (fMRI) study was to identify the common brain areas that are activated when viewing black-and-white checkerboard pattern stimuli of various shapes, pattern and size and to investigate specific brain areas that are involved in processing static and moving visual stimuli. Sixteen participants viewed the moving (expanding ring, rotating wedge, flipping hour glass and bowtie and arc quadrant) and static (full checkerboard) stimuli during an fMRI scan. All stimuli have black-and-white checkerboard pattern. Statistical parametric mapping (SPM) was used in generating brain activation. Differential analyses were implemented to separately search for areas involved in processing static and moving stimuli. In general, the stimuli of various shapes, pattern and size activated multiple brain areas mostly in the left hemisphere. The activation in the right middle temporal gyrus (MTG) was found to be significantly higher in processing moving visual stimuli as compared to static stimulus. In contrast, the activation in the left calcarine sulcus and left lingual gyrus were significantly higher for static stimulus as compared to moving stimuli. Visual stimulation of various shapes, pattern and size used in this study indicated left lateralization of activation. The involvement of the right MTG in processing moving visual information was evident from differential analysis, while the left calcarine sulcus and left lingual gyrus are the areas that are involved in the processing of static visual stimulus.

  18. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper extends the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving objects indexing algorithm is proposed to efficiently process indexing requests and to adapt to the limitations of the location based service environment. The prominent feature of this structure is that it views a moving object's behavior as the mobile agent's span; a unique mapping between the geographical position of each moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping is a significant clue that allows the mobile agent-based index to track moving objects.

  19. Moving Particles Through a Finite Element Mesh

    PubMed Central

    Peskin, Adele P.; Hardin, Gary R.

    1998-01-01

    We present a new numerical technique for modeling the flow around multiple objects moving in a fluid. The method tracks the dynamic interaction between each particle and the fluid. The movements of the fluid and the object are directly coupled. A background mesh is designed to fit the geometry of the overall domain. The mesh is designed independently of the presence of the particles except in terms of how fine it must be to track particles of a given size. Each particle is represented by a geometric figure that describes its boundary. This figure overlies the mesh. Nodes are added to the mesh where the particle boundaries intersect the background mesh, increasing the number of nodes contained in each element whose boundary is intersected. These additional nodes are then used to describe and track the particle in the numerical scheme. Appropriate element shape functions are defined to approximate the solution on the elements with extra nodes. The particles are moved through the mesh by moving only the overlying nodes defining the particles. The regular finite element grid remains unchanged. In this method, the mesh does not distort as the particles move. Instead, only the placement of particle-defining nodes changes as the particles move. Element shape functions are updated as the nodes move through the elements. This method is especially suited for models of moderate numbers of moderate-size particles, where the details of the fluid-particle coupling are important. Both the complications of creating finite element meshes around appreciable numbers of particles, and extensive remeshing upon movement of the particles are simplified in this method. PMID:28009377

  20. A Real-Time Method to Estimate Speed of Object Based on Object Detection and Optical Flow Calculation

    NASA Astrophysics Data System (ADS)

    Liu, Kaizhan; Ye, Yunming; Li, Xutao; Li, Yan

    2018-04-01

    In recent years the Convolutional Neural Network (CNN) has been widely used in the computer vision field and has made great progress in areas such as object detection and classification. Combining CNNs, that is, making multiple CNN frameworks work synchronously and share their output information, can yield useful information that none of them can provide on its own. Here we introduce a method to estimate the speed of objects in real time by combining two CNNs: YOLOv2 and FlowNet. In every frame, YOLOv2 provides object size, object location and object type, while FlowNet provides the optical flow of the whole image. On one hand, object size and object location help to select the object's part of the optical flow image, so that the average optical flow of every object can be calculated. On the other hand, object type and object size help to establish the relationship between optical flow and true speed by means of optics theory and prior knowledge. Therefore, with these two key pieces of information, the speed of an object can be estimated. This method manages to estimate multiple objects at real-time speed using only a normal camera, even when the camera is moving, and its error is acceptable in most application fields such as autonomous driving or robot vision.
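
    A rough sketch of the fusion step described above: average the optical flow inside each detected bounding box and convert pixels per frame to metres per second using a per-class size prior. The class-size table, frame rate handling and data layout are assumptions for illustration, not the paper's values.

        import numpy as np

        TYPICAL_WIDTH_M = {"car": 1.8, "person": 0.5}   # illustrative size priors

        def object_speed(flow, box, cls, fps):
            """flow: (H, W, 2) optical flow in pixels/frame; box: (x, y, w, h) ints."""
            x, y, w, h = box
            region = flow[y:y + h, x:x + w].reshape(-1, 2)
            mean_flow = region.mean(axis=0)                      # pixels per frame
            metres_per_pixel = TYPICAL_WIDTH_M[cls] / float(w)   # scale from size prior
            return np.linalg.norm(mean_flow) * metres_per_pixel * fps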

  1. A CFD study of complex missile and store configurations in relative motion

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    An investigation was conducted from May 16, 1990 to August 31, 1994 on the development of computational fluid dynamics (CFD) methodologies for complex missiles and the store separation problem. These flowfields involved multiple-component configurations, where at least one of the objects was engaged in relative motion. The two most important issues that had to be addressed were: (1) the unsteadiness of the flowfields (time-accurate and efficient CFD algorithms for the unsteady equations), and (2) the generation of grid systems which would permit multiple and moving bodies in the computational domain (dynamic domain decomposition). The study produced two competing and promising methodologies, and their proof-of-concept cases, which have been reported in the open literature: (1) Unsteady solutions on dynamic, overlapped grids, which may also be perceived as moving, locally-structured grids, and (2) Unsteady solutions on dynamic, unstructured grids.

  2. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, in the complex scenes of the real world, false detections, missed detections and deficiencies resulting from cavities inside the detected body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame difference and a method from the literature), the proposed method improves the efficiency and accuracy of the detection.
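
    One plausible way to combine the two cues in the spirit of the abstract (not the authors' exact pipeline): take the union of the frame-difference mask and the Gaussian-mixture foreground mask, then apply morphological closing to fill cavities inside the object.

        import cv2
        import numpy as np

        def combined_mask(frame, prev_gray, mog2, kernel_size=7):
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            diff = cv2.threshold(cv2.absdiff(gray, prev_gray), 20, 255,
                                 cv2.THRESH_BINARY)[1]
            fg = mog2.apply(frame)
            mask = cv2.bitwise_or(diff, fg)                     # union of the two cues
            kernel = np.ones((kernel_size, kernel_size), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes
            return mask, gray

        # mog2 would be created once, e.g. cv2.createBackgroundSubtractorMOG2()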

  3. Transfer of Learning between Hemifields in Multiple Object Tracking: Memory Reduces Constraints of Attention

    PubMed Central

    Lapierre, Mark; Howe, Piers D. L.; Cropper, Simon J.

    2013-01-01

    Many tasks involve tracking multiple moving objects, or stimuli. Some require that individuals adapt to changing or unfamiliar conditions to be able to track well. This study explores processes involved in such adaptation through an investigation of the interaction of attention and memory during tracking. Previous research has shown that during tracking, attention operates independently to some degree in the left and right visual hemifields, due to putative anatomical constraints. It has been suggested that the degree of independence is related to the relative dominance of processes of attention versus processes of memory. Here we show that when individuals are trained to track a unique pattern of movement in one hemifield, that learning can be transferred to the opposite hemifield, without any evidence of hemifield independence. However, learning is not influenced by an explicit strategy of memorisation of brief periods of recognisable movement. The findings lend support to a role for implicit memory in overcoming putative anatomical constraints on the dynamic, distributed spatial allocation of attention involved in tracking multiple objects. PMID:24349555

  4. Tracker Toolkit

    NASA Technical Reports Server (NTRS)

    Lewis, Steven J.; Palacios, David M.

    2013-01-01

    This software can track multiple moving objects within a video stream simultaneously, use visual features to aid in the tracking, and initiate tracks based on object detection in a subregion. A simple programmatic interface allows plugging into larger image chain modeling suites. It extracts unique visual features for aid in tracking and later analysis, and includes sub-functionality for extracting visual features about an object identified within an image frame. Tracker Toolkit utilizes a feature extraction algorithm to tag each object with metadata features about its size, shape, color, and movement. Its functionality is independent of the scale of objects within a scene. The only assumption made on the tracked objects is that they move. There are no constraints on size within the scene, shape, or type of movement. The Tracker Toolkit is also capable of following an arbitrary number of objects in the same scene, identifying and propagating the track of each object from frame to frame. Target objects may be specified for tracking beforehand, or may be dynamically discovered within a tripwire region. Initialization of the Tracker Toolkit algorithm includes two steps: Initializing the data structures for tracked target objects, including targets preselected for tracking; and initializing the tripwire region. If no tripwire region is desired, this step is skipped. The tripwire region is an area within the frames that is always checked for new objects, and all new objects discovered within the region will be tracked until lost (by leaving the frame, stopping, or blending in to the background).

  5. Position Affects Performance in Multiple-Object Tracking in Rugby Union Players

    PubMed Central

    Martín, Andrés; Sfer, Ana M.; D'Urso Villar, Marcela A.; Barraza, José F.

    2017-01-01

    We report an experiment that examines the performance of rugby union players and a control group composed of graduate students with no sport experience in a multiple-object tracking task. It compares the ability of 86 high-level rugby union players, grouped as Backs and Forwards, and the control group to track a subset of randomly moving targets amongst the same number of distractors. Several difficulties were included in the experimental design in order to evaluate possible interactions between the relevant variables. Results show that the performance of the Backs is better than that of the other groups, but the occurrence of interactions precludes an isolated groups analysis. We interpret the results within the framework of visual attention and discuss both the implications of our results and the practical consequences. PMID:28951725

  6. Short memory fuzzy fusion image recognition schema employing spatial and Fourier descriptors

    NASA Astrophysics Data System (ADS)

    Raptis, Sotiris N.; Tzafestas, Spyros G.

    2001-03-01

    Single images quite often do not carry enough information for precise interpretation, for a variety of reasons. Multiple image fusion and adequate integration have recently become the state of the art in the pattern recognition field. In the paper presented here, an enhanced multiple-observation schema is discussed, investigating improvements to the baseline fuzzy-probabilistic image fusion methodology. The first innovation introduced consists in considering only a limited but seemingly more effective part of the uncertainty information obtained up to a certain time, restricting older uncertainty dependencies and alleviating the computational burden, which is now needed only for a short sequence of samples stored in memory. The second innovation essentially consists in grouping observations into feature-blind object hypotheses. The experimental setting includes a sequence of independent views obtained by a camera being moved around the investigated object.

  7. Neural basis for dynamic updating of object representation in visual working memory.

    PubMed

    Takahama, Sachiko; Miyauchi, Satoru; Saiki, Jun

    2010-02-15

    In the real world, objects have multiple features and change dynamically. Thus, object representations must satisfy dynamic updating and feature binding. Previous studies have investigated the neural activity of dynamic updating or feature binding alone, but not both simultaneously. We investigated the neural basis of feature-bound object representation in a dynamically updating situation by conducting a multiple object permanence tracking task, which required observers to simultaneously process both the maintenance and dynamic updating of feature-bound objects. Using an event-related design, we separated activities during memory maintenance and change detection. In the search for regions showing selective activation in dynamic updating of feature-bound objects, we identified a network during memory maintenance comprising the inferior precentral sulcus, superior parietal lobule, and middle frontal gyrus. In the change detection period, various prefrontal regions, including the anterior prefrontal cortex, were activated. In updating object representation of dynamically moving objects, the inferior precentral sulcus closely cooperates with a so-called "frontoparietal network", and subregions of the frontoparietal network can be decomposed into those sensitive to spatial updating and feature binding. The anterior prefrontal cortex identifies changes in object representation by comparing memory and perceptual representations rather than maintaining object representations per se, as previously suggested. Copyright 2009 Elsevier Inc. All rights reserved.

  8. Putting Essential Understanding of Ratios and Proportions into Practice in Grades 6-8

    ERIC Educational Resources Information Center

    Olson, Travis A.; Olson, Melfried; Slovin, Hannah

    2015-01-01

    Do your students think they can model ratios with sets of discrete objects and combine them to show the addition of ratios? Do they believe that equivalent ratios are based on additive relationships rather than multiplicative ones? What tasks can you offer, and what questions can you ask, to determine what your students know or don't know and move them…

  9. Tracking moving identities: after attending the right location, the identity does not come for free.

    PubMed

    Pinto, Yaïr; Scholte, H Steven; Lamme, V A F

    2012-01-01

    Although tracking identical moving objects has been studied since the 1980s, the study of tracking moving objects with distinct identities has only recently begun (referred to as Multiple Identity Tracking, MIT). So far, only behavioral studies into MIT have been undertaken. These studies have left a fundamental question regarding MIT unanswered: is MIT a one-stage or a two-stage process? According to the one-stage model, after a location has been attended, the identity is released without effort. However, according to the two-stage model, there are two effortful stages in MIT: attending to a location, and attending to the identity of the object at that location. In the current study we investigated this question by measuring brain activity in response to tracking familiar and unfamiliar targets. Familiarity is known to automate effortful processes, so if attention to identify the object is needed, this should become easier. However, if no such attention is needed, familiarity can only affect other processes (such as memory for the target set). Our results revealed that on unfamiliar trials neural activity was higher in both attentional and visual identification networks. These results suggest that familiarity in MIT automates attentional identification processes, thus suggesting that attentional identification is needed in MIT. This then would imply that MIT is essentially a two-stage process, since after attending the location, the identity does not seem to come for free.

  10. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
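
    A simplified version of the residual-flow idea, assuming calibrated intrinsics, a dense depth map, measured optical flow, and an egomotion estimate (R, t) for the frame pair; pixels whose measured flow disagrees with the flow predicted from egomotion and depth are flagged as independently moving. All names, conventions and the threshold are illustrative, not the system's actual pipeline.

        import numpy as np

        def moving_object_mask(depth, flow, K, R, t, thresh=2.0):
            """depth: (H, W) metres; flow: (H, W, 2) measured flow in pixels;
            K: 3x3 intrinsics; R, t: camera motion from frame 1 to frame 2."""
            H_, W_ = depth.shape
            u, v = np.meshgrid(np.arange(W_), np.arange(H_))
            pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
            rays = np.linalg.inv(K) @ pix                   # back-project pixels
            pts = rays * depth.reshape(1, -1)               # 3-D points in frame 1
            pts2 = R @ pts + t.reshape(3, 1)                # same points in frame 2
            proj = K @ pts2
            uv2 = (proj[:2] / proj[2:]).T.reshape(H_, W_, 2)
            predicted = uv2 - np.stack([u, v], axis=-1)     # flow explained by egomotion
            residual = np.linalg.norm(flow - predicted, axis=-1)
            return residual > thresh                        # True where independently moving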

  11. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  12. Real-time Detection of Moving Objects from Moving Vehicles Using Dense Stereo and Optical Flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time. dense stereo system to include realtime. dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop. computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.

  13. Radiation detector having a multiplicity of individual detecting elements

    DOEpatents

    Whetten, Nathan R.; Kelley, John E.

    1985-01-01

    A radiation detector has a plurality of detector collection element arrays immersed in a radiation-to-electron conversion medium. Each array contains a multiplicity of coplanar detector elements radially disposed with respect to one of a plurality of positions which at least one radiation source can assume. Each detector collector array is utilized only when a source is operative at the associated source position, negating the necessity for a multi-element detector to be moved with respect to an object to be examined. A novel housing provides the required containment of a high-pressure gas conversion medium.

  14. Multimodal control of sensors on multiple simulated unmanned vehicles.

    PubMed

    Baber, C; Morin, C; Parekh, M; Cahillane, M; Houghton, R J

    2011-09-01

    The use of multimodal (speech plus manual) control of the sensors on combinations of one, two, three or five simulated unmanned vehicles (UVs) is explored. Novice controllers of simulated UVs complete a series of target checking tasks. Two experiments compare speech and gamepad control for one, two, three or five UVs in a simulated environment. Increasing the number of UVs has an impact on subjective rating of workload (measured by NASA-Task Load Index), particularly when moving from one to three UVs. Objective measures of performance showed that the participants tended to issue fewer commands as the number of vehicles increased (when using the gamepad control), but, while performance with a single UV was superior to that of multiple UVs, there was little difference across two, three or five UVs. Participants with low spatial ability (measured by the Object Perspectives Test) showed an increase in time to respond to warnings when controlling five UVs. Combining speech with gamepad control of sensors on UVs leads to superior performance on a secondary (respond-to-warnings) task (implying a reduction in demand) and use of fewer commands on primary (move-sensors and classify-target) tasks (implying more efficient operation). STATEMENT OF RELEVANCE: Benefits of multimodal control for unmanned vehicles are demonstrated. When controlling sensors on multiple UVs, participants with low spatial orientation scores have problems. It is proposed that the findings of these studies have implications for the selection of UV operators and suggest that future UV workstations could benefit from multimodal control.

  15. Laser Prevention of Earth Impact Disasters

    NASA Technical Reports Server (NTRS)

    Campbell, J.; Smalley, L.; Boccio, D.; Howell, Joe T. (Technical Monitor)

    2002-01-01

    We now believe that while there are about 2000 earth orbit crossing rocks greater than 1 kilometer in diameter, there may be as many as 100,000 or more objects in the 100m size range. Can anything be done about this fundamental existence question facing us? The answer is a resounding yes! We have the technology to prevent collisions. By using an intelligent combination of Earth and space based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket incrementally changing the shape of the rock's orbit around the Sun. One-kilometer size rocks can be moved sufficiently in a month while smaller rocks may be moved in a shorter time span. We recommend that the World's space objectives be immediately reprioritized to start us moving quickly towards a multiple option defense capability. While lasers should be the primary approach, all mitigation options depend on robust early warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage in time to allow mitigation. Infrastructure options should include ground, LEO, GEO, Lunar, and libration point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.

  16. Multiple degree-of-freedom mechanical interface to a computer system

    DOEpatents

    Rosenberg, Louis B.

    2001-01-01

    A method and apparatus for providing high bandwidth and low noise mechanical input and output for computer systems. A gimbal mechanism provides two revolute degrees of freedom to an object about two axes of rotation. A linear axis member is coupled to the gimbal mechanism at the intersection of the two axes of rotation. The linear axis member is capable of being translated along a third axis to provide a third degree of freedom. The user object is coupled to the linear axis member and is thus translatable along the third axis so that the object can be moved along all three degrees of freedom. Transducers associated with the provided degrees of freedom include sensors and actuators and provide an electromechanical interface between the object and a digital processing system. Capstan drive mechanisms transmit forces between the transducers and the object. The linear axis member can also be rotated about its lengthwise axis to provide a fourth degree of freedom, and, optionally, a floating gimbal mechanism is coupled to the linear axis member to provide fifth and sixth degrees of freedom to an object. Transducer sensors are associated with the fourth, fifth, and sixth degrees of freedom. The interface is well suited for simulations of medical procedures and simulations in which an object such as a stylus or a joystick is moved and manipulated by the user.

  17. Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.

    PubMed

    Palmer, Stephen E; Langlois, Thomas A

    2017-07-01

    Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.

  18. Comparative study on collaborative interaction in non-immersive and immersive systems

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki

    2007-09-01

    This research studies Virtual Reality simulation for collaborative interaction so that different people from different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where an object's behavior is determined by the combination of the multiple inputs. Issues addressed in this research are: 1) The effects of using haptics on collaborative interaction, 2) The possibilities of collaboration between users from different environments. We conducted user tests on our system in several cases: 1) Comparison between non-haptic and haptic collaborative interaction over LAN, 2) Comparison between non-haptic and haptic collaborative interaction over the Internet, and 3) Analysis of collaborative interaction between non-immersive and immersive display environments. The case studies are the interaction of users in two cases: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe physics laws while constructing a dollhouse using existing building blocks, under gravity effects. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.

  19. An integrated framework for detecting suspicious behaviors in video surveillance

    NASA Astrophysics Data System (ADS)

    Zin, Thi Thi; Tin, Pyke; Hama, Hiromitsu; Toriu, Takashi

    2014-03-01

    In this paper, we propose an integrated framework for detecting suspicious behaviors in video surveillance systems established in public places such as railway stations, airports, and shopping malls. In particular, people loitering suspiciously, unattended objects left behind, and the exchange of suspicious objects between persons are common security concerns in airports and other transit scenarios. These involve understanding the scene/event, analyzing human movements, recognizing controllable objects, and observing the effect of human movement on those objects. In the proposed framework, a multiple-background modeling technique, a high-level motion feature extraction method, and embedded Markov chain models are integrated for detecting suspicious behaviors in real-time video surveillance systems. Specifically, the proposed framework employs a probability-based multiple-background modeling technique to detect moving objects. Then the velocity and distance measures are computed as the high-level motion features of interest. By using an integration of the computed features and the first passage time probabilities of the embedded Markov chain, the suspicious behaviors in video surveillance are analyzed for detecting loitering persons, objects left behind, and human interactions such as fighting. The proposed framework has been tested using standard public datasets and our own video surveillance scenarios.

  20. Robot training of upper limb in multiple sclerosis: comparing protocols with or without manipulative task components.

    PubMed

    Carpinella, Ilaria; Cattaneo, Davide; Bertoni, Rita; Ferrarin, Maurizio

    2012-05-01

    In this pilot study, we compared two protocols for robot-based rehabilitation of the upper limb in multiple sclerosis (MS): a protocol involving reaching tasks (RT) requiring arm transport only and a protocol requiring both reaching and manipulation of objects (RMT). Twenty-two MS subjects were assigned to the RT or RMT group. Both protocols consisted of eight sessions. During RT training, subjects moved the handle of a planar robotic manipulandum toward circular targets displayed on a screen. The RMT protocol required patients to reach and manipulate real objects, by moving the robotic arm equipped with a handle which left the hand free for distal tasks. In both trainings, the robot generated resistive and perturbing forces. Subjects were evaluated with clinical and instrumental tests. The results confirmed that MS patients maintained the ability to adapt to the robot-generated forces and that the rate of motor learning increased across sessions. Robot therapy significantly reduced arm tremor and improved arm kinematics and functional ability. Compared to RT, the RMT protocol induced a significantly larger improvement in movements involving grasp (improvement in Grasp ARAT sub-score: RMT 77.4%, RT 29.5%, p=0.035) but not precision grip. Future studies are needed to evaluate if longer trainings and the use of robotic handles would also significantly improve fine manipulation.

  1. An open source framework for tracking and state estimation ('Stone Soup')

    NASA Astrophysics Data System (ADS)

    Thomas, Paul A.; Barr, Jordi; Balaji, Bhashyam; White, Kruger

    2017-05-01

    The ability to detect and unambiguously follow all moving entities in a state-space is important in multiple domains both in defence (e.g. air surveillance, maritime situational awareness, ground moving target indication) and the civil sphere (e.g. astronomy, biology, epidemiology, dispersion modelling). However, tracking and state estimation researchers and practitioners have difficulties recreating state-of-the-art algorithms in order to benchmark their own work. Furthermore, system developers need to assess which algorithms meet operational requirements objectively and exhaustively rather than intuitively or driven by personal favourites. We have therefore commenced the development of a collaborative initiative to create an open source framework for production, demonstration and evaluation of Tracking and State Estimation algorithms. The initiative will develop a (MIT-licensed) software platform for researchers and practitioners to test, verify and benchmark a variety of multi-sensor and multi-object state estimation algorithms. The initiative is supported by four defence laboratories, who will contribute to the development effort for the framework. The tracking and state estimation community will derive significant benefits from this work, including: access to repositories of verified and validated tracking and state estimation algorithms, a framework for the evaluation of multiple algorithms, standardisation of interfaces and access to challenging data sets.
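
    As an illustration of the kind of state-estimation building block such a framework would host, the sketch below implements one predict/update cycle of a constant-velocity Kalman filter in plain NumPy. It is a generic textbook formulation, not the Stone Soup API, and the noise parameters are arbitrary assumptions.

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=0.1, r=1.0):
    """One predict/update cycle of a 2D constant-velocity Kalman filter.

    State x = [px, vx, py, vy]; measurement z = [px, py].
    q is the process-noise intensity, r the measurement-noise variance (assumed values).
    """
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
    Q = q * np.eye(4)
    R = r * np.eye(2)

    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update with the new measurement
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new
```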

  2. A sparse representation-based approach for copy-move image forgery detection in smooth regions

    NASA Astrophysics Data System (ADS)

    Abdessamad, Jalila; ElAdel, Asma; Zaied, Mourad

    2017-03-01

    Copy-move image forgery is the act of cloning a restricted region in the image and pasting it once or multiple times within that same image. This procedure intends to cover a certain feature, probably a person or an object, in the processed image or emphasize it through duplication. Consequences of this malicious operation can be unexpectedly harmful. Hence, the present paper proposes a new approach that automatically detects Copy-move Forgery (CMF). In particular, this work broaches a widely common open issue in CMF research literature that is detecting CMF within smooth areas. Indeed, the proposed approach represents the image blocks as a sparse linear combination of pre-learned bases (a mixture of texture and color-wise small patches) which allows a robust description of smooth patches. The reported experimental results demonstrate the effectiveness of the proposed approach in identifying the forged regions in CM attacks.

  3. Going, Going, Gone: Localizing Abrupt Offsets of Moving Objects

    ERIC Educational Resources Information Center

    Maus, Gerrit W.; Nijhawan, Romi

    2009-01-01

    When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the…

  4. Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences

    PubMed Central

    Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong

    2016-01-01

    Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven to be very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes in pedestrian posture and scale, moving backgrounds, mutual occlusion, and the presence of pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values for the object and the background by extracting a priori knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observations and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with particle states to discriminate object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and yields better tracking results. PMID:27847514
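
    To make the particle-filter machinery behind this kind of tracker concrete, the sketch below shows the generic predict/weight/resample loop with a placeholder appearance likelihood. The likelihood callable, motion noise, and resampling threshold are illustrative assumptions, not the authors' color/texture model.

```python
import numpy as np

def particle_filter_step(particles, weights, appearance_likelihood, motion_std=5.0):
    """One iteration of a bootstrap particle filter for 2D position tracking.

    particles: (N, 2) array of candidate object positions.
    appearance_likelihood: callable mapping an (N, 2) array of positions to
        per-particle likelihoods (e.g. from color/texture similarity) -- a
        placeholder for whatever observation model is actually used.
    """
    n = len(particles)

    # Predict: random-walk motion model
    particles = particles + np.random.normal(0.0, motion_std, size=particles.shape)

    # Weight: multiply prior weights by the observation likelihood
    weights = weights * appearance_likelihood(particles)
    weights = weights / (weights.sum() + 1e-12)

    # Resample (systematic resampling) when the effective sample size is low
    if 1.0 / np.sum(weights ** 2) < n / 2:
        positions = (np.arange(n) + np.random.uniform()) / n
        idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
        particles = particles[idx]
        weights = np.full(n, 1.0 / n)

    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```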

  5. Mining moving object trajectories in location-based services for spatio-temporal database update

    NASA Astrophysics Data System (ADS)

    Guo, Danhuai; Cui, Weihong

    2008-10-01

    Advances in wireless transmission and mobile technology applied to LBS (Location-based Services) flood us with large amounts of moving-object data. Vast amounts of data gathered from the position sensors of mobile phones, PDAs, or vehicles hide interesting and valuable knowledge and describe the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geo-features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited. Urban expansion and frequent changes to town plans produce a large amount of outdated or imprecise data in the spatial databases of LBS, and these cannot be updated in a timely and efficient manner by manual processing. In this paper we introduce a data mining approach to movement pattern extraction for moving objects, build a model to describe the relationship between the movement patterns of LBS mobile objects and their environment, and put forward a spatio-temporal database update strategy for LBS databases based on spatio-temporal mining of trajectories. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving-object data mining, and the experimental application of spatio-temporal database updating by mining moving-object trajectories.

  6. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is developing a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.

  7. A-Track: Detecting Moving Objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.

  8. JPRS report: Science and technology. Central Eurasia

    NASA Astrophysics Data System (ADS)

    1994-05-01

    Translated articles cover the following topics: optimal systems to detect and classify moving objects; multiple identification of optical readings in multisensor information and measurement system; method of first integrals in synthesis of optimal control; study of the development of turbulence in the region of a break above a triangular wing; electroerosion machining in aviation engine construction; and cumulation of a flat shock wave in a tube by a thin parietal gas layer of lower density.

  9. Moving object detection using dynamic motion modelling from UAV aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Moving object detection from UAV aerial images based on motion analysis is still an unsolved issue because proper motion estimation has not been taken into account. Existing approaches to moving object detection from UAV aerial images do not use motion-based pixel intensity measurement to detect moving objects robustly. Moreover, current research on moving object detection from UAV aerial images mostly depends on either frame differencing or segmentation alone. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity to segment only a specific area for the moving object rather than searching the whole frame with SUED. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
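
    A minimal sketch of the frame-difference-plus-dilation idea underlying SUED-style segmentation (after camera motion has been compensated): difference two aligned frames, threshold, then dilate so fragmented edges merge into object blobs. The threshold, kernel size, and area cutoff are assumptions for illustration, not the authors' parameters.

```python
import cv2

def frame_difference_segmentation(frame_a, frame_b, diff_thresh=25, kernel_size=5):
    """Segment candidate moving regions from two motion-compensated grayscale frames."""
    diff = cv2.absdiff(frame_a, frame_b)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Dilation merges fragmented edges of a moving object into one blob
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.dilate(mask, kernel, iterations=2)

    # Connected components give one bounding box per candidate moving object
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```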

  10. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury

    PubMed Central

    2017-01-01

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can visually keep steady and accurate attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in the normal way demonstrated on their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our data analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older. PMID:28630809

  11. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury.

    PubMed

    Kelly, Michael

    2017-05-15

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can visually keep steady and accurate attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in the normal way demonstrated on their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our data analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older.

  12. Direction information in multiple object tracking is limited by a graded resource.

    PubMed

    Horowitz, Todd S; Cohen, Michael A

    2010-10-01

    Is multiple object tracking (MOT) limited by a fixed set of structures (slots), a limited but divisible resource, or both? Here, we answer this question by measuring the precision of the direction representation for tracked targets. The signature of a limited resource is a decrease in precision as the square root of the tracking load. The signature of fixed slots is a fixed precision. Hybrid models predict a rapid decrease to asymptotic precision. In two experiments, observers tracked moving disks and reported target motion direction by adjusting a probe arrow. We derived the precision of representation of correctly tracked targets using a mixture distribution analysis. Precision declined with target load according to the square-root law up to six targets. This finding is inconsistent with both pure and hybrid slot models. Instead, directional information in MOT appears to be limited by a continuously divisible resource.
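
    The competing models in this abstract make simple quantitative predictions for how the standard deviation of direction reports should scale with tracking load n: a pure resource model predicts growth with the square root of n, a pure slot model predicts constant precision, and hybrids predict a rapid fall to an asymptote. A small sketch of those predictions, with arbitrary example parameters rather than fitted values:

```python
import numpy as np

def predicted_sd(n_targets, sigma1=10.0, slots=4):
    """Predicted direction-report SD (degrees) under three accounts of MOT limits.

    sigma1 (SD with one target) and the slot capacity are illustrative values.
    """
    n = np.asarray(n_targets, dtype=float)
    resource = sigma1 * np.sqrt(n)                     # continuously divisible resource
    slot = np.full_like(n, sigma1)                     # fixed precision per slot
    hybrid = sigma1 * np.sqrt(np.minimum(n, slots))    # resource capped by slot count
    return resource, slot, hybrid

print(predicted_sd([1, 2, 4, 6]))
```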

  13. A system for learning statistical motion patterns.

    PubMed

    Hu, Weiming; Xiao, Xuejuan; Fu, Zhouyu; Xie, Dan; Tan, Tieniu; Maybank, Steve

    2006-09-01

    Analysis of motion patterns is an effective approach for anomaly detection and behavior prediction. Current approaches for the analysis of motion patterns depend on known scenes, where objects move in predefined ways. It is highly desirable to automatically construct object motion patterns which reflect the knowledge of the scene. In this paper, we present a system for automatically learning motion patterns for anomaly detection and behavior prediction based on a proposed algorithm for robustly tracking multiple objects. In the tracking algorithm, foreground pixels are clustered using a fast accurate fuzzy K-means algorithm. Growing and prediction of the cluster centroids of foreground pixels ensure that each cluster centroid is associated with a moving object in the scene. In the algorithm for learning motion patterns, trajectories are clustered hierarchically using spatial and temporal information and then each motion pattern is represented with a chain of Gaussian distributions. Based on the learned statistical motion patterns, statistical methods are used to detect anomalies and predict behaviors. Our system is tested using image sequences acquired, respectively, from a crowded real traffic scene and a model traffic scene. Experimental results show the robustness of the tracking algorithm, the efficiency of the algorithm for learning motion patterns, and the encouraging performance of algorithms for anomaly detection and behavior prediction.

  14. Sustained attention to objects' motion sharpens position representations: Attention to changing position and attention to motion are distinct.

    PubMed

    Howard, Christina J; Rollings, Victoria; Hardie, Amy

    2017-06-01

    In tasks where people monitor moving objects, such as the multiple object tracking task (MOT), observers attempt to keep track of targets as they move amongst distracters. The literature is mixed as to whether observers make use of motion information to facilitate performance. We sought to address this by two means: first by superimposing arrows on objects which varied in their informativeness about motion direction and second by asking observers to attend to motion direction. Using a position monitoring task, we calculated mean error magnitudes as a measure of the precision with which target positions are represented. We also calculated perceptual lags versus extrapolated reports, which are the times at which positions of targets best match position reports. We find that the presence of motion information in the form of superimposed arrows made no difference to either position report precision or perceptual lag. However, when we explicitly instructed observers to attend to motion, we saw facilitatory effects on position reports and in some cases reports that best matched extrapolated rather than lagging positions for small set sizes. The results indicate that attention to changing positions does not automatically recruit attention to motion, showing a dissociation between sustained attention to changing positions and attention to motion. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
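
    The "optimal cue integration strategy" referred to here is the standard reliability-weighted combination rule. In hedged textbook form, with the single-cue heading thresholds written as standard deviations, the predicted combined estimate and threshold are:

```latex
\hat{h}_{comb} = w_{vis}\,\hat{h}_{vis} + w_{ves}\,\hat{h}_{ves},
\qquad
w_{vis} = \frac{1/\sigma_{vis}^{2}}{1/\sigma_{vis}^{2} + 1/\sigma_{ves}^{2}},
\quad
w_{ves} = 1 - w_{vis},
\qquad
\sigma_{comb}^{2} = \frac{\sigma_{vis}^{2}\,\sigma_{ves}^{2}}{\sigma_{vis}^{2} + \sigma_{ves}^{2}}
```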

  16. Non-iterative Aberration Correction of a Multiple Transmitter System

    DTIC Science & Technology

    2011-09-01

    corresponds to the object being moved closer, the third and fourth rows are again at best focus with astigmatism added by rotating a pair of cylindrical...rotation within a matched pair of cylindrical lenses. While the data collect for Fig. 7 was designed to isolate defocus (a) and astigmatism (b) there...was always some combination of both present, and the algorithm is always solving for both defocus and astigmatism . This is evident from the best

  17. Visual Prediction Error Spreads Across Object Features in Human Visual Cortex

    PubMed Central

    Summerfield, Christopher; Egner, Tobias

    2016-01-01

    Visual cognition is thought to rely heavily on contextual expectations. Accordingly, previous studies have revealed distinct neural signatures for expected versus unexpected stimuli in visual cortex. However, it is presently unknown how the brain combines multiple concurrent stimulus expectations such as those we have for different features of a familiar object. To understand how an unexpected object feature affects the simultaneous processing of other expected feature(s), we combined human fMRI with a task that independently manipulated expectations for color and motion features of moving-dot stimuli. Behavioral data and neural signals from visual cortex were then interrogated to adjudicate between three possible ways in which prediction error (surprise) in the processing of one feature might affect the concurrent processing of another, expected feature: (1) feature processing may be independent; (2) surprise might “spread” from the unexpected to the expected feature, rendering the entire object unexpected; or (3) pairing a surprising feature with an expected feature might promote the inference that the two features are not in fact part of the same object. To formalize these rival hypotheses, we implemented them in a simple computational model of multifeature expectations. Across a range of analyses, behavior and visual neural signals consistently supported a model that assumes a mixing of prediction error signals across features: surprise in one object feature spreads to its other feature(s), thus rendering the entire object unexpected. These results reveal neurocomputational principles of multifeature expectations and indicate that objects are the unit of selection for predictive vision. SIGNIFICANCE STATEMENT We address a key question in predictive visual cognition: how does the brain combine multiple concurrent expectations for different features of a single object such as its color and motion trajectory? By combining a behavioral protocol that independently varies expectation of (and attention to) multiple object features with computational modeling and fMRI, we demonstrate that behavior and fMRI activity patterns in visual cortex are best accounted for by a model in which prediction error in one object feature spreads to other object features. These results demonstrate how predictive vision forms object-level expectations out of multiple independent features. PMID:27810936

  18. Objects in Motion

    ERIC Educational Resources Information Center

    Damonte, Kathleen

    2004-01-01

    One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…

  19. If you watch it move, you'll recognize it in 3D: Transfer of depth cues between encoding and retrieval.

    PubMed

    Papenmeier, Frank; Schwan, Stephan

    2016-02-01

    Viewing objects with stereoscopic displays provides additional depth cues through binocular disparity supporting object recognition. So far, it was unknown whether this results from the representation of specific stereoscopic information in memory or a more general representation of an object's depth structure. Therefore, we investigated whether continuous object rotation acting as depth cue during encoding results in a memory representation that can subsequently be accessed by stereoscopic information during retrieval. In Experiment 1, we found such transfer effects from continuous object rotation during encoding to stereoscopic presentations during retrieval. In Experiments 2a and 2b, we found that the continuity of object rotation is important because only continuous rotation and/or stereoscopic depth but not multiple static snapshots presented without stereoscopic information caused the extraction of an object's depth structure into memory. We conclude that an object's depth structure and not specific depth cues are represented in memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Mapping Land and Water Surface Topography with instantaneous Structure from Motion

    NASA Astrophysics Data System (ADS)

    Dietrich, J.; Fonstad, M. A.

    2012-12-01

    Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slow moving objects. However, objects in motion are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows for the construction of 3D surface models of moving objects and surfaces, an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks, and RTK-GPS control it is possible to create a digital surface model of land and water surface elevations across an entire channel and water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include anything where researchers could benefit from temporal surface modeling, such as mass movements, lava flows, and dam removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows for manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.

  1. Parallel architecture for rapid image generation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nerheim, R.J.

    1987-01-01

    A multiprocessor architecture inspired by the Disney multiplane camera is proposed. For many applications, this approach produces a natural mapping of processors to objects in a scene. Such a mapping promotes parallelism and reduces the hidden-surface work with minimal interprocessor communication and low-overhead cost. Existing graphics architectures store the final picture as a monolithic entity. The architecture here stores each object's image separately. It assembles the final composite picture from component images only when the video display needs to be refreshed. This organization simplifies the work required to animate moving objects that occlude other objects. In addition, the architecture has multiple processors that generate the component images in parallel. This further shortens the time needed to create a composite picture. In addition to generating images for animation, the architecture has the ability to decompose images.
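
    The compositing step described here, assembling the final picture from separately stored per-object images only at refresh time, amounts to back-to-front layer blending. The sketch below shows that step for a stack of RGBA layers already sorted by depth; the layer format and the assumption that layers are pre-sorted are illustrative, not details from the paper.

```python
import numpy as np

def composite_layers(layers):
    """Back-to-front alpha compositing of per-object RGBA layers (floats in [0, 1]).

    layers: iterable of (H, W, 4) arrays ordered from farthest to nearest,
    e.g. one image per object in the spirit of the multiplane-camera architecture.
    """
    h, w, _ = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer in layers:                 # nearer layers progressively occlude farther ones
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out
```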

  2. Analysis to feature-based video stabilization/registration techniques within application of traffic data collection

    NASA Astrophysics Data System (ADS)

    Sadat, Mojtaba T.; Viti, Francesco

    2015-02-01

    Machine vision is rapidly gaining popularity in the field of Intelligent Transportation Systems. In particular, advantages are foreseen by the exploitation of Aerial Vehicles (AV) in delivering a superior view on traffic phenomena. However, vibration on AVs makes it difficult to extract moving objects on the ground. To partly overcome this issue, image stabilization/registration procedures are adopted to correct and stitch multiple frames taken of the same scene but from different positions, angles, or sensors. In this study, we examine the impact of multiple feature-based techniques for stabilization, and we show that SURF detector outperforms the others in terms of time efficiency and output similarity.
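
    A minimal sketch of feature-based frame registration of the kind compared in this paper: detect keypoints, match them between a reference frame and the current frame, and warp the current frame back onto the reference. ORB is used here only because SURF sits in OpenCV's non-free contrib module; the detector choice, match count, and RANSAC threshold are assumptions, not the study's settings.

```python
import cv2
import numpy as np

def stabilize_frame(ref_gray, cur_gray):
    """Warp cur_gray onto ref_gray using matched keypoints (feature-based registration)."""
    detector = cv2.ORB_create(1000)
    kp1, des1 = detector.detectAndCompute(ref_gray, None)
    kp2, des2 = detector.detectAndCompute(cur_gray, None)

    # Brute-force Hamming matching for binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

    h, w = ref_gray.shape
    return cv2.warpPerspective(cur_gray, H, (w, h))
```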

  3. An automated data exploitation system for airborne sensors

    NASA Astrophysics Data System (ADS)

    Chen, Hai-Wen; McGurr, Mike

    2014-06-01

    Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighter from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicles, dismounts, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutters. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly trace down potential threats and crimes.

  4. The Pop out of Scene-Relative Object Movement against Retinal Motion Due to Self-Movement

    ERIC Educational Resources Information Center

    Rushton, Simon K.; Bradshaw, Mark F.; Warren, Paul A.

    2007-01-01

    An object that moves is spotted almost effortlessly; it "pops out." When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion.…

  5. Mach Cones in a Coulomb Lattice and a Dusty Plasma

    NASA Astrophysics Data System (ADS)

    Samsonov, D.; Goree, J.; Ma, Z. W.; Bhattacharjee, A.; Thomas, H. M.; Morfill, G. E.

    1999-11-01

    Mach cones, or V-shaped disturbances created by supersonic objects, have been detected in a two-dimensional Coulomb crystal. Electrically charged microspheres levitated in a glow-discharge plasma formed a dusty plasma, with particles arranged in a hexagonal lattice in a horizontal plane. Beneath this lattice plane, a sphere moved faster than the lattice sound speed. Mach cones were double, first compressive then rarefactive, due to the strongly coupled crystalline state. Molecular dynamics simulations using a Yukawa potential also show multiple Mach cones.

  6. Multiple attenuation to reflection seismic data using Radon filter and Wave Equation Multiple Rejection (WEMR) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erlangga, Mokhammad Puput

    Separation between signal and noise, incoherent or coherent, is important in seismic data processing. Even after processing, coherent noise still mixes with the primary signal. Multiple reflections are a kind of coherent noise. In this research, we processed seismic data to attenuate multiple reflections in both synthetic and real seismic data from Mentawai. There are several methods to attenuate multiple reflections; one of them is the Radon filter method, which discriminates between primary and multiple reflections in the τ-p domain based on the move-out difference between them. However, in cases where the move-out difference is too small, the Radon filter method is not enough to attenuate the multiple reflections. The Radon filter also produces artifacts on the gathers. Besides the Radon filter method, we also use the Wave Equation Multiple Rejection (WEMR) method to attenuate the long-period multiple reflections. The WEMR method attenuates long-period multiple reflections based on wave equation inversion. From the inversion of the wave equation and the magnitude of the seismic wave amplitude observed on the free surface, we obtain the water-bottom reflectivity, which is used to eliminate the multiple reflections. The WEMR method does not depend on the move-out difference to attenuate long-period multiple reflections. Therefore, the WEMR method can be applied to seismic data with a small move-out difference, such as the Mentawai seismic data. The small move-out difference in the Mentawai seismic data is caused by the limited far offset, which is only 705 meters. We compared the multiple-free stacked real data after processing with the Radon filter and with the WEMR process. The conclusion is that the WEMR method attenuates long-period multiple reflections better than the Radon filter method on the real (Mentawai) seismic data.

  7. Solution Mask Liquid Lithography (SMaLL) for One-Step, Multimaterial 3D Printing.

    PubMed

    Dolinski, Neil D; Page, Zachariah A; Callaway, E Benjamin; Eisenreich, Fabian; Garcia, Ronnie V; Chavez, Roberto; Bothman, David P; Hecht, Stefan; Zok, Frank W; Hawker, Craig J

    2018-06-21

    A novel methodology for printing 3D objects with spatially resolved mechanical and chemical properties is reported. Photochromic molecules are used to control polymerization through coherent bleaching fronts, providing large depths of cure and rapid build rates without the need for moving parts. The coupling of these photoswitches with resin mixtures containing orthogonal photo-crosslinking systems allows simultaneous and selective curing of multiple networks, providing access to 3D objects with chemically and mechanically distinct domains. The power of this approach is showcased through the one-step fabrication of bioinspired soft joints and mechanically reinforced "brick-and-mortar" structures. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Certainty grids for mobile robots

    NASA Technical Reports Server (NTRS)

    Moravec, H. P.

    1987-01-01

    A numerical representation of uncertain and incomplete sensor knowledge called Certainty Grids has been used successfully in several mobile robot control programs, and has proven itself to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. Researchers propose to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way from various sources including sonar, stereo vision, proximity and contact sensors. The approach can correctly model the fuzziness of each reading, while at the same time combining multiple measurements to produce sharper map features, and it can deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the same dimension and used to detect and track moving objects.
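
    The incremental, uniform map update described here is what later became the standard occupancy-grid update. A common way to sketch it is in log-odds form, where each sensor reading simply adds evidence to the cells it covers; the sensor-model values below are illustrative assumptions, not Moravec's original formulation.

```python
import numpy as np

class CertaintyGrid:
    """Minimal log-odds occupancy grid: cells accumulate evidence from noisy sensors."""

    def __init__(self, shape=(100, 100)):
        self.log_odds = np.zeros(shape)   # 0 corresponds to "unknown" (probability 0.5)

    def update_cell(self, ij, occupied, l_occ=0.9, l_free=-0.7):
        # Independent-evidence assumption: add the sensor's log-odds contribution
        self.log_odds[ij] += l_occ if occupied else l_free

    def probability(self):
        # Convert accumulated log-odds back to occupancy probabilities
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```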

  9. Delineating the Neural Signatures of Tracking Spatial Position and Working Memory during Attentive Tracking

    PubMed Central

    Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.

    2015-01-01

    In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response to the two tasks was similarly sensitive to the number of items attended in both tasks but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent for periods of time during tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175

  10. Different motion cues are used to estimate time-to-arrival for frontoparallel and looming trajectories

    PubMed Central

    Calabro, Finnegan J.; Beardsley, Scott A.; Vaina, Lucia M.

    2012-01-01

    Estimation of time-to-arrival for moving objects is critical to obstacle interception and avoidance, as well as to timing actions such as reaching and grasping moving objects. The source of motion information that conveys arrival time varies with the trajectory of the object raising the question of whether multiple context-dependent mechanisms are involved in this computation. To address this question we conducted a series of psychophysical studies to measure observers’ performance on time-to-arrival estimation when object trajectory was specified by angular motion (“gap closure” trajectories in the frontoparallel plane), looming (colliding trajectories, TTC) or both (passage courses, TTP). We measured performance of time-to-arrival judgments in the presence of irrelevant motion, in which a perpendicular motion vector was added to the object trajectory. Data were compared to models of expected performance based on the use of different components of optical information. Our results demonstrate that for gap closure, performance depended only on the angular motion, whereas for TTC and TTP, both angular and looming motion affected performance. This dissociation of inputs suggests that gap closures are mediated by a separate mechanism than that used for the detection of time-to-collision and time-to-passage. We show that existing models of TTC and TTP estimation make systematic errors in predicting subject performance, and suggest that a model which weights motion cues by their relative time-to-arrival provides a better account of performance. PMID:22056519
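
    The two optical quantities contrasted in this abstract can be written compactly: for a directly approaching (looming) object of angular size theta, the classic tau estimate of time-to-collision is the size divided by its rate of expansion, whereas for a frontoparallel gap-closure trajectory the arrival time follows analogously from the angular gap phi and its closure rate. A hedged textbook statement of both:

```latex
TTC \approx \tau = \frac{\theta}{\dot{\theta}},
\qquad
TTA_{gap} \approx \frac{\phi}{\dot{\phi}}
```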

  11. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  12. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.

  13. Multiple hearth furnace for reducing iron oxide

    DOEpatents

    Brandon, Mark M [Charlotte, NC; True, Bradford G [Charlotte, NC

    2012-03-13

    A multiple moving hearth furnace (10) having a furnace housing (11) with at least two moving hearths (20) positioned laterally within the furnace housing, the hearths moving in opposite directions and each moving hearth (20) capable of being charged with at least one layer of iron oxide and carbon bearing material at one end, and being capable of discharging reduced material at the other end. A heat insulating partition (92) is positioned between adjacent moving hearths of at least portions of the conversion zones (13), and is capable of communicating gases between the atmospheres of the conversion zones of adjacent moving hearths. A drying/preheat zone (12), a conversion zone (13), and optionally a cooling zone (15) are sequentially positioned along each moving hearth (30) in the furnace housing (11).

  14. CT brush and CancerZap!: two video games for computed tomography dose minimization.

    PubMed

    Alvare, Graham; Gordon, Richard

    2015-05-12

    X-ray dose from computed tomography (CT) scanners has become a significant public health concern. All CT scanners spray x-ray photons across a patient, including those using compressive sensing algorithms. New technologies make it possible to aim x-ray beams where they are most needed to form a diagnostic or screening image. We have designed a computer game, CT Brush, that takes advantage of this new flexibility. It uses a standard MART algorithm (Multiplicative Algebraic Reconstruction Technique), but with a user defined dynamically selected subset of the rays. The image appears as the player moves the CT brush over an initially blank scene, with dose accumulating with every "mouse down" move. The goal is to find the "tumor" with as few moves (least dose) as possible. We have successfully implemented CT Brush in Java and made it available publicly, requesting crowdsourced feedback on improving the open source code. With this experience, we also outline a "shoot 'em up game" CancerZap! for photon limited CT. We anticipate that human computing games like these, analyzed by methods similar to those used to understand eye tracking, will lead to new object dependent CT algorithms that will require significantly less dose than object independent nonlinear and compressive sensing algorithms that depend on sprayed photons. Preliminary results suggest substantial dose reduction is achievable.
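    The record describes a standard MART reconstruction restricted to a user-selected subset of rays (the rays the "brush" has fired). A minimal sketch of that idea follows; the function name, relaxation parameter, and normalization are my assumptions, not the CT Brush source code.

```python
import numpy as np

def mart_subset(A, b, ray_ids, n_iter=50, relax=1.0, x0=None):
    """Multiplicative ART restricted to a chosen subset of rays.

    A       : (n_rays, n_pixels) system matrix (ray/pixel intersection weights)
    b       : (n_rays,) measured projections (non-negative)
    ray_ids : indices of the rays the "brush" has actually fired (the dose spent)
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_iter):
        for i in ray_ids:
            est = A[i] @ x
            if est <= 0:
                continue
            # scale the pixels on ray i toward agreement with the measurement b[i]
            x *= (b[i] / est) ** (relax * A[i] / max(A[i].max(), 1e-12))
    return x
```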

  15. Effects of a Moving Distractor Object on Time-to-Contact Judgments

    ERIC Educational Resources Information Center

    Oberfeld, Daniel; Hecht, Heiko

    2008-01-01

    The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a…

  16. Remote sensing using MIMO systems

    DOEpatents

    Bikhazi, Nicolas; Young, William F; Nguyen, Hung D

    2015-04-28

    A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.

  17. Treatment planning with intensity modulated particle therapy for multiple targets in stage IV non-small cell lung cancer

    NASA Astrophysics Data System (ADS)

    Anderle, Kristjan; Stroom, Joep; Vieira, Sandra; Pimentel, Nuno; Greco, Carlo; Durante, Marco; Graeff, Christian

    2018-01-01

    Intensity modulated particle therapy (IMPT) can produce highly conformal plans, but is limited in advanced lung cancer patients with multiple lesions due to motion and planning complexity. A 4D IMPT optimization including all motion states was expanded to include multiple targets, where each target (isocenter) is designated to specific field(s). Furthermore, to achieve stereotactic treatment planning objectives, target and OAR weights plus objective doses were automatically iteratively adapted. Finally, 4D doses were calculated for different motion scenarios. The results from our algorithm were compared to clinical stereotactic body radiation treatment (SBRT) plans. The study included eight patients with 24 lesions in total. Intended dose regimen for SBRT was 24 Gy in one fraction, but lower fractionated doses had to be delivered in three cases due to OAR constraints or failed plan quality assurance. The resulting IMPT treatment plans had no significant difference in target coverage compared to SBRT treatment plans. Average maximum point dose and dose to specific volume in OARs were on average 65% and 22% smaller with IMPT. IMPT could also deliver 24 Gy in one fraction in a patient where SBRT was limited due to the OAR vicinity. The developed algorithm shows the potential of IMPT in treatment of multiple moving targets in a complex geometry.

  18. Reallocating attention during multiple object tracking.

    PubMed

    Ericson, Justin M; Christensen, James C

    2012-07-01

    Wolfe, Place, and Horowitz (Psychonomic Bulletin & Review 14:344-349, 2007) found that participants were relatively unaffected by selecting and deselecting targets while performing a multiple object tracking task, such that maintaining tracking was possible for longer durations than the few seconds typically studied. Though this result was generally consistent with other findings on tracking duration (Franconeri, Jonathon, & Scimeca Psychological Science 21:920-925, 2010), it was inconsistent with research involving cuing paradigms, specifically precues (Pylyshyn & Annan Spatial Vision 19:485-504, 2006). In the present research, we broke down the addition and removal of targets into separate conditions and incorporated a simple performance model to evaluate the costs associated with the selection and deselection of moving targets. Across three experiments, we demonstrated evidence against a cost being associated with any shift in attention, but rather that varying the type of cue used for target deselection produces no additional cost to performance and that hysteresis effects are not induced by a reduction in tracking load.

  19. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  20. Multiple capture locations for 3D ultrasound-guided robotic retrieval of moving bodies from a beating heart

    NASA Astrophysics Data System (ADS)

    Thienphrapa, Paul; Ramachandran, Bharat; Elhawary, Haytham; Taylor, Russell H.; Popovic, Aleksandra

    2012-02-01

    Free moving bodies in the heart pose a serious health risk as they may be released in the arteries causing blood flow disruption. These bodies may be the result of various medical conditions and trauma. The conventional approach to removing these objects involves open surgery with sternotomy, the use of cardiopulmonary bypass, and a wide resection of the heart muscle. We advocate a minimally invasive surgical approach using a flexible robotic end effector guided by 3D transesophageal echocardiography. In a phantom study, we track a moving body in a beating heart using a modified normalized cross-correlation method, with mean RMS errors of 2.3 mm. We previously found the foreign body motion to be fast and abrupt, rendering infeasible a retrieval method based on direct tracking. We proposed a strategy based on guiding a robot to the most spatially probable location of the fragment and securing it upon its reentry to said location. To improve efficacy in the context of a robotic retrieval system, we extend this approach by exploring multiple candidate capture locations. Salient locations are identified based on spatial probability, dwell time, and visit frequency; secondary locations are also examined. Aggregate results indicate that the location of highest spatial probability (50% occupancy) is distinct from the longest-dwelled location (0.84 seconds). Such metrics are vital in informing the design of a retrieval system and capture strategies, and they can be computed intraoperatively to select the best capture location based on constraints such as workspace, time, and device manipulability. Given the complex nature of fragment motion, the ability to analyze multiple capture locations is a desirable capability in an interventional system.
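    The capture-location metrics named here (spatial probability, dwell time, visit frequency) can all be derived from the tracked fragment trajectory. The sketch below shows one plausible way to compute them on a discretized workspace; the cell size and function names are assumptions, not the authors' implementation.

```python
import numpy as np

def capture_metrics(track_xyz, dt, cell=2.0):
    """Rank candidate capture cells from a tracked fragment trajectory.

    track_xyz : (T, 3) fragment positions in mm
    dt        : sampling interval in seconds
    cell      : edge length (mm) of the cells used to discretize the workspace
    """
    cells = np.floor(track_xyz / cell).astype(int)
    keys, inverse = np.unique(cells, axis=0, return_inverse=True)
    occupancy = np.bincount(inverse) / len(inverse)          # spatial probability
    # dwell time: longest run of consecutive samples spent in each cell
    dwell = np.zeros(len(keys))
    run_start = 0
    for t in range(1, len(inverse) + 1):
        if t == len(inverse) or inverse[t] != inverse[run_start]:
            c = inverse[run_start]
            dwell[c] = max(dwell[c], (t - run_start) * dt)
            run_start = t
    # visit frequency: number of distinct entries into each cell
    visits = np.bincount(inverse[np.r_[True, np.diff(inverse) != 0]])
    return keys, occupancy, dwell, visits
```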

  1. Control of thermal therapies with moving power deposition field.

    PubMed

    Arora, Dhiraj; Minor, Mark A; Skliar, Mikhail; Roemer, Robert B

    2006-03-07

    A thermal therapy feedback control approach to control thermal dose using a moving power deposition field is developed and evaluated using simulations. A normal tissue safety objective is incorporated in the controller design by imposing constraints on temperature elevations at selected normal tissue locations. The proposed control technique consists of two stages. The first stage uses a model-based sliding mode controller that dynamically generates an 'ideal' power deposition profile which is generally unrealizable with available heating modalities. Subsequently, in order to approximately realize this spatially distributed idealized power deposition, a constrained quadratic optimizer is implemented to compute intensities and dwell times for a set of pre-selected power deposition fields created by a scanned focused transducer. The dwell times for various power deposition profiles are dynamically generated online as opposed to the commonly employed a priori-decided heating strategies. Dynamic intensity and trajectory generation safeguards the treatment outcome against modelling uncertainties and unknown disturbances. The controller is designed to enforce simultaneous activation of multiple normal tissue temperature constraints by rapidly switching between various power deposition profiles. The hypothesis behind the controller design is that the simultaneous activation of multiple constraints substantially reduces treatment time without compromising normal tissue safety. The controller performance and robustness with respect to parameter uncertainties is evaluated using simulations. The results demonstrate that the proposed controller can successfully deliver the desired thermal dose to the target while maintaining the temperatures at the user-specified normal tissue locations at or below the maximum allowable values. Although demonstrated for the case of a scanned focused ultrasound transducer, the developed approach can be extended to other heating modalities with moving deposition fields, such as external and interstitial ultrasound phased arrays, multiple radiofrequency needle applicators and microwave antennae.

  2. Perceptual impressions of causality are affected by common fate.

    PubMed

    White, Peter A

    2017-03-24

    Many studies of perceptual impressions of causality have used a stimulus in which a moving object (the launcher) contacts a stationary object (the target) and the latter then moves off. Such stimuli give rise to an impression that the launcher makes the target move. In the present experiments, instead of a single target object, an array of four vertically aligned objects was used. The launcher contacted none of them, but stopped at a point between the two central objects. The four objects then moved with similar motion properties, exhibiting the Gestalt property of common fate. Strong impressions of causality were reported for this stimulus. It is argued that the array of four objects was perceived, by the likelihood principle, as a single object with some parts unseen, that the launcher was perceived as contacting one of the unseen parts of this object, and that the causal impression resulted from that. Supporting that argument, stimuli in which kinematic features were manipulated so as to weaken or eliminate common fate yielded weaker impressions of causality.

  3. Moving vehicles segmentation based on Gaussian motion model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.

    2005-07-01

    Segmentation of moving objects is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and can therefore adapt to them sensitively. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
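    The core of the second stage is a single Gaussian over motion vectors, updated online and used to gate vehicle pixels from clutter. A compact sketch under those assumptions follows; the learning rate and Mahalanobis gate are placeholders, not the paper's parameters.

```python
import numpy as np

class OnlineGaussianMotionModel:
    """Single Gaussian over 2-D motion vectors with a running (EM-style) update."""

    def __init__(self, lr=0.05, gate=3.0):
        self.mean = np.zeros(2)
        self.cov = np.eye(2)
        self.lr = lr          # learning rate standing in for the on-line EM step
        self.gate = gate      # Mahalanobis gate separating vehicles from clutter

    def is_vehicle(self, v):
        d = v - self.mean
        m2 = d @ np.linalg.inv(self.cov) @ d      # squared Mahalanobis distance
        return m2 < self.gate ** 2

    def update(self, v):
        d = v - self.mean
        self.mean += self.lr * d
        self.cov = (1 - self.lr) * self.cov + self.lr * np.outer(d, d)

# Usage: feed per-pixel flow vectors; vectors inside the gate are labeled "vehicle".
model = OnlineGaussianMotionModel()
for v in np.random.normal([4.0, 0.0], 0.5, size=(200, 2)):   # synthetic highway flow
    model.update(v)
print(model.is_vehicle(np.array([4.2, 0.1])), model.is_vehicle(np.array([-0.5, 2.0])))
```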

  4. Moving Object Localization Based on UHF RFID Phase and Laser Clustering

    PubMed Central

    Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq

    2018-01-01

    RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited since RFID provides neither distance nor bearing information about the tag. This paper proposes a new approach for the localization of a moving object using a particle filter that incorporates RFID phase and laser-based clustering from 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into clusters and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities and select the K clusters with the best similarity scores. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of the moving object. The feasibility of this approach is validated on a Scitos G5 service robot, and the results show that we achieved a localization accuracy of up to 0.25 m. PMID:29522458
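    Two small pieces of this pipeline are easy to sketch: the phase-difference velocity and a similarity score used to weight clusters (and hence particles). The carrier frequency, the Gaussian similarity, and all names below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

C = 3e8
FREQ = 920.625e6                 # a typical UHF RFID channel; assumed here
WAVELEN = C / FREQ

def phase_velocity(phase_prev, phase_curr, dt):
    """Radial velocity from the RFID phase difference (round trip => factor 4*pi)."""
    dphi = np.angle(np.exp(1j * (phase_curr - phase_prev)))   # wrap to (-pi, pi]
    return dphi * WAVELEN / (4 * np.pi * dt)

def cluster_similarity(v_rfid, v_clusters, sigma=0.2):
    """Gaussian similarity between the tag velocity and each laser-cluster velocity."""
    return np.exp(-0.5 * ((v_clusters - v_rfid) / sigma) ** 2)

# Particle weighting step: particles near the K most similar clusters gain weight.
v = phase_velocity(1.2, 0.9, dt=0.1)
sims = cluster_similarity(v, np.array([0.05, -0.25, 0.8]))
print(v, sims / sims.sum())
```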

  5. Integration across Time Determines Path Deviation Discrimination for Moving Objects

    PubMed Central

    Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.

    2008-01-01

    Background Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects–a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653

  6. Come together, right now: dynamic overwriting of an object's history through common fate.

    PubMed

    Luria, Roy; Vogel, Edward K

    2014-08-01

    The objects around us constantly move and interact, and the perceptual system needs to monitor on-line these interactions and to update the object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects "met" and remained stationary on the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object initial representation plays an important role that can override even powerful grouping cues.

  7. Motion-induced error reduction by combining Fourier transform profilometry with phase-shifting profilometry.

    PubMed

    Li, Beiwen; Liu, Ziping; Zhang, Song

    2016-10-03

    We propose a hybrid computational framework to reduce motion-induced measurement error by combining the Fourier transform profilometry (FTP) and phase-shifting profilometry (PSP). The proposed method is composed of three major steps: Step 1 is to extract continuous relative phase maps for each isolated object with single-shot FTP method and spatial phase unwrapping; Step 2 is to obtain an absolute phase map of the entire scene using PSP method, albeit motion-induced errors exist on the extracted absolute phase map; and Step 3 is to shift the continuous relative phase maps from Step 1 to generate final absolute phase maps for each isolated object by referring to the absolute phase map with error from Step 2. Experiments demonstrate the success of the proposed computational framework for measuring multiple isolated rapidly moving objects.
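    The three steps map onto three small operations: an FTP relative phase per object (single fringe image, band-pass around the carrier), a PSP phase for the whole scene, and a per-object shift by an integer number of 2π that ties the relative map to the absolute one. The sketch below is a simplified 1-D-carrier version with hypothetical parameters, not the authors' implementation.

```python
import numpy as np

def ftp_relative_phase(fringe, carrier_bin):
    """Step 1: wrapped relative phase via single-shot FTP (carrier along x)."""
    F = np.fft.fft(fringe, axis=1)
    band = np.zeros_like(F)
    lo, hi = carrier_bin // 2, 3 * carrier_bin // 2      # keep only the +1 order
    band[:, lo:hi] = F[:, lo:hi]
    return np.angle(np.fft.ifft(band, axis=1))           # unwrap spatially per object

def psp_phase(images, deltas):
    """Step 2: least-squares N-step phase-shifting phase (wrapped)."""
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)

def shift_to_absolute(phi_rel_unwrapped, phi_abs_with_error, obj_mask):
    """Step 3: shift the object's relative phase by an integer multiple of 2*pi."""
    k = np.round(np.mean((phi_abs_with_error - phi_rel_unwrapped)[obj_mask]) / (2 * np.pi))
    return phi_rel_unwrapped + 2 * np.pi * k
```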

  8. Research on measurement method of optical camouflage effect of moving object

    NASA Astrophysics Data System (ADS)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is aimed mainly at static targets and therefore cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research subject. The adaptive background update algorithm of Surendra is improved, and a method of optical camouflage effect detection using the Lab color space for moving-object detection is presented. The binary image of the moving object is extracted by this measurement technique, and over the image sequence, characteristic parameters such as the degree of dispersion, eccentricity, complexity, and moment invariants are used to construct the feature vector space. The Euclidean distance for the moving target treated with digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, indicating that the degree of dispersion, eccentricity, complexity, and moment invariants of the digitally camouflaged target differ greatly from those of the moving target without digital camouflage. The measurement results show that the camouflage effect was good. In addition, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of the target to the background under dynamic conditions. As a next step, we plan to extend this camouflage effect measurement of moving targets to the infrared band.
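    The feature vector named in the record (dispersion, eccentricity, complexity, moment invariants) and the Euclidean comparison can be sketched from a binary silhouette. The sketch below uses scikit-image region properties; that library choice and the exact dispersion/complexity formulas are my assumptions, not necessarily the authors'.

```python
import numpy as np
from skimage import measure

def shape_features(mask):
    """Feature vector for the largest connected region of a binary moving-object mask."""
    labels = measure.label(mask.astype(int))
    props = max(measure.regionprops(labels), key=lambda p: p.area)
    dispersion = props.perimeter ** 2 / (4 * np.pi * props.area)   # compactness-style dispersion
    complexity = props.perimeter / np.sqrt(props.area)
    hu = props.moments_hu                                          # 7 moment invariants
    return np.hstack([dispersion, props.eccentricity, complexity, hu])

def camouflage_distance(mask_camouflaged, mask_plain):
    """Larger Euclidean distance => the camouflaged silhouette differs more from the plain one."""
    return np.linalg.norm(shape_features(mask_camouflaged) - shape_features(mask_plain))
```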

  9. A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP

    PubMed Central

    Balduzzi, David; Tononi, Giulio

    2012-01-01

    In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855

  10. Automatic acquisition of motion trajectories: tracking hockey players

    NASA Astrophysics Data System (ADS)

    Okuma, Kenji; Little, James J.; Lowe, David

    2003-12-01

    Computer systems that have the capability of analyzing complex and dynamic scenes play an essential role in video annotation. Scenes can be complex in such a way that there are many cluttered objects with different colors, shapes and sizes, and can be dynamic with multiple interacting moving objects and a constantly changing background. In reality, there are many scenes that are complex, dynamic, and challenging enough for computers to describe. These scenes include sports games, air traffic, car traffic, street intersections, and cloud transformations. Our research is about the challenge of inventing a descriptive computer system that analyzes scenes of hockey games where multiple moving players interact with each other on a constantly moving background due to camera motions. Ultimately, such a computer system should be able to acquire reliable data by extracting the players' motion as their trajectories, querying them by analyzing the descriptive information of the data, and predicting the motions of some hockey players based on the result of the query. Among these three major aspects of the system, we primarily focus on visual information of the scenes, that is, how to automatically acquire motion trajectories of hockey players from video. More accurately, we automatically analyze the hockey scenes by estimating parameters (i.e., pan, tilt, and zoom) of the broadcast cameras, tracking hockey players in those scenes, and constructing a visual description of the data by displaying trajectories of those players. Many technical problems in vision such as fast and unpredictable players' motions and rapid camera motions make our challenge worth tackling. To the best of our knowledge, there have not been any automatic video annotation systems for hockey developed in the past. Although there are many obstacles to overcome, our efforts and accomplishments would hopefully establish the infrastructure of the automatic hockey annotation system and become a milestone for research in automatic video annotation in this domain.

  11. Structural geology practice and learning, from the perspective of cognitive science

    NASA Astrophysics Data System (ADS)

    Shipley, Thomas F.; Tikoff, Basil; Ormand, Carol; Manduca, Cathy

    2013-09-01

    Spatial ability is required by practitioners and students of structural geology and so, considering spatial skills in the context of cognitive science has the potential to improve structural geology teaching and practice. Spatial thinking skills may be organized using three dichotomies, which can be linked to structural geology practice. First, a distinction is made between separating (attending to part of a whole) and combining (linking together aspects of the whole). While everyone has a basic ability to separate and combine, experts attend to differences guided by experiences of rock properties in context. Second, a distinction is made between seeing the relations among multiple objects as separate items or the relations within a single object with multiple parts. Experts can flexibly consider relations among or between objects to optimally reason about different types of spatial problems. Third, a distinction is made between reasoning about stationary and moving objects. Experts recognize static configurations that encode a movement history, and create mental models of the processes that led to the static state. The observations and inferences made by a geologist leading a field trip are compared with the corresponding observations and inferences made by a cognitive psychologist interested in spatial learning. The presented framework provides a vocabulary for discussing spatial skills both within and between the fields of structural geology and cognitive psychology.

  12. The temporal dynamics of heading perception in the presence of moving objects

    PubMed Central

    Fajen, Brett R.

    2015-01-01

    Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models. PMID:26510765

  13. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency Identification) devices are now used more and more widely to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when the moving object passes a node of the network, but they do not detect the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data thus becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filtering, spatial refinement, temporal filtering, and probability calculation. Finally, experiments are conducted on simulated data to study the performance of the index; the precision and recall of the result set are defined, and how the query arguments affect them is discussed.
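    The final "probability calculation" step has to turn the between-readings uncertainty into a number. The toy sketch below assumes a simplified reachability model (uniform position over the interval the object could occupy given a maximum speed); the model and all names are my illustration, not the paper's.

```python
def in_range_probability(t_enter, t_exit, edge_len, v_max, qry_lo, qry_hi, t_q):
    """P(object lies within [qry_lo, qry_hi] along the edge at time t_q).

    Between two RFID readings the object can be anywhere it could reach without
    exceeding v_max while still exiting on time; position is assumed uniform there.
    """
    if not (t_enter <= t_q <= t_exit):
        return 0.0
    lo = max(0.0, edge_len - v_max * (t_exit - t_q))   # must still reach the exit node
    hi = min(edge_len, v_max * (t_q - t_enter))        # cannot have moved farther than v_max allows
    if hi <= lo:
        return 0.0
    overlap = max(0.0, min(hi, qry_hi) - max(lo, qry_lo))
    return overlap / (hi - lo)
```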

  14. Robot environment expert system

    NASA Technical Reports Server (NTRS)

    Potter, J. L.

    1985-01-01

    The Robot Environment Expert System uses a hexadecimal tree data structure to model a complex robot environment in which not only the robot arm moves, but the robot itself and other objects may also move. The hextree model allows dynamic updating, collision avoidance, and path planning over time to avoid moving objects.

  15. A Rotatable Quality Control Phantom for Evaluating the Performance of Flat Panel Detectors in Imaging Moving Objects.

    PubMed

    Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki

    2016-02-01

    As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD systems, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for easy evaluation of the performance (spatial resolution and dynamic range) of FPDs in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it enables easy visual evaluation of imaging performance (spatial resolution and dynamic range).

  16. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated into the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and a moving object at the present time; and (2) prediction of the minimum distance in the future, in order to anticipate possible collisions with the moving obstacles and estimate the collision time.
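    The L-infinity case can indeed be written as a small linear program. The sketch below formulates it for two polyhedra given by their vertices and solves it with scipy; the vertex representation, variable names, and solver choice are mine, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

def linf_distance(A_verts, B_verts):
    """Minimum L-infinity distance between conv(A_verts) and conv(B_verts) via an LP.

    A_verts : (m, d) vertices of the robot polyhedron
    B_verts : (n, d) vertices of the obstacle polyhedron
    """
    m, d = A_verts.shape
    n = B_verts.shape[0]
    # variables: [lambda (m), mu (n), t]; minimize t
    c = np.zeros(m + n + 1)
    c[-1] = 1.0
    rows = []
    for k in range(d):
        rows.append(np.hstack([A_verts[:, k], -B_verts[:, k], [-1.0]]))   #  (p_A - p_B)_k <= t
        rows.append(np.hstack([-A_verts[:, k], B_verts[:, k], [-1.0]]))   # -(p_A - p_B)_k <= t
    A_ub, b_ub = np.array(rows), np.zeros(2 * d)
    A_eq = np.zeros((2, m + n + 1))
    A_eq[0, :m] = 1.0          # convex combination weights for A sum to 1
    A_eq[1, m:m + n] = 1.0     # convex combination weights for B sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0, 1.0], bounds=(0, None))
    return res.fun

# Two unit squares two units apart along x: the L-infinity distance is 1.
A = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
B = np.array([[2, 0], [3, 0], [3, 1], [2, 1]], float)
print(round(linf_distance(A, B), 6))
```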

  17. Putting a Twist on Inquiry

    ERIC Educational Resources Information Center

    Kemp, Andrew

    2005-01-01

    Everything moves. Even apparently stationary objects such as houses, roads, or mountains are moving because they sit on a spinning planet orbiting the Sun. Not surprisingly, the concepts of motion and the forces that affect moving objects are an integral part of the middle school science curriculum. However, middle school students are often taught…

  18. The Relativistic Wave Vector

    ERIC Educational Resources Information Center

    Houlrik, Jens Madsen

    2009-01-01

    The Lorentz transformation applies directly to the kinematics of moving particles viewed as geometric points. Wave propagation, on the other hand, involves moving planes which are extended objects defined by simultaneity. By treating a plane wave as a geometric object moving at the phase velocity, novel results are obtained that illustrate the…

  19. System for Thermal Imaging of Hot Moving Objects

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard; Hundley, Jason

    2007-01-01

    The High Altitude/Re-Entry Vehicle Infrared Imaging (HARVII) system is a portable instrumentation system for tracking and thermal imaging of a possibly distant and moving object. The HARVII is designed specifically for measuring the changing temperature distribution on a space shuttle as it reenters the atmosphere. The HARVII system or other systems based on the design of the HARVII system could also be used for such purposes as determining temperature distributions in fires, on volcanoes, and on surfaces of hot models in wind tunnels. In yet another potential application, the HARVII or a similar system would be used to infer atmospheric pollution levels from images of the Sun acquired at multiple wavelengths over regions of interest. The HARVII system includes the Ratio Intensity Thermography System (RITS) and a tracking subsystem that keeps the RITS aimed at the moving object of interest. The subsystem of primary interest here is the RITS (see figure), which acquires and digitizes images of the same scene at different wavelengths in rapid succession. Assuming that the time interval between successive measurements is short enough that temperatures do not change appreciably, the digitized image data at the different wavelengths are processed to extract temperatures according to the principle of ratio-intensity thermography: The temperature at a given location in a scene is inferred from the ratios between or among intensities of infrared radiation from that location at two or more wavelengths. This principle, based on the Stefan-Boltzmann equation for the intensity of electromagnetic radiation as a function of wavelength and temperature, is valid as long as the observed body is a gray or black body and there is minimal atmospheric absorption of radiation.

  20. Planning Paths Through Singularities in the Center of Mass Space

    NASA Technical Reports Server (NTRS)

    Doggett, William R.; Messner, William C.; Juang, Jer-Nan

    1998-01-01

    The center of mass space is a convenient space for planning motions that minimize reaction forces at the robot's base or optimize the stability of a mechanism. A unique problem associated with path planning in the center of mass space is the potential existence of multiple center of mass images for a single Cartesian obstacle, since a single center of mass location can correspond to multiple robot joint configurations. The existence of multiple images results in a need to either maintain multiple center of mass obstacle maps or to update obstacle locations when the robot passes through a singularity, such as when it moves from an elbow-up to an elbow-down configuration. To illustrate the concepts presented in this paper, a path is planned for an example task requiring motion through multiple center of mass space maps. The object of the path planning algorithm is to locate the bang-bang acceleration profile that minimizes the robot's base reactions in the presence of a single Cartesian obstacle. To simplify the presentation, only non-redundant robots are considered and joint non-linearities are neglected.

  1. Shadow detection of moving objects based on multisource information in Internet of things

    NASA Astrophysics Data System (ADS)

    Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian

    2017-05-01

    Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of Things, and detecting the shadow of a moving target is an important step within it: the accuracy of shadow detection directly affects the object detection results. Surveying the variety of existing shadow detection methods, we find that using only one feature cannot produce accurate detection results. We therefore present a new shadow detection method that combines colour information, optical invariance, and texture features. Through a comprehensive analysis of the detection results from these three kinds of information, the shadow is determined effectively. The experiments show an ideal effect when the advantages of the various methods are combined.
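    Multi-cue shadow detection of this kind is often implemented as per-pixel voting across the cues. The sketch below uses three simplified stand-ins (a luminance ratio, a chromaticity-constancy test, and a gradient-based texture test) with hypothetical thresholds; it illustrates the fusion idea rather than the paper's exact cues.

```python
import numpy as np

def shadow_mask(frame, background, fg_mask,
                lum_lo=0.4, lum_hi=0.95, chroma_tol=0.04, tex_tol=12.0):
    """Vote three cues per foreground pixel; >= 2 votes => shadow rather than object.

    frame, background : float RGB images scaled to [0, 1]
    fg_mask           : boolean foreground mask from background subtraction
    """
    eps = 1e-6
    lum_f, lum_b = frame.mean(axis=2), background.mean(axis=2)
    ratio = lum_f / (lum_b + eps)
    cue_lum = (ratio > lum_lo) & (ratio < lum_hi)            # darker, but not too dark

    chroma_f = frame / (frame.sum(axis=2, keepdims=True) + eps)
    chroma_b = background / (background.sum(axis=2, keepdims=True) + eps)
    cue_chroma = np.abs(chroma_f - chroma_b).sum(axis=2) < chroma_tol   # colour preserved

    # texture cue: local gradient magnitude changes little under a cast shadow
    gx_f, gy_f = np.gradient(lum_f)
    gx_b, gy_b = np.gradient(lum_b)
    cue_tex = np.abs(np.hypot(gx_f, gy_f) - np.hypot(gx_b, gy_b)) < tex_tol / 255.0

    votes = cue_lum.astype(int) + cue_chroma.astype(int) + cue_tex.astype(int)
    return fg_mask & (votes >= 2)
```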

  2. Pop-out in visual search of moving targets in the archer fish.

    PubMed

    Ben-Tov, Mor; Donchin, Opher; Ben-Shahar, Ohad; Segev, Ronen

    2015-03-10

    Pop-out in visual search reflects the capacity of observers to rapidly detect visual targets independent of the number of distracting objects in the background. Although it may be beneficial to most animals, pop-out behaviour has been observed only in mammals, where neural correlates are found in primary visual cortex as contextually modulated neurons that encode aspects of saliency. Here we show that archer fish can also utilize this important search mechanism by exhibiting pop-out of moving targets. We explore neural correlates of this behaviour and report the presence of contextually modulated neurons in the optic tectum that may constitute the neural substrate for a saliency map. Furthermore, we find that both behaving fish and neural responses exhibit additive responses to multiple visual features. These findings suggest that similar neural computations underlie pop-out behaviour in mammals and fish, and that pop-out may be a universal search mechanism across all vertebrates.

  3. Interactive exploration of surveillance video through action shot summarization and trajectory visualization.

    PubMed

    Meghdadi, Amir H; Irani, Pourang

    2013-12-01

    We propose a novel video visual analytics system for interactive exploration of surveillance video data. Our approach consists of providing analysts with various views of information related to moving objects in a video. To do this we first extract each object's movement path. We visualize each movement by (a) creating a single action shot image (a still image that coalesces multiple frames), (b) plotting its trajectory in a space-time cube and (c) displaying an overall timeline view of all the movements. The action shots provide a still view of the moving object while the path view presents movement properties such as speed and location. We also provide tools for spatial and temporal filtering based on regions of interest. This allows analysts to filter out large amounts of movement activities while the action shot representation summarizes the content of each movement. We incorporated this multi-part visual representation of moving objects in sViSIT, a tool to facilitate browsing through the video content by interactive querying and retrieval of data. Based on our interaction with security personnel who routinely interact with surveillance video data, we identified some of the most common tasks performed. This resulted in designing a user study to measure time-to-completion of the various tasks. These generally required searching for specific events of interest (targets) in videos. Fourteen different tasks were designed and a total of 120 min of surveillance video were recorded (indoor and outdoor locations recording movements of people and vehicles). The time-to-completion of these tasks were compared against a manual fast forward video browsing guided with movement detection. We demonstrate how our system can facilitate lengthy video exploration and significantly reduce browsing time to find events of interest. Reports from expert users identify positive aspects of our approach which we summarize in our recommendations for future video visual analytics systems.

  4. Single-shot ultrafast tomographic imaging by spectral multiplexing

    NASA Astrophysics Data System (ADS)

    Matlis, N. H.; Axley, A.; Leemans, W. P.

    2012-10-01

    Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.

  5. Come Together, Right Now: Dynamic Overwriting of an Object’s History through Common Fate

    PubMed Central

    Luria, Roy; Vogel, Edward K.

    2015-01-01

    The objects around us constantly move and interact, and the perceptual system needs to monitor on-line these interactions and to update the object’s status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects’ representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects “met” and remained stationary on the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects’ initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object initial representation plays an important role that can override even powerful grouping cues. PMID:24564468

  6. Basic level category structure emerges gradually across human ventral visual cortex.

    PubMed

    Iordan, Marius Cătălin; Greene, Michelle R; Beck, Diane M; Fei-Fei, Li

    2015-07-01

    Objects can be simultaneously categorized at multiple levels of specificity ranging from very broad ("natural object") to very distinct ("Mr. Woof"), with a mid-level of generality (basic level: "dog") often providing the most cognitively useful distinction between categories. It is unknown, however, how this hierarchical representation is achieved in the brain. Using multivoxel pattern analyses, we examined how well each taxonomic level (superordinate, basic, and subordinate) of real-world object categories is represented across occipitotemporal cortex. We found that, although in early visual cortex objects are best represented at the subordinate level (an effect mostly driven by low-level feature overlap between objects in the same category), this advantage diminishes compared to the basic level as we move up the visual hierarchy, disappearing in object-selective regions of occipitotemporal cortex. This pattern stems from a combined increase in within-category similarity (category cohesion) and between-category dissimilarity (category distinctiveness) of neural activity patterns at the basic level, relative to both subordinate and superordinate levels, suggesting that successive visual areas may be optimizing basic level representations.

  7. Small Arrays for Seismic Intruder Detections: A Simulation Based Experiment

    NASA Astrophysics Data System (ADS)

    Pitarka, A.

    2014-12-01

    Seismic sensors such as geophones and fiber optics have been increasingly recognized as promising technologies for intelligence surveillance, including intruder detection and perimeter defense systems. Geophone arrays can provide cost-effective intruder detection for protecting assets with large perimeters. A seismic intruder detection system uses one or more arrays of geophones designed to record seismic signals from footsteps and ground vehicles. Using a series of real-time signal processing algorithms, the system detects, classifies, and monitors the intruder's movement. We have carried out numerical experiments to demonstrate the capability of a seismic array to detect moving targets that generate seismic signals. The seismic source is modeled as a vertical force acting on the ground that generates continuous impulsive seismic signals with different predominant frequencies. Frequency-wavenumber analysis of the synthetic array data was used to demonstrate the array's capability to accurately determine the intruder's direction of movement. The performance of the array was also analyzed in detecting two or more objects moving at the same time. One drawback of a single-array system is its inefficiency at detecting seismic signals deflected by large underground objects. We show simulation results of the effect of an underground concrete block in shielding the seismic signal coming from an intruder. Based on simulations, we found that multiple small arrays can greatly improve the system's detection capability in the presence of underground structures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves with the vehicle, inducing ego-motion in the background. This results in mixed motion in the scene and makes it difficult to distinguish between target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods produce many false-positive detections. In this paper, we propose a procedure that relaxes the stationary-camera restriction of traditional moving object detection methods by introducing additional steps before and after detection. We also describe an FPGA implementation of the algorithm. The target application is a road vehicle's rear-view camera system. PMID:26712761
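    The "additional step before detection" amounts to compensating the camera's ego-motion so the background appears static before a conventional detector runs. A common way to do this is sketched below with OpenCV feature matching and a homography warp; this is my stand-in for the idea, not the paper's FPGA pipeline, and the threshold is a placeholder.

```python
import cv2
import numpy as np

def egomotion_compensated_diff(prev_gray, curr_gray, diff_thresh=25):
    """Warp the previous frame onto the current one, then difference.

    Background motion induced by the backing-up vehicle is largely removed by the
    homography; what survives the difference is candidate independent motion.
    """
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, warped_prev)
    _, motion_mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return motion_mask
```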

  9. View-invariant object category learning, recognition, and search: how spatial and object attention are coordinated using surface-based attentional shrouds.

    PubMed

    Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio

    2009-02-01

    How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.

  10. System and method for moving a probe to follow movements of tissue

    NASA Technical Reports Server (NTRS)

    Feldstein, C.; Andrews, T. W.; Crawford, D. W.; Cole, M. A. (Inventor)

    1981-01-01

    An apparatus is described for moving a probe that engages moving living tissue such as a heart or an artery that is penetrated by the probe, which moves the probe in synchronism with the tissue to maintain the probe at a constant location with respect to the tissue. The apparatus includes a servo positioner which moves a servo member to maintain a constant distance from a sensed object while applying very little force to the sensed object, and a follower having a stirrup at one end resting on a surface of the living tissue and another end carrying a sensed object adjacent to the servo member. A probe holder has one end mounted on the servo member and another end which holds the probe.

  11. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments

    PubMed Central

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-01-01

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613

  12. Real-time people counting system using a single video camera

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain

    2008-02-01

    There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end, non-calibrated video camera. The two main challenges addressed are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers: several persons may be considered a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes, and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab and gives encouraging results even at high frame rates. Experimental results obtained on the PETS2006 datasets are presented at the end of the paper.
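    The segmentation front end described here (motion-gated adaptive background, automatic thresholding, HSV shadow suppression) can be sketched compactly. The learning rate and shadow thresholds below are placeholders, and the structure is an approximation of the described pipeline rather than the authors' Matlab code.

```python
import cv2
import numpy as np

class PeopleSegmenter:
    """Adaptive background + Otsu thresholding + HSV shadow removal (simplified)."""

    def __init__(self, alpha=0.02):
        self.alpha = alpha          # background learning rate
        self.bg = None              # running-average colour background

    def step(self, frame_bgr):
        frame = frame_bgr.astype(np.float32)
        if self.bg is None:
            self.bg = frame.copy()
        diff = cv2.absdiff(frame, self.bg).max(axis=2).astype(np.uint8)
        _, fg = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

        # update the background only where the scene is judged static (motion-gated)
        static = (fg == 0)[..., None]
        self.bg = np.where(static, (1 - self.alpha) * self.bg + self.alpha * frame, self.bg)

        # shadow test in HSV: value drops while saturation and hue stay close to background
        hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        hsv_b = cv2.cvtColor(self.bg.astype(np.uint8), cv2.COLOR_BGR2HSV).astype(np.float32)
        v_ratio = hsv_f[..., 2] / (hsv_b[..., 2] + 1e-6)
        shadow = ((0.5 < v_ratio) & (v_ratio < 0.95)
                  & (np.abs(hsv_f[..., 1] - hsv_b[..., 1]) < 60)
                  & (np.abs(hsv_f[..., 0] - hsv_b[..., 0]) < 10))
        fg[shadow] = 0
        return fg          # binary person mask with shadows suppressed
```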

  13. Moving object detection and tracking in videos through turbulent medium

    NASA Astrophysics Data System (ADS)

    Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.

    2016-06-01

    This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because of turbulence that causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm that deals with the detection of real motions by separating the turbulence-induced motions using a two-level thresholding technique is used. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames in the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences and comparisons with an earlier method confirms that the proposed approach provides a more effective tracking of the targets.

  14. Online phase measuring profilometry for rectilinear moving object by image correction

    NASA Astrophysics Data System (ADS)

    Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin

    2015-11-01

    In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object is moving rectilinearly online, the differences in the size and pixel position of the object across the captured deformed patterns do not meet the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
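
    The correction step can be sketched as a homography reprojection followed by a translation that pins a feature point of the object to a fixed reference position. The corner coordinates and feature point below are placeholders, not calibration values from the paper.

```python
# Sketch: reproject a captured pattern to an aerial view, then re-center the object.
import cv2
import numpy as np

def correct_frame(frame, oblique_corners, aerial_corners, feature_xy, ref_xy):
    H = cv2.getPerspectiveTransform(np.float32(oblique_corners),
                                    np.float32(aerial_corners))
    h, w = frame.shape[:2]
    aerial = cv2.warpPerspective(frame, H, (w, h))        # oblique view -> aerial view
    # translate so the object's feature point (in the aerial view) sits at ref_xy
    dx, dy = ref_xy[0] - feature_xy[0], ref_xy[1] - feature_xy[1]
    T = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(aerial, T, (w, h))               # object appears stationary

frame = np.zeros((240, 320, 3), np.uint8)                  # placeholder image
oblique = [(40, 60), (280, 70), (300, 230), (20, 220)]     # pattern corners seen obliquely
aerial  = [(0, 0), (320, 0), (320, 240), (0, 240)]         # where they should map
out = correct_frame(frame, oblique, aerial, feature_xy=(150, 120), ref_xy=(160, 120))
print(out.shape)
```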

  15. Moving object localization using optical flow for pedestrian detection from a moving vehicle.

    PubMed

    Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun

    2014-01-01

    This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flows after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell in the current frame is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated between the consecutive images so that conforming optical flows are extracted. The regions of moving objects are detected as transformed objects, which differ from the previously registered background. Morphological processing is applied to obtain candidate human regions. In order to recognize the object, HOG features are extracted from the candidate region and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input to the linear SVM to classify the given input as pedestrian or non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement over the original HOG on the ETHZ pedestrian dataset.
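
    The final classification stage can be sketched with OpenCV's HOG descriptor and a linear SVM. The dummy patches, the 64 × 128 window and the C value below are assumptions for illustration; real labelled candidate regions from the optical-flow stage would replace the random data.

```python
# Sketch of HOG feature extraction + linear SVM classification of candidate regions.
import cv2
import numpy as np
from sklearn.svm import LinearSVC

hog = cv2.HOGDescriptor()                         # default 64x128 detection window

def hog_feature(patch):
    patch = cv2.resize(patch, (64, 128))          # (width, height)
    return hog.compute(patch).ravel()             # 3780-dim descriptor

# Dummy grayscale patches stand in for real candidate regions (training data required).
rng = np.random.default_rng(0)
train_patches = [rng.integers(0, 256, (128, 64), dtype=np.uint8) for _ in range(20)]
train_labels = [i % 2 for i in range(20)]         # 1 = pedestrian, 0 = non-pedestrian

X = np.array([hog_feature(p) for p in train_patches])
clf = LinearSVC(C=0.01).fit(X, train_labels)

candidate = rng.integers(0, 256, (90, 40), dtype=np.uint8)
print("pedestrian" if clf.predict([hog_feature(candidate)])[0] == 1 else "non-pedestrian")
```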

  16. A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller

    DTIC Science & Technology

    2017-03-01

    A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller. Jong Hwan Ko...Atlanta, GA 30332 USA. Contact Author Email: jonghwan.ko@gatech.edu. Abstract: This paper presents a low-power wireless image sensor node for...present a low-power wireless image sensor node with a noise-robust moving object detection and region-of-interest based rate controller [Fig. 1]. The

  17. [Metrological analysis of measuring systems in testing an anticipatory reaction to the position of a moving object].

    PubMed

    Aksiuta, E F; Ostashev, A V; Sergeev, E V; Aksiuta, V E

    1997-01-01

    The methods of the information (entropy) error theory were used to perform a metrological analysis of well-known commercial measuring systems for timing the anticipatory reaction (AR) to the position of a moving object, which are based on electromechanical, gas-discharge, and electronic principles. The required measurement accuracy was found to be achievable only with systems based on the electronic principle of moving-object simulation and AR measurement.

  18. Make the First Move: How Infants Learn about Self-Propelled Objects

    ERIC Educational Resources Information Center

    Rakison, David H.

    2006-01-01

    In 3 experiments, the author investigated 16- to 20-month-old infants' attention to dynamic and static parts in learning about self-propelled objects. In Experiment 1, infants were habituated to simple noncausal events in which a geometric figure with a single moving part started to move without physical contact from an identical geometric figure…

  19. Challenges in Developing XML-Based Learning Repositories

    NASA Astrophysics Data System (ADS)

    Auksztol, Jerzy; Przechlewski, Tomasz

    There is no doubt that modular design has many advantages, including the most important ones: reusability and cost-effectiveness. In e-learning community parlance the modules are termed Learning Objects (LOs) [11]. An increasing number of learning objects have been created and published online, several standards have been established, and multiple repositories developed for them. For example Cisco Systems, Inc., "recognizes a need to move from creating and delivering large inflexible training courses, to database-driven objects that can be reused, searched, and modified independent of their delivery media" [6]. The learning object paradigm of education resources authoring is promoted mainly to reduce the cost of content development and to increase its quality. A frequently used metaphor for the Learning Objects paradigm compares them to Lego blocks or objects in object-oriented program design [25]. However, a metaphor is only an abstract idea, which must be turned into something more concrete to be usable. The problem is that many papers on LOs end up solely in metaphors. In our opinion the Lego and OO metaphors are a gross oversimplification of the problem, as it is much easier to build a Lego set or design objects in an OO program than to develop truly interoperable, context-free learning content.

  20. Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.

    PubMed

    Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P

    2016-04-01

    Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.

  1. Tabletop computed lighting for practical digital photography.

    PubMed

    Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby

    2007-01-01

    We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.

  2. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds

    PubMed Central

    Howe, Piers D. L.

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources. PMID:28410383

  3. Shared processing in multiple object tracking and visual working memory in the absence of response order and task order confounds.

    PubMed

    Lapierre, Mark D; Cropper, Simon J; Howe, Piers D L

    2017-01-01

    To understand how the visual system represents multiple moving objects and how those representations contribute to tracking, it is essential that we understand how the processes of attention and working memory interact. In the work described here we present an investigation of that interaction via a series of tracking and working memory dual-task experiments. Previously, it has been argued that tracking is resistant to disruption by a concurrent working memory task and that any apparent disruption is in fact due to observers making a response to the working memory task, rather than due to competition for shared resources. Contrary to this, in our experiments we find that when task order and response order confounds are avoided, all participants show a similar decrease in both tracking and working memory performance. However, if task and response order confounds are not adequately controlled for we find substantial individual differences, which could explain the previous conflicting reports on this topic. Our results provide clear evidence that tracking and working memory tasks share processing resources.

  4. Additivity of Feature-Based and Symmetry-Based Grouping Effects in Multiple Object Tracking

    PubMed Central

    Wang, Chundi; Zhang, Xuemin; Li, Yongna; Lyu, Chuang

    2016-01-01

    Multiple object tracking (MOT) is an attentional process wherein people track several moving targets among several distractors. Symmetry, an important indicator of regularity, is a general spatial pattern observed in natural and artificial scenes. According to the “laws of perceptual organization” proposed by Gestalt psychologists, regularity is a principle of perceptual grouping, such as similarity and closure. A great deal of research has reported that feature-based similarity grouping (e.g., grouping based on color, size, or shape) among targets in MOT tasks can improve tracking performance. However, no additive feature-based grouping effects have been reported where the tracking objects had two or more features. “Additive effect” refers to a greater grouping effect produced by grouping based on multiple cues instead of one cue. Can spatial symmetry produce a grouping effect similar to that of feature similarity in MOT tasks? Are the grouping effects based on symmetry and feature similarity additive? This study includes four experiments to address these questions. The results of Experiments 1 and 2 demonstrated the automatic symmetry-based grouping effects. More importantly, an additive grouping effect of symmetry and feature similarity was observed in Experiments 3 and 4. Our findings indicate that symmetry can produce an enhanced grouping effect in MOT and facilitate the grouping effect based on color or shape similarity. The “where” and “what” pathways might have played an important role in the additive grouping effect. PMID:27199875

  5. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
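
    With the L(infinity) norm, the minimum-distance computation between two convex polyhedra given by their vertex sets reduces to a linear program. The sketch below is one such formulation, an assumption about the exact setup since the abstract does not spell it out, solved with scipy.optimize.linprog.

```python
# Sketch: L-infinity minimum distance between two convex polyhedra as an LP.
import numpy as np
from scipy.optimize import linprog

def min_distance_linf(V, W):
    """V: (n,3) vertices of one body, W: (m,3) vertices of the other."""
    n, m = len(V), len(W)
    dim = V.shape[1]
    # variables: lambda (n), mu (m), t  -> minimize t
    c = np.zeros(n + m + 1)
    c[-1] = 1.0
    A_ub, b_ub = [], []
    for k in range(dim):                       # |p_k - q_k| <= t, componentwise
        A_ub.append(np.concatenate([ V[:, k], -W[:, k], [-1.0]]))
        A_ub.append(np.concatenate([-V[:, k],  W[:, k], [-1.0]]))
        b_ub += [0.0, 0.0]
    A_eq = np.zeros((2, n + m + 1))            # convex-combination constraints
    A_eq[0, :n] = 1.0
    A_eq[1, n:n + m] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0, 1.0])
    return res.fun                             # L-infinity distance (0 => contact)

cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
print(min_distance_linf(cube, cube + [3.0, 0.0, 0.0]))   # -> 2.0
```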

  6. New method for finding multiple meaningful trajectories

    NASA Astrophysics Data System (ADS)

    Bao, Zhonghao; Flachs, Gerald M.; Jordan, Jay B.

    1995-07-01

    Mathematical foundations and algorithms for efficiently finding multiple meaningful trajectories (FMMT) in a sequence of digital images are presented. A meaningful trajectory is motion created by a sentient being or by a device under the control of a sentient being. It is smooth and predictable over short time intervals. A meaningful trajectory can suddenly appear or disappear in sequence images. The development of the FMMT is based on these assumptions. A finite state machine in the FMMT is used to model the trajectories under the conditions of occlusions and false targets. Each possible trajectory is associated with an initial state of a finite state machine. When two frames of data are available, a linear predictor is used to predict the locations of all possible trajectories. All trajectories within a certain error bound are moved to a monitoring trajectory state. When trajectories attain three consecutive good predictions, they are moved to a valid trajectory state and considered to be locked into a tracking mode. If an object is occluded while in the valid trajectory state, the predicted position is used to continue to track; however, the confidence in the trajectory is lowered. If the trajectory confidence falls below a lower limit, the trajectory is terminated. Results are presented that illustrate the FMMT applied to track multiple munitions fired from a missile in a sequence of images. Accurate trajectories are determined even in poor images where the probabilities of miss and false alarm are very high.
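
    The per-trajectory state machine can be sketched as follows; the state names, gate size, and confidence increments are assumptions, while the constant-velocity predictor and the three-good-predictions rule for entering the valid (locked) state follow the description above.

```python
# Schematic trajectory state machine with a linear (constant-velocity) predictor.
INITIAL, MONITORING, VALID, TERMINATED = range(4)

class Trajectory:
    def __init__(self, p0, p1, gate=5.0):
        self.state = MONITORING           # two frames available -> can predict
        self.prev, self.curr = p0, p1
        self.good_predictions = 0
        self.confidence = 1.0
        self.gate = gate                  # max prediction error to accept a match

    def predict(self):                    # constant-velocity prediction
        return (2 * self.curr[0] - self.prev[0], 2 * self.curr[1] - self.prev[1])

    def update(self, detection):
        """detection: (x, y) of the nearest detection, or None if occluded."""
        pred = self.predict()
        if detection is not None and max(abs(detection[0] - pred[0]),
                                         abs(detection[1] - pred[1])) <= self.gate:
            self.prev, self.curr = self.curr, detection
            self.good_predictions += 1
            self.confidence = min(1.0, self.confidence + 0.1)
            if self.state == MONITORING and self.good_predictions >= 3:
                self.state = VALID        # locked into tracking mode
        elif self.state == VALID:
            self.prev, self.curr = self.curr, pred   # coast on the prediction
            self.confidence -= 0.25                  # occlusion lowers confidence
            if self.confidence < 0.25:
                self.state = TERMINATED
        else:
            self.state = TERMINATED

tr = Trajectory((0, 0), (1, 1))
tr.update((2, 2)); tr.update((3, 3)); tr.update((4, 4))
print(tr.state == VALID)   # True after three consecutive good predictions
```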

  7. Searching for moving objects in HSC-SSP: Pipeline and preliminary results

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Tung; Lin, Hsing-Wen; Alexandersen, Mike; Lehner, Matthew J.; Wang, Shiang-Yu; Wang, Jen-Hung; Yoshida, Fumi; Komiyama, Yutaka; Miyazaki, Satoshi

    2018-01-01

    The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is currently the deepest wide-field survey in progress. The 8.2 m aperture of the Subaru telescope is very powerful in detecting faint/small moving objects, including near-Earth objects, asteroids, centaurs, and Trans-Neptunian objects (TNOs). However, the cadence and dithering pattern of the HSC-SSP are not designed for detecting moving objects, making it difficult to do so systematically. In this paper, we introduce a new pipeline for detecting moving objects (specifically TNOs) in a non-dedicated survey. The HSC-SSP catalogs are sliced into HEALPix partitions. Then, the stationary detections and false positives are removed with a machine-learning algorithm to produce a list of moving object candidates. An orbit linking algorithm and visual inspections are executed to generate the final list of detected TNOs. The preliminary results of a search for TNOs using this new pipeline on data from the first HSC-SSP data release (2014 March to 2015 November) present 231 TNO/Centaur candidates. The bright candidates with Hr < 7.7 and i > 5 show that the best-fitting slope of a single power law to the absolute magnitude distribution is 0.77. The g - r color distribution of hot HSC-SSP TNOs indicates a bluer peak at g - r = 0.9, which is consistent with the bluer peak of the bimodal color distribution in the literature.
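
    The catalog-slicing step can be illustrated with the healpy library: each detection is assigned to a HEALPix partition from its sky coordinates. The nside value and the coordinates below are placeholders, not the pipeline's actual settings.

```python
# Sketch: group catalog detections into HEALPix partitions (assumes healpy is installed).
import numpy as np
import healpy as hp

nside = 32                                        # hypothetical partition resolution
ra  = np.array([150.1, 150.3, 210.7])             # degrees, placeholder detections
dec = np.array([  2.2,   2.4,  -0.5])

pix = hp.ang2pix(nside, ra, dec, lonlat=True)     # HEALPix index per detection
partitions = {}
for i, p in enumerate(pix):
    partitions.setdefault(int(p), []).append(i)   # detections grouped per partition
print(partitions)
```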

  8. Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking

    PubMed Central

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking the moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed nearby the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changed or the occlusion recovered and outperforms the traditional particle filter-based tracking methods. PMID:23843739

  9. Memory-based multiagent coevolution modeling for robust moving object tracking.

    PubMed

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking the moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed nearby the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changed or the occlusion recovered and outperforms the traditional particle filter-based tracking methods.

  10. Method and apparatus for hybrid position/force control of multi-arm cooperating robots

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A. (Inventor)

    1989-01-01

    Two or more robotic arms having end effectors rigidly attached to an object to be moved are disclosed. A hybrid position/force control system is provided for driving each of the robotic arms. The object to be moved is represented as having a total mass that consists of the actual mass of the object to be moved plus the mass of the moveable arms that are rigidly attached to the moveable object. The arms are driven in a positive way by the hybrid control system to assure that each arm shares in the position/force applied to the object. The burden of actuation is shared by each arm in a non-conflicting way as the arms independently control the position of, and force upon, a designated point on the object.

  11. Multiple anatomy optimization of accumulated dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, W. Tyler, E-mail: watkinswt@virginia.edu; Siebers, Jeffrey V.; Moore, Joseph A.

    Purpose: To investigate the potential advantages of multiple anatomy optimization (MAO) for lung cancer radiation therapy compared to the internal target volume (ITV) approach. Methods: MAO aims to optimize a single fluence to be delivered under free-breathing conditions such that the accumulated dose meets the plan objectives, where accumulated dose is defined as the sum of deformably mapped doses computed on each phase of a single four dimensional computed tomography (4DCT) dataset. Phantom and patient simulation studies were carried out to investigate potential advantages of MAO compared to ITV planning. Through simulated delivery of the ITV- and MAO-plans, target dose variations were also investigated. Results: By optimizing the accumulated dose, MAO shows the potential to ensure dose to the moving target meets plan objectives while simultaneously reducing dose to organs at risk (OARs) compared with ITV planning. While consistently superior to the ITV approach, MAO resulted in equivalent OAR dosimetry at planning objective dose levels to within 2% volume in 14/30 plans and to within 3% volume in 19/30 plans for each lung V20, esophagus V25, and heart V30. Despite large variations in per-fraction respiratory phase weights in simulated deliveries at high dose rates (e.g., treating 4/10 phases during single fraction beams) the cumulative clinical target volume (CTV) dose after 30 fractions and per-fraction dose were constant independent of planning technique. In one case considered, however, per-phase CTV dose varied from 74% to 117% of prescription implying the level of ITV-dose heterogeneity may not be appropriate with conventional, free-breathing delivery. Conclusions: MAO incorporates 4DCT information in an optimized dose distribution and can achieve a superior plan in terms of accumulated dose to the moving target and OAR sparing compared to ITV-plans. An appropriate level of dose heterogeneity in MAO plans must be further investigated.
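
    The accumulated-dose definition used above can be written as a one-line sum; the sketch below assumes the deformable mapping to a reference phase has already been applied and uses equal phase weights purely for illustration.

```python
# Toy numerical sketch of the accumulated-dose definition (sum over 4DCT phases).
import numpy as np

n_phases, shape = 10, (4, 4, 4)
rng = np.random.default_rng(1)
mapped_dose = [rng.random(shape) for _ in range(n_phases)]   # dose per phase, mapped to a reference grid
weights = np.full(n_phases, 1.0 / n_phases)                  # illustrative equal phase weights

accumulated = sum(w * d for w, d in zip(weights, mapped_dose))
print(accumulated.mean())
```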

  12. Gaze movements and spatial working memory in collision avoidance: a traffic intersection task

    PubMed Central

    Hardiess, Gregor; Hansmann-Roth, Sabrina; Mallot, Hanspeter A.

    2013-01-01

    Street crossing under traffic is an everyday activity including collision detection as well as avoidance of objects in the path of motion. Such tasks demand extraction and representation of spatio-temporal information about relevant obstacles in an optimized format. Relevant task information is extracted visually by the use of gaze movements and represented in spatial working memory. In a virtual reality traffic intersection task, subjects are confronted with a two-lane intersection where cars are appearing with different frequencies, corresponding to high and low traffic densities. Under free observation and exploration of the scenery (using unrestricted eye and head movements) the overall task for the subjects was to predict the potential-of-collision (POC) of the cars or to adjust an adequate driving speed in order to cross the intersection without collision (i.e., to find the free space for crossing). In a series of experiments, gaze movement parameters, task performance, and the representation of car positions within working memory at distinct time points were assessed in normal subjects as well as in neurological patients suffering from homonymous hemianopia. In the following, we review the findings of these experiments together with other studies and provide a new perspective on the role of gaze behavior and spatial memory in collision detection and avoidance, focusing on the following questions: (1) Which sensory variables can be identified that support adequate collision detection? (2) How do gaze movements and working memory contribute to collision avoidance when multiple moving objects are present, and (3) how do they correlate with task performance? (4) How do patients with homonymous visual field defects (HVFDs) use gaze movements and working memory to compensate for visual field loss? In conclusion, we extend the theory of collision detection and avoidance in the case of multiple moving objects and provide a new perspective on the combined operation of external (bottom-up) and internal (top-down) cues in a traffic intersection task. PMID:23760667

  13. Millimeter wave radar system on a rotating platform for combined search and track functionality with SAR imaging

    NASA Astrophysics Data System (ADS)

    Aulenbacher, Uwe; Rech, Klaus; Sedlmeier, Johannes; Pratisto, Hans; Wellig, Peter

    2014-10-01

    Ground-based millimeter wave radar sensors offer the potential for weather-independent automatic ground surveillance day and night, e.g. for camp protection applications. The basic principle and experimental verification of a radar system concept are described in which an extreme off-axis positioning of the antenna(s) combines azimuthal mechanical beam steering with the formation of a circular-arc-shaped synthetic aperture (SA). In automatic ground surveillance, the function of search and detection of moving ground targets is performed by means of the conventional mechanical scan mode. The rotated antenna structure, designed as a small array with two or more RX antenna elements with simultaneous receiver chains, allows instantaneous tracking of multiple moving targets (monopulse principle). The simultaneously operated SAR mode yields areal images of the distribution of stationary scatterers. For ground surveillance applications this SAR mode is best suited for identifying possible threats by means of change detection. The feasibility of this concept was tested with an experimental radar system comprising a 94 GHz (W-band) FM-CW module with 1 GHz bandwidth and two RX antennas with parallel receiver channels, placed off-axis on a rotating platform. SAR mode and search/track mode were tested during an outdoor measurement campaign. The scenery of two persons walking along a road and partially through forest served as a test of the capability to track multiple moving targets. For SAR mode verification, an image of the area composed of roads, grassland, woodland and several man-made objects was reconstructed from the measured data.

  14. Feature-based interference from unattended visual field during attentional tracking in younger and older adults.

    PubMed

    Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman

    2011-02-01

    The ability to attend to multiple objects that move in the visual field is important for many aspects of daily functioning. The attentional capacity for such dynamic tracking, however, is highly limited and undergoes age-related decline. Several aspects of the tracking process can influence performance. Here, we investigated effects of feature-based interference from distractor objects that appear in unattended regions of the visual field with a hemifield-tracking task. Younger and older participants performed an attentional tracking task in one hemifield while distractor objects were concurrently presented in the unattended hemifield. Feature similarity between objects in the attended and unattended hemifields as well as motion speed and the number of to-be-tracked objects were parametrically manipulated. The results show that increasing feature overlap leads to greater interference from the unattended visual field. This effect of feature-based interference was only present in the slow speed condition, indicating that the interference is mainly modulated by perceptual demands. High-performing older adults showed a similar interference effect as younger adults, whereas low-performing older adults showed poor tracking performance overall.

  15. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    On the basis of the object information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a particular color or shape, and experiments indicate that such objects can be detected even against an interfering background. A moving-object detection electronic model imitating biological vision based on frog's eyes is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing stage, video information can be captured, processed, and displayed at the same time; information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can observe a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
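
    The edge-detection experiment can be approximated in a few lines with OpenCV; the sketch below runs the Canny detector on a crude frame-difference motion map. This is only an illustration on a hypothetical video file, whereas the paper implements the pipeline on FPGA/DSP hardware.

```python
# Sketch: Canny edges of moving objects via simple frame differencing.
import cv2

cap = cv2.VideoCapture("scene.avi")                 # hypothetical video source
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)                # crude motion map
    prev = gray
    edges = cv2.Canny(cv2.GaussianBlur(motion, (5, 5), 0), 50, 150)
    cv2.imshow("moving-object edges", edges)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break
cap.release()
```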

  16. The Impact Imperative: A Space Infrastructure Enabling a Multi-Tiered Earth Defense

    NASA Technical Reports Server (NTRS)

    Campbell, Jonathan W.; Phipps, Claude; Smalley, Larry; Reilly, James; Boccio, Dona

    2003-01-01

    Impacting at hypervelocity, an asteroid struck the Earth approximately 65 million years ago in the Yucatan Peninsula. This triggered the extinction of almost 70% of the species of life on Earth, including the dinosaurs. Other impacts prior to this one have caused even greater extinctions. Preventing collisions with the Earth by hypervelocity asteroids, meteoroids, and comets is the most important immediate space challenge facing human civilization. This is the Impact Imperative. We now believe that while there are about 2000 Earth-orbit-crossing rocks greater than 1 kilometer in diameter, there may be as many as 200,000 or more objects in the 100 m size range. Can anything be done about this fundamental existence question facing our civilization? The answer is a resounding yes! By using an intelligent combination of Earth and space based sensors coupled with an infrastructure of high-energy laser stations and other secondary mitigation options, we can deflect inbound asteroids, meteoroids, and comets and prevent them from striking the Earth. This can be accomplished by irradiating the surface of an inbound rock with sufficiently intense pulses so that ablation occurs. This ablation acts as a small rocket, incrementally changing the shape of the rock's orbit around the Sun. One-kilometer size rocks can be moved sufficiently in about a month, while smaller rocks may be moved in a shorter time span. We recommend that space objectives be immediately reprioritized to start us moving quickly towards an infrastructure that will support a multiple option defense capability. Planning and development for a lunar laser facility should be initiated immediately in parallel with other options. All mitigation options are greatly enhanced by robust early warning, detection, and tracking resources to find objects sufficiently prior to Earth orbit passage to allow significant intervention. Infrastructure options should include ground, LEO, GEO, Lunar, and libration point laser and sensor stations for providing early warning, tracking, and deflection. Other options should include space interceptors that will carry both laser and nuclear ablators for close range work. Response options must be developed to deal with the consequences of an impact should we move too slowly.

  17. Prediction processes during multiple object tracking (MOT): involvement of dorsal and ventral premotor cortices

    PubMed Central

    Atmaca, Silke; Stadler, Waltraud; Keitel, Anne; Ott, Derek V M; Lepsien, Jöran; Prinz, Wolfgang

    2013-01-01

    Background The multiple object tracking (MOT) paradigm is a cognitive task that requires parallel tracking of several identical, moving objects following nongoal-directed, arbitrary motion trajectories. Aims The current study aimed to investigate the employment of prediction processes during MOT. As an indicator for the involvement of prediction processes, we targeted the human premotor cortex (PM). The PM has been repeatedly implicated to serve the internal modeling of future actions and action effects, as well as purely perceptual events, by means of predictive feedforward functions. Materials and methods Using functional magnetic resonance imaging (fMRI), BOLD activations recorded during MOT were contrasted with those recorded during the execution of a cognitive control task that used an identical stimulus display and demanded similar attentional load. A particular effort was made to identify and exclude previously found activation in the PM-adjacent frontal eye fields (FEF). Results We replicated prior results, revealing occipitotemporal, parietal, and frontal areas to be engaged in MOT. Discussion The activation in frontal areas is interpreted to originate from dorsal and ventral premotor cortices. The results are discussed in light of our assumption that MOT engages prediction processes. Conclusion We propose that our results provide first clues that MOT does not only involve visuospatial perception and attention processes, but prediction processes as well. PMID:24363971

  18. MetaTracker: integration and abstraction of 3D motion tracking data from multiple hardware systems

    NASA Astrophysics Data System (ADS)

    Kopecky, Ken; Winer, Eliot

    2014-06-01

    Motion tracking has long been one of the primary challenges in mixed reality (MR), augmented reality (AR), and virtual reality (VR). Military and defense training can provide particularly difficult challenges for motion tracking, such as in the case of Military Operations in Urban Terrain (MOUT) and other dismounted, close quarters simulations. These simulations can take place across multiple rooms, with many fast-moving objects that need to be tracked with a high degree of accuracy and low latency. Many tracking technologies exist, such as optical, inertial, ultrasonic, and magnetic. Some tracking systems even combine these technologies to complement each other. However, there are no systems that provide a high-resolution, flexible, wide-area solution that is resistant to occlusion. While frameworks exist that simplify the use of tracking systems and other input devices, none allow data from multiple tracking systems to be combined, as if from a single system. In this paper, we introduce a method for compensating for the weaknesses of individual tracking systems by combining data from multiple sources and presenting it as a single tracking system. Individual tracked objects are identified by name, and their data is provided to simulation applications through a server program. This allows tracked objects to transition seamlessly from the area of one tracking system to another. Furthermore, it abstracts away the individual drivers, APIs, and data formats for each system, providing a simplified API that can be used to receive data from any of the available tracking systems. Finally, when single-piece tracking systems are used, those systems can themselves be tracked, allowing for real-time adjustment of the trackable area. This allows simulation operators to leverage limited resources in more effective ways, improving the quality of training.
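
    The aggregation idea can be sketched as a small server-side table that keeps the most recent pose reported for each named object, regardless of which tracking system produced it. The class and field names below are illustrative and are not the MetaTracker API.

```python
# Schematic sketch: merge pose reports from several tracking systems per named object.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    timestamp: float
    source: str                      # which tracking system reported it

class UnifiedTracker:
    def __init__(self):
        self._latest = {}            # object name -> most recent Pose

    def report(self, name: str, pose: Pose):
        cur = self._latest.get(name)
        if cur is None or pose.timestamp >= cur.timestamp:
            self._latest[name] = pose          # seamless hand-over between systems

    def get(self, name: str) -> Pose:
        return self._latest[name]

server = UnifiedTracker()
server.report("rifle_prop", Pose(1.0, 2.0, 0.5, 10.0, "optical"))
server.report("rifle_prop", Pose(1.1, 2.0, 0.5, 10.1, "inertial"))   # optical occluded
print(server.get("rifle_prop").source)   # inertial
```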

  19. A sequential-move game for enhancing safety and security cooperation within chemical clusters.

    PubMed

    Pavlova, Yulia; Reniers, Genserik

    2011-02-15

    The present paper provides a game theoretic analysis of strategic cooperation on safety and security among chemical companies within a chemical industrial cluster. We suggest a two-stage sequential move game between adjacent chemical plants and the so-called Multi-Plant Council (MPC). The MPC is considered in the game as a leader player who makes the first move, and the individual chemical companies are the followers. The MPC's objective is to achieve full cooperation among players through establishing a subsidy system at minimum expense. The rest of the players rationally react to the subsidies proposed by the MPC and play Nash equilibrium. We show that such a case of conflict between safety and security, and social cooperation, belongs to the 'coordination with assurance' class of games, and we explore the role of cluster governance (fulfilled by the MPC) in achieving a full cooperative outcome in domino effects prevention negotiations. The paper proposes an algorithm that can be used by the MPC to develop the subsidy system. Furthermore, a stepwise plan to improve cross-company safety and security management in a chemical industrial cluster is suggested and an illustrative example is provided. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Dynamic information processing states revealed through neurocognitive models of object semantics

    PubMed Central

    Clarke, Alex

    2015-01-01

    Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632

  1. Early Knowledge of Object Motion: Continuity and Inertia.

    ERIC Educational Resources Information Center

    Spelke, Elizabeth; And Others

    1994-01-01

    Investigated whether infants infer that a hidden, freely moving object will move continuously and smoothly. Six- to 10- month olds inferred that the object's path would be connected and unobstructed, in accord with continuity. Younger infants did not infer this, in accord with inertia. At 8 and 10 months, knowledge of inertia emerged but remained…

  2. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step for developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on a visual sensor, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falling aside, from normal activities. PMID:22368486

  3. Information extraction during simultaneous motion processing.

    PubMed

    Rideaux, Reuben; Edwards, Mark

    2014-02-01

    When confronted with multiple moving objects the visual system can process them in two stages: an initial stage in which a limited number of signals are processed in parallel (i.e. simultaneously) followed by a sequential stage. We previously demonstrated that during the simultaneous stage, observers could discriminate between presentations containing up to 5 vs. 6 spatially localized motion signals (Edwards & Rideaux, 2013). Here we investigate what information is actually extracted during the simultaneous stage and whether the simultaneous limit varies with the detail of information extracted. This was achieved by measuring the ability of observers to extract varied information from low detail, i.e. the number of signals presented, to high detail, i.e. the actual directions present and the direction of a specific element, during the simultaneous stage. The results indicate that the resolution of simultaneous processing varies as a function of the information which is extracted, i.e. as the information extraction becomes more detailed, from the number of moving elements to the direction of a specific element, the capacity to process multiple signals is reduced. Thus, when assigning a capacity to simultaneous motion processing, this must be qualified by designating the degree of information extraction. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  4. Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial-scanning.

    PubMed

    Ozbay, Baris N; Futia, Gregory L; Ma, Ming; Bright, Victor M; Gopinath, Juliet T; Hughes, Ethan G; Restrepo, Diego; Gibson, Emily A

    2018-05-25

    We present a miniature head mounted two-photon fiber-coupled microscope (2P-FCM) for neuronal imaging with active axial focusing enabled using a miniature electrowetting lens. We show three-dimensional two-photon imaging of neuronal structure and record neuronal activity from GCaMP6s fluorescence from multiple focal planes in a freely-moving mouse. Two-color simultaneous imaging of GFP and tdTomato fluorescence is also demonstrated. Additionally, dynamic control of the axial scanning of the electrowetting lens allows tilting of the focal plane enabling neurons in multiple depths to be imaged in a single plane. Two-photon imaging allows increased penetration depth in tissue yielding a working distance of 450 μm with an additional 180 μm of active axial focusing. The objective NA is 0.45 with a lateral resolution of 1.8 μm, an axial resolution of 10 μm, and a field-of-view of 240 μm diameter. The 2P-FCM has a weight of only ~2.5 g and is capable of repeatable and stable head-attachment. The 2P-FCM with dynamic axial scanning provides a new capability to record from functionally distinct neuronal layers, opening new opportunities in neuroscience research.

  5. XOPPS - OEL PROJECT PLANNER/SCHEDULER TOOL

    NASA Technical Reports Server (NTRS)

    Mulnix, C. L.

    1994-01-01

    XOPPS is a window-based graphics tool for scheduling and project planning that provides easy and fast on-screen WYSIWYG editing capabilities. It has a canvas area which displays the full image of the schedule being edited. The canvas contains a header area for text and a schedule area for plotting graphic representations of milestone objects in a flexible timeline. XOPPS is object-oriented, but it is unique in its capability for creating objects that have date attributes. Each object on the screen can be treated as a unit for moving, editing, etc. There is a mouse interface for simple control of pointer location. The user can position objects to pixel resolution, but objects with an associated date are positioned automatically in their correct timeline position in the schedule area. The schedule area has horizontal lines across the page with capabilities for multiple pages and for editing the number of lines per page and the line grid. The text on a line can be edited and a line can be moved with all objects on the line moving with it. The timeline display can be edited to plot any time period in a variety of formats from Fiscal year to Calendar Year and days to years. Text objects and image objects (rasterfiles and icons) can be created for placement anywhere on the page. Milestone event objects with a single associated date (and optional text and milestone symbol) and activity objects with start and end dates (and an optional completion date) have unique editing panels for entering data. A representation for schedule slips is also provided with the capability to automatically convert a milestone event to a slip. A milestone schedule on another computer can be saved to an ASCII file to be read by XOPPS. The program can print a schedule to a PostScript file. Dependencies between objects can also be displayed on the chart through the use of precedence lines. This program is not intended to replace a commercial scheduling/project management program. Because XOPPS has an ASCII file interface it can be used in conjunction with a project management tool to produce schedules with a quality appearance. XOPPS is written in C-language for Sun series workstations running SunOS. This package requires MIT's X Window System, Version 11 Revision 4, with OSF/Motif 1.1. A sample executable is included. XOPPS requires 375K main memory and 1.5Mb free disk space for execution. The standard distribution medium is a .25 inch streaming magnetic tape cartridge in UNIX tar format. XOPPS was developed in 1992, based on the Sunview version of OPPS (NPO-18439) developed in 1990. It is a copyrighted work with all copyright vested in NASA.

  6. Acoustic system for material transport

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Trinh, E. H.; Wang, T. G.; Elleman, D. D.; Jacobi, N. (Inventor)

    1983-01-01

    An object within a chamber is acoustically moved by applying wavelengths of different modes to the chamber to move the object between pressure wells formed by the modes. In one system, the object is placed in one end of the chamber while a resonant mode, applied along the length of the chamber, produces a pressure well at the location. The frequency is then switched to a second mode that produces a pressure well at the center of the chamber, to draw the object. When the object reaches the second pressure well and is still traveling towards the second end of the chamber, the acoustic frequency is again shifted to a third mode (which may equal the first mode) that has a pressure well in the second end portion of the chamber, to draw the object. A heat source may be located near the second end of the chamber to heat the sample, and after the sample is heated it can be cooled by moving it in a corresponding manner back to the first end of the chamber. The transducers for levitating and moving the object may be all located at the cool first end of the chamber.

  7. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    PubMed Central

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  8. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements.

    PubMed

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2014-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  9. Intersensory Redundancy Facilitates Learning of Arbitrary Relations between Vowel Sounds and Objects in Seven-Month-Old Infants.

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bahrick, Lorraine E.

    1998-01-01

    Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…

  10. May the Force Be with You!

    ERIC Educational Resources Information Center

    Young, Timothy; Guy, Mark

    2011-01-01

    Students have a difficult time understanding force, especially when dealing with a moving object. Many forces can be acting on an object at the same time, causing it to stay in one place or move. By directly observing these forces, students can better understand the effect these forces have on an object. With a simple, student-built device called…

  11. Radar based autonomous sensor module

    NASA Astrophysics Data System (ADS)

    Styles, Tim

    2016-10-01

    Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
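
    The range-Doppler processing mentioned above can be illustrated numerically: a 2-D FFT of an FMCW chirp data cube over fast time (range) and slow time (Doppler) concentrates a moving target into a single range-Doppler cell. The synthetic target and waveform sizes below are placeholders, not the module's actual parameters.

```python
# Toy range-Doppler map: fast-time FFT (range) then slow-time FFT (Doppler).
import numpy as np

n_chirps, n_samples = 64, 256
fast = np.arange(n_samples)
slow = np.arange(n_chirps)[:, None]

# synthetic beat signal: one target at range bin 40 with Doppler bin 10, plus noise
cube = np.exp(2j * np.pi * (40 * fast / n_samples + 10 * slow / n_chirps))
cube += 0.1 * (np.random.randn(n_chirps, n_samples)
               + 1j * np.random.randn(n_chirps, n_samples))

range_fft = np.fft.fft(cube, axis=1)                              # fast-time FFT -> range
rd_map = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)   # slow-time FFT -> Doppler
doppler_bin, range_bin = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print(range_bin, doppler_bin - n_chirps // 2)                     # ~ (40, 10): a moving object
```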

  12. The Hercules-Lyra association revisited. New age estimation and multiplicity study

    NASA Astrophysics Data System (ADS)

    Eisenbeiss, T.; Ammler-von Eiff, M.; Roell, T.; Mugrauer, M.; Adam, Ch.; Neuhäuser, R.; Schmidt, T. O. B.; Bedalov, A.

    2013-08-01

    Context. The Hercules-Lyra association, a purported nearby young moving group, contains a few tens of zero age main sequence stars of spectral types F to M. The existence and the properties of the Her-Lyr association are controversial and have been discussed in the literature. Aims: The present work reassesses the properties and the member list of the Her-Lyr association based on kinematics and age indicators. Many objects form multiple systems or have low-mass companions, and so we need to properly account for multiplicity. Methods: We use our own new imaging observations and archival data to identify multiple systems. The colors and magnitudes of kinematic candidates are compared to isochrones. We derive further information on the age based on Li depletion, rotation, and coronal and chromospheric activity. A set of canonical members is identified to infer mean properties. Membership criteria are derived from the mean properties and used to discard non-members. Results: The candidates selected from the literature belong to 35 stellar systems, 42.9% of which are multiple. Four multiple systems (V538 Aur, DX Leo, V382 Ser, and HH Leo) are confirmed in this work by common proper motion. An orbital solution is presented for the binary system which forms a hierarchical triple with HH Leo. Indeed, a group of candidates displays signatures of youth. Seven canonical members are identified. The distribution of Li equivalent widths of canonical Her-Lyr members is spread widely and is similar to that of the Pleiades and the UMa group. Gyrochronology gives an age of 257 ± 46 Myr, which is roughly in between the ages of the Pleiades and the Ursa Major group. The measures of chromospheric and coronal activity support the young age. Four membership criteria are presented based on kinematics, lithium equivalent width, chromospheric activity, and gyrochronological age. In total, eleven stars are identified as certain members, including co-moving objects, plus an additional 23 possible members, while 14 candidates are doubtful or can be rejected. A comparison to the mass function, however, indicates the presence of a large number of additional low-mass members, which remain unidentified. Based on observations made with ESO Telescopes at the Paranal Observatory under programs ID: 380.C-0248(A) (Service Mode, VLT-Yepun) and ID: 074.C-0084(B) (on 2005 Jan. 06, VLT-Yepun). Based on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institut für Astronomie and the Instituto de Astrofísica de Andalucía (CSIC).

  13. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.

  14. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low-noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach. PMID:29329210

  15. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating the correct 3D geometry of moving objects. Only limited literature is available, describing very few methods that address the problem of object motion during scanning, and all of the existing methods rely on their own models or sensors. Studies on error modelling or analysis of any of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such motion correction method. This method assumes the availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information along with the laser scanner data to correct the laser data, resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. It is to be noted that the other motion correction methods described in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the motion correction method, as well as a detailed account of the behavior and variation of the error due to different sensor components, both alone and in combination with each other. The analysis can be used to obtain insights into the optimal utilization of the available components for achieving the best results.
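    The correction equations themselves are not reproduced in the abstract; the sketch below only illustrates the general idea under the stated assumption that each laser return is timestamped and that a POS system supplies the scanned object's pose (rotation matrix and translation) at any time. The pose_lookup interface and the choice of reference epoch are hypothetical.

      import numpy as np

      def correct_scan(points_world, timestamps, pose_lookup):
          """points_world: (N, 3) laser returns, already in a static world frame.
          pose_lookup(t) -> (R, p): the object's rotation (3x3) and position (3,)
          at time t, e.g. interpolated from an on-board POS system.
          Returns the points re-expressed as if the object had stayed at its pose
          at the first timestamp, i.e. with the object's motion removed."""
          R0, p0 = pose_lookup(timestamps[0])            # reference epoch
          corrected = []
          for pt, t in zip(points_world, timestamps):
              R, p = pose_lookup(t)
              body = R.T @ (pt - p)                      # point in the object's body frame
              corrected.append(R0 @ body + p0)           # re-place at the reference pose
          return np.asarray(corrected)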

  16. Different approaches for centralized and decentralized water system management in multiple decision makers' problems

    NASA Astrophysics Data System (ADS)

    Anghileri, D.; Giuliani, M.; Castelletti, A.

    2012-04-01

    There is general agreement that one of the most challenging issues in water system management is the presence of many, often conflicting, interests as well as of several independent decision makers. The traditional approach to multi-objective water system management is centralized management, in which an ideal central regulator coordinates the operation of the whole system, exploiting all the available information and balancing all the operating objectives. Although this approach yields Pareto-optimal solutions representing the maximum achievable benefit, it is based on assumptions that strongly limit its application in real-world contexts: 1) top-down management, 2) existence of a central regulation institution, 3) complete information exchange within the system, 4) perfect economic efficiency. A bottom-up decentralized approach therefore seems more suitable for real-world applications, since the different reservoir operators may maintain their independence. In this work we tested the consequences of changing the water management approach from a centralized toward a decentralized one. In particular, we compared three different cases: the centralized management approach; the independent management approach, where each reservoir operator takes the daily release decision maximizing (or minimizing) his own operating objective independently of the others; and an intermediate approach, leading to the Nash equilibrium of the associated game, where the different reservoir operators try to model the behaviours of the other operators. The three approaches are demonstrated on a test case study composed of two reservoirs regulated for the minimization of flooding in different locations. The operating policies are computed by solving a single multi-objective optimal control problem in the centralized management approach; multiple single-objective optimization problems, one for each operator, in the independent case; and, in the last approach, by using techniques from game theory to describe the interaction between the two operators. Computational results show that the Pareto-optimal control policies obtained in the centralized approach dominate the control policies of both decentralized cases and that the so-called price of anarchy increases moving toward the independent management approach. However, the Nash equilibrium solution seems to be the most promising alternative, because it represents a good compromise, maximizing management efficiency without limiting the behaviours of the reservoir operators.
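    As a toy numerical illustration of the three schemes (the quadratic flood-cost functions and coefficients below are invented purely for illustration and are unrelated to the actual case study), one can compare the centralized optimum with the Nash equilibrium obtained by iterating best responses, and read off a price of anarchy:

      import numpy as np

      # Hypothetical daily-release costs for two operators sharing a downstream node.
      def J1(u1, u2):  # operator 1's flood cost
          return (u1 - 0.6) ** 2 + 0.8 * (u1 + u2 - 1.0) ** 2
      def J2(u1, u2):  # operator 2's flood cost
          return (u2 - 0.4) ** 2 + 0.8 * (u1 + u2 - 1.0) ** 2

      grid = np.linspace(0.0, 1.0, 1001)                 # admissible release decisions

      # Nash equilibrium of the independent game: iterate best responses to a fixed point.
      u1, u2 = 0.5, 0.5
      for _ in range(200):
          u1_new = grid[np.argmin(J1(grid, u2))]
          u2_new = grid[np.argmin(J2(u1_new, grid))]
          if abs(u1_new - u1) < 1e-9 and abs(u2_new - u2) < 1e-9:
              break
          u1, u2 = u1_new, u2_new

      # Centralized optimum: minimize the total cost over both decisions jointly.
      U1, U2 = np.meshgrid(grid, grid, indexing="ij")
      total = J1(U1, U2) + J2(U1, U2)
      i, j = np.unravel_index(np.argmin(total), total.shape)

      print("Nash:", (u1, u2), "centralized:", (grid[i], grid[j]))
      print("price of anarchy:", (J1(u1, u2) + J2(u1, u2)) / total[i, j])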

  17. Some characteristics of optokinetic eye-movement patterns : a comparative study.

    DOT National Transportation Integrated Search

    1970-07-01

    Long-associated with transportation ('railroad nystagmus'), optokinetic (OPK) nystagmus is an eye-movement reaction which occurs when a series of moving objects crosses the visual field or when an observer moves past a series of objects. Similar cont...

  18. A-Track: A new approach for detection of moving objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2016-10-01

    We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
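    The MILD line-detection algorithm itself is not described in the abstract; the snippet below is only a rough sketch of the kind of first step such a pipeline needs, namely flagging residual sources after subtracting a median reference frame. It assumes the FITS frames are already astrometrically aligned, and the threshold is arbitrary.

      import numpy as np
      from astropy.io import fits

      def moving_candidates(fits_paths, sigma=5.0):
          """Return (x, y, frame_index) triples of bright residual pixels, i.e.
          sources not present in the median of all frames (static stars cancel out)."""
          frames = [fits.getdata(p).astype(float) for p in fits_paths]
          reference = np.median(np.stack(frames), axis=0)    # static sky + background
          candidates = []
          for k, frame in enumerate(frames):
              residual = frame - reference
              threshold = residual.mean() + sigma * residual.std()
              ys, xs = np.nonzero(residual > threshold)
              candidates.extend((int(x), int(y), k) for x, y in zip(xs, ys))
          return candidates

    A line-detection step (as in MILD) would then keep only those candidates whose (x, y, frame) coordinates fall on a straight line, i.e. a constant-velocity track.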

  19. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.

  20. Static latching arrangement and method

    DOEpatents

    Morrison, Larry

    1988-01-01

    A latching assembly for use in latching a cable to and unlatching it from a given object in order to move an object from one location to another is disclosed herein. This assembly includes a weighted sphere mounted to one end of a cable so as to rotate about a specific diameter of the sphere. The assembly also includes a static latch adapted for connection with the object to be moved. This latch includes an internal latching cavity for containing the sphere in a latching condition and a series of surfaces and openings which cooperate with the sphere in order to move the sphere into and out of the latching cavity and thereby connect the cable to and disconnect it from the latch without using any moving parts on the latch itself.

  1. Inattentional blindness is influenced by exposure time not motion speed.

    PubMed

    Kreitz, Carina; Furley, Philip; Memmert, Daniel

    2016-01-01

    Inattentional blindness is a striking phenomenon in which a salient object within the visual field goes unnoticed because it is unexpected, and attention is focused elsewhere. Several attributes of the unexpected object, such as size and animacy, have been shown to influence the probability of inattentional blindness. At present it is unclear whether or how the speed of a moving unexpected object influences inattentional blindness. We demonstrated that inattentional blindness rates are considerably lower if the unexpected object moves more slowly, suggesting that it is the mere exposure time of the object rather than a higher saliency potentially induced by higher speed that determines the likelihood of its detection. Alternative explanations could be ruled out: The effect is not based on a pop-out effect arising from different motion speeds in relation to the primary-task stimuli (Experiment 2), nor is it based on a higher saliency of slow-moving unexpected objects (Experiment 3).

  2. Assisting persons with multiple disabilities to move through simple occupational activities with automatic prompting.

    PubMed

    Lancioni, Giulio E; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff; Oliva, Doretta; Campodonico, Francesca; Groeneweg, Jop

    2008-01-01

    The present study assessed the possibility of assisting four persons with multiple disabilities to move through and perform simple occupational activities arranged within a room with the help of automatic prompting. The study involved two multiple probe designs across participants. The first multiple probe concerned the two participants with blindness or minimal vision and deafness, who received air blowing as a prompt. The second multiple probe concerned the two participants with blindness and typical hearing who received a voice calling as a prompt. Initially, all participants had baseline sessions. Then intervention started with the first participant of each dyad. When their performance was consolidated, new baseline and intervention occurred with the second participant of each dyad. Finally, all four participants were exposed to a second intervention phase, in which the number of activities per session doubled (i.e., from 8 to 16). Data showed that all four participants: (a) learned to move across and perform the activities available with the help of automatic prompting and (b) remained highly successful through the second intervention phase when the sessions were extended. Implications of the findings are discussed.

  3. Upside-down: Perceived space affects object-based attention.

    PubMed

    Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus

    2017-07-01

    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We built a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system, which has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and the results are accurate. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages of the imaging system. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras are then used to capture images of the moving objects, and the results are matched and 3-D reconstructed. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, so this work is of significance for measuring the 3-D morphology of moving objects.

  5. Constraints in distortion-invariant target recognition system simulation

    NASA Astrophysics Data System (ADS)

    Iftekharuddin, Khan M.; Razzaque, Md A.

    2000-11-01

    Automatic target recognition (ATR) is a mature but active research area. In an earlier paper, we proposed a novel ATR approach for recognition of targets varying in fine details, rotation, and translation using a Learning Vector Quantization (LVQ) Neural Network (NN). The proposed approach performed segmentation of multiple objects and identification of the objects using the LVQ NN. In the current paper, we extend the previous approach to recognition of targets varying in rotation, translation, scale, and combinations of all three distortions. We obtain analytical results for the system-level design to show that the approach performs well under some constraints. The first constraint determines the size of the input images and input filters. The second constraint sets the limits on the amount of rotation, translation, and scale of the input objects. We present simulation verification of the constraints using DARPA's Moving and Stationary Target Recognition (MSTAR) images with different depression and pose angles. The simulation results using MSTAR images verify the analytical constraints of the system-level design.

  6. The Movement of Teachers within Ontario School Boards

    ERIC Educational Resources Information Center

    Sibbald, Timothy

    2017-01-01

    This study examines teacher movement between secondary schools within the same school board using qualitative multiple case study. Interviews were conducted with each participant before moving, shortly after moving, and a period of time after moving schools. The coding of the interviews found evidence corroborating known themes of leadership,…

  7. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and with a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in stare mode, contained the signatures of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
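    As a hedged sketch of the core idea (not the authors' implementation; thresholds and structure are illustrative), RANSAC for constant-velocity targets can be run directly on the (t, x, y) coordinates of the pre-processed candidate blobs:

      import numpy as np

      def ransac_track(detections, n_iter=2000, tol=2.0, min_inliers=5, seed=None):
          """detections: array-like of (t, x, y) blob centroids from all frames.
          Samples pairs of detections, fits x = x0 + vx*t, y = y0 + vy*t, and keeps
          the model with the most inliers (residual below tol pixels)."""
          rng = np.random.default_rng(seed)
          det = np.asarray(detections, dtype=float)
          best_model, best_inliers = None, None
          for _ in range(n_iter):
              i, j = rng.choice(len(det), size=2, replace=False)
              (t1, x1, y1), (t2, x2, y2) = det[i], det[j]
              if t1 == t2:
                  continue                                  # cannot estimate a velocity
              vx, vy = (x2 - x1) / (t2 - t1), (y2 - y1) / (t2 - t1)
              resid = np.hypot(det[:, 1] - (x1 + vx * (det[:, 0] - t1)),
                               det[:, 2] - (y1 + vy * (det[:, 0] - t1)))
              inliers = resid < tol
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_model, best_inliers = (x1 - vx * t1, vx, y1 - vy * t1, vy), inliers
          if best_inliers is None or best_inliers.sum() < min_inliers:
              return None                                   # no credible moving target
          return best_model, best_inliers

    Candidates that conform to the known, predictable stellar motion would be removed before this step, as described above, so that any surviving consensus set corresponds to a drifter.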

  8. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments.

    PubMed

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-09-18

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values match given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree), for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.
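    The GQR-tree itself is not described in enough detail here to reproduce; for orientation, a CM range monitoring query can be stated as the following naive, index-free filter (all field names hypothetical), which is the computation the GQR-tree and the objects' own processing are designed to avoid re-running from scratch at every location update:

      from dataclasses import dataclass, field

      @dataclass
      class MovingObject:
          oid: int
          x: float
          y: float
          attrs: dict = field(default_factory=dict)     # non-spatial attributes

      @dataclass
      class CMRangeQuery:
          xmin: float; ymin: float; xmax: float; ymax: float
          required: dict = field(default_factory=dict)  # required non-spatial values

      def evaluate(query, objects):
          """Return ids of objects whose attributes match the query values and
          whose current position lies inside the spatial query range."""
          return [o.oid for o in objects
                  if all(o.attrs.get(k) == v for k, v in query.required.items())
                  and query.xmin <= o.x <= query.xmax
                  and query.ymin <= o.y <= query.ymax]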

  9. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a feature-based approach that does not detect moving objects individually. The proposed algorithm identifies dominant flow without individual object tracking using a latent Dirichlet allocation model in crowded environments. It can also automatically detect and localize an abnormally moving object in real-life video. Performance tests are conducted on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are detected, regardless of direction.

  10. Water transport to circumprimary habitable zones from icy planetesimal disks in binary star systems

    NASA Astrophysics Data System (ADS)

    Bancelin, D.; Pilat-Lohinger, E.; Maindl, T. I.; Bazsó, Á.

    2017-03-01

    So far, more than 130 extrasolar planets have been found in multiple stellar systems. Dynamical simulations show that the outcome of the planetary formation process can lead to different planetary architectures (i.e. location, size, mass, and water content) when the star system is single or double. In the late phase of planetary formation, when embryo-sized objects dominate the inner region of the system, asteroids are also present and can provide additional material for objects inside the habitable zone (HZ). In this study, we compare several binary star systems and aim to show how efficient they are at moving icy asteroids from beyond the snow line into orbits crossing the HZ. We also analyze the influence of secular and mean motion resonances on the water transport towards the HZ. Our study shows that small bodies also participate in bearing a non-negligible amount of water to the HZ. The proximity of a companion moving on an eccentric orbit increases the flux of asteroids to the HZ, which could result in more efficient water transport on a short timescale, causing a heavy bombardment. In contrast to asteroids moving under the gravitational perturbations of one G-type star and a gas giant, we show that the presence of a companion star not only favors a faster depletion of our disk of planetesimals, but can also bring 4-5 times more water into the whole HZ. However, due to the secular resonance located either inside the HZ or inside the asteroid belt, impacts between icy planetesimals from the disk and big objects in the HZ can occur at high impact speed. Realistic collision modeling using a GPU 3D-SPH code therefore shows that, in reality, the water content of the projectile is greatly reduced, and so is the water transported to planets or embryos initially inside the HZ.

  11. A design of optical measurement laboratory for space-based illumination condition emulation

    NASA Astrophysics Data System (ADS)

    Xu, Rong; Zhao, Fei; Yang, Xin

    2015-10-01

    Space Object Identification (SOI) and related technology have attracted wide attention from spacefaring nations due to the increasingly severe space environment. Multiple ground-based assets have been employed to acquire statistical survey data, detect faint debris, and acquire photometric and spectroscopic data. Great efforts have been made to characterize different space objects using the statistical data acquired by telescopes. Furthermore, detailed laboratory data are needed to optimize the characterization of orbital debris and satellites via material composition and potential rotation axes, which calls for a high-precision and flexible optical measurement system. A typical method of taking optical measurements of a space object (or model) is to move the light source and sensors through every possible orientation around it while keeping the target still. However, moving equipment to accurate orientations in the air is difficult, especially for large precise instruments sensitive to vibrations. Here, a rotation structure of "3+1" axes, with a three-axis turntable manipulating the attitude of the target and the sensor revolving around a single axis, is utilized to emulate every possible illumination condition in space, which also avoids the inconvenience of moving large apparatus. First, the source-target-sensor orientation of a real satellite was analyzed, with vectors and coordinate systems built to illustrate their spatial relationship. By bending the Reference Coordinate Frame to the Phase Angle plane, the sensor only needs to revolve around a single axis while the other three degrees of freedom (DOF) are associated with the Euler angles of the satellite. Then, according to practical engineering requirements, an integrated rotation system with a four-axis structure is put forward. Schematic diagrams of the three-axis turntable and other equipment give an overview of the future laboratory layout. Finally, proposals on environment arrangements, light source precautions and sensor selection are provided. Compared to current methods, this design shows better performance in terms of device simplification, automatic control and high-precision measurement.

  12. An International Perspective on Pharmacy Student Selection Policies and Processes

    PubMed Central

    Kennedy, Julia; Jensen, Maree; Sheridan, Janie

    2015-01-01

    Objective. To reflect on selection policies and procedures for programs at pharmacy schools that are members of an international alliance of universities (Universitas 21). Methods. A questionnaire on selection policies and procedures was distributed to admissions directors at participating schools. Results. Completed questionnaires were received from 7 schools in 6 countries. Although marked differences were noted in the programs in different countries, there were commonalities in the selection processes. There was an emphasis on previous academic performance, especially in science subjects. With one exception, all schools had some form of interview, with several having moved to multiple mini-interviews in recent years. Conclusion. The majority of pharmacy schools in this survey relied on traditional selection processes. While there was increasing use of multiple mini-interviews, the authors suggest that additional new approaches may be required in light of the changing nature of the profession. PMID:26689381

  13. Drogue detection for vision-based autonomous aerial refueling via low rank and sparse decomposition with multiple features

    NASA Astrophysics Data System (ADS)

    Gao, Shibo; Cheng, Yongmei; Song, Chunhua

    2013-09-01

    Vision-based probe-and-drogue autonomous aerial refueling is a demanding task in modern aviation for both manned and unmanned aircraft. A key issue is to accurately determine the relative orientation and position of the drogue and the probe for the relative navigation system during the approach phase, which requires locating the drogue precisely. Drogue detection is a challenging task due to the disorderly motion of the drogue caused by both the tanker wake vortex and atmospheric turbulence. In this paper, the problem of drogue detection is considered as a problem of moving object detection. A drogue detection algorithm based on low-rank and sparse decomposition with local multiple features is proposed. The global and local information of the drogue is introduced into the detection model in a unified way. The experimental results on real autonomous aerial refueling videos show that the proposed drogue detection algorithm is effective.
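    The paper's multi-feature model is not reproduced in the abstract; as a generic building block, low-rank plus sparse decomposition (robust PCA solved with a simple inexact augmented-Lagrangian iteration) can be sketched as below, applied to a matrix D whose columns are vectorized frames: the low-rank part L captures the slowly varying background and the sparse part S the moving foreground, here the drogue region.

      import numpy as np

      def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
          """Decompose D ~ L + S with L low-rank and S sparse (principal component
          pursuit, inexact ALM). Simplified sketch; not tuned for large videos."""
          m, n = D.shape
          lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
          mu = mu if mu is not None else 0.25 * m * n / (np.abs(D).sum() + 1e-12)
          S = np.zeros_like(D)
          Y = np.zeros_like(D)
          for _ in range(max_iter):
              # Singular-value thresholding updates the low-rank background
              U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
              L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
              # Soft thresholding updates the sparse foreground
              R = D - L + Y / mu
              S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
              Y = Y + mu * (D - L - S)
              if np.linalg.norm(D - L - S) <= tol * max(np.linalg.norm(D), 1e-12):
                  break
          return L, S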

  14. Capacity for Visual Features in Mental Rotation.

    PubMed

    Xu, Yangqing; Franconeri, Steven L

    2015-08-01

    Although mental rotation is a core component of scientific reasoning, little is known about its underlying mechanisms. For instance, how much visual information can someone rotate at once? We asked participants to rotate a simple multipart shape, requiring them to maintain attachments between features and moving parts. The capacity of this aspect of mental rotation was strikingly low: Only one feature could remain attached to one part. Behavioral and eye-tracking data showed that this single feature remained "glued" via a singular focus of attention, typically on the object's top. We argue that the architecture of the human visual system is not suited for keeping multiple features attached to multiple parts during mental rotation. Such measurement of capacity limits may prove to be a critical step in dissecting the suite of visuospatial tools involved in mental rotation, leading to insights for improvement of pedagogy in science-education contexts. © The Author(s) 2015.

  15. Multi-layer Cortical Ca2+ Imaging in Freely Moving Mice with Prism Probes and Miniaturized Fluorescence Microscopy

    PubMed Central

    Gulati, Srishti; Cao, Vania Y.; Otte, Stephani

    2017-01-01

    In vivo circuit and cellular level functional imaging is a critical tool for understanding the brain in action. High resolution imaging of mouse cortical neurons with two-photon microscopy has provided unique insights into cortical structure, function and plasticity. However, these studies are limited to head fixed animals, greatly reducing the behavioral complexity available for study. In this paper, we describe a procedure for performing chronic fluorescence microscopy with cellular-resolution across multiple cortical layers in freely behaving mice. We used an integrated miniaturized fluorescence microscope paired with an implanted prism probe to simultaneously visualize and record the calcium dynamics of hundreds of neurons across multiple layers of the somatosensory cortex as the mouse engaged in a novel object exploration task, over several days. This technique can be adapted to other brain regions in different animal species for other behavioral paradigms. PMID:28654056

  16. An International Perspective on Pharmacy Student Selection Policies and Processes.

    PubMed

    Shaw, John; Kennedy, Julia; Jensen, Maree; Sheridan, Janie

    2015-10-25

    Objective. To reflect on selection policies and procedures for programs at pharmacy schools that are members of an international alliance of universities (Universitas 21). Methods. A questionnaire on selection policies and procedures was distributed to admissions directors at participating schools. Results. Completed questionnaires were received from 7 schools in 6 countries. Although marked differences were noted in the programs in different countries, there were commonalities in the selection processes. There was an emphasis on previous academic performance, especially in science subjects. With one exception, all schools had some form of interview, with several having moved to multiple mini-interviews in recent years. Conclusion. The majority of pharmacy schools in this survey relied on traditional selection processes. While there was increasing use of multiple mini-interviews, the authors suggest that additional new approaches may be required in light of the changing nature of the profession.

  17. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
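    The exact network is not specified in the abstract; purely as an illustration of the feedforward-plus-LSTM pattern described, a toy PyTorch module (layer sizes are arbitrary placeholders, not the network used in the paper) could look like this:

      import torch
      import torch.nn as nn

      class ConvLSTMScorer(nn.Module):
          """Per-frame CNN features aggregated by an LSTM over a short clip,
          producing one moving-vehicle score per clip (hypothetical sizes)."""
          def __init__(self, hidden=128):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(4),                 # -> (32, 4, 4) per frame
              )
              self.lstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden, batch_first=True)
              self.head = nn.Linear(hidden, 1)

          def forward(self, clip):                         # clip: (B, T, 3, H, W)
              b, t = clip.shape[:2]
              f = self.features(clip.flatten(0, 1))        # (B*T, 32, 4, 4)
              f = f.flatten(1).view(b, t, -1)              # (B, T, 512)
              out, _ = self.lstm(f)                        # temporal aggregation
              return torch.sigmoid(self.head(out[:, -1]))  # score from the last step

      # Example: score two 8-frame clips of 64x64 RGB crops.
      scores = ConvLSTMScorer()(torch.rand(2, 8, 3, 64, 64))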

  18. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  19. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    ERIC Educational Resources Information Center

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  20. Normalization of neuronal responses in cortical area MT across signal strengths and motion directions

    PubMed Central

    Xiao, Jianbo; Niu, Yu-Qiong; Wiesner, Steven

    2014-01-01

    Multiple visual stimuli are common in natural scenes, yet it remains unclear how multiple stimuli interact to influence neuronal responses. We investigated this question by manipulating relative signal strengths of two stimuli moving simultaneously within the receptive fields (RFs) of neurons in the extrastriate middle temporal (MT) cortex. Visual stimuli were overlapping random-dot patterns moving in two directions separated by 90°. We first varied the motion coherence of each random-dot pattern and characterized, across the direction tuning curve, the relationship between neuronal responses elicited by bidirectional stimuli and by the constituent motion components. The tuning curve for bidirectional stimuli showed response normalization and can be accounted for by a weighted sum of the responses to the motion components. Allowing nonlinear, multiplicative interaction between the two component responses significantly improved the data fit for some neurons, and the interaction mainly had a suppressive effect on the neuronal response. The weighting of the component responses was not fixed but dependent on relative signal strengths. When two stimulus components moved at different coherence levels, the response weight for the higher-coherence component was significantly greater than that for the lower-coherence component. We also varied relative luminance levels of two coherently moving stimuli and found that MT response weight for the higher-luminance component was also greater. These results suggest that competition between multiple stimuli within a neuron's RF depends on relative signal strengths of the stimuli and that multiplicative nonlinearity may play an important role in shaping the response tuning for multiple stimuli. PMID:24899674

  1. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

    Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.

  2. "Up" or "down" that makes the difference. How giant honeybees (Apis dorsata) see the world.

    PubMed

    Koeniger, Nikolaus; Kurze, Christoph; Phiancharoen, Mananya; Koeniger, Gudrun

    2017-01-01

    A. dorsata builds its large exposed comb high in trees or under ledges of high rocks. The "open" nest of A. dorsata, shielded (only!) by multiple layers of bees, is highly vulnerable to any kind of direct contact or close-range attacks from predators. Therefore, guard bees of the outer layer of A. dorsata's nest monitor the vicinity for possible hazards, and effective risk assessment is required. Guard bees, however, are frequently exposed to objects like leaves, twigs and other tree litter passing the nest from above and falling to the ground. Thus, downward movement of objects past the nest might be used by A. dorsata to classify these visual stimuli near the nest as "harmless". To test the effect of movement direction on defensive responses, we used circular black discs that were moved down or up in front of colonies and recorded the number of guard bees flying towards the disc. The size of the disc (diameter from 8 cm to 50 cm) had an effect on the number of guard bees responding: the bigger the disc, the more bees started from the nest. The direction of a disc's movement had a dramatic effect on the attraction. We found a significantly higher number of attacks when discs were moved upwards compared to downward movements (GLMM (estimate ± s.e.) 1.872 ± 0.149, P < 0.001). Our results demonstrate for the first time that the vertical direction of movement of an object can be important for releasing defensive behaviour. Upward movement of dark objects near the colony might be an innate releaser of attack flights. At the same time, downward movement is perceived as a "harmless" stimulus.

  3. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    PubMed

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages. Copyright © 2014 Cognitive Science Society, Inc.

  4. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units

    PubMed Central

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-01-01

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684

  5. A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units.

    PubMed

    Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik

    2017-02-12

    This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.

  6. Beyond Group: Multiple Person Tracking via Minimal Topology-Energy-Variation.

    PubMed

    Gao, Shan; Ye, Qixiang; Xing, Junliang; Kuijper, Arjan; Han, Zhenjun; Jiao, Jianbin; Ji, Xiangyang

    2017-12-01

    Tracking multiple persons is a challenging task when persons move in groups and occlude each other. Existing group-based methods have extensively investigated how to make group division more accurately in a tracking-by-detection framework; however, few of them quantify the group dynamics from the perspective of targets' spatial topology or consider the group in a dynamic view. Inspired by the sociological properties of pedestrians, we propose a novel socio-topology model with a topology-energy function to factor the group dynamics of moving persons and groups. In this model, minimizing the topology-energy-variance in a two-level energy form is expected to produce smooth topology transitions, stable group tracking, and accurate target association. To search for the strong minimum in energy variation, we design the discrete group-tracklet jump moves embedded in the gradient descent method, which ensures that the moves reduce the energy variation of group and trajectory alternately in the varying topology dimension. Experimental results on both RGB and RGB-D data sets show the superiority of our proposed model for multiple person tracking in crowd scenes.

  7. Interaction of compass sensing and object-motion detection in the locust central complex.

    PubMed

    Bockhorst, Tobias; Homberg, Uwe

    2017-07-01

    Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal's steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers. Copyright © 2017 the American Physiological Society.

  8. Apparent motion perception in lower limb amputees with phantom sensations: "obstacle shunning" and "obstacle tolerance".

    PubMed

    Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J

    2018-03-21

    Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOA (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained to the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.

    PubMed

    van Buren, Benjamin; Gao, Tao; Scholl, Brian J

    2017-10-01

    One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question-for the first time, to our knowledge-in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.

  10. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
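    For contrast with the inverted use described above, the conventional stacking step (align frames, then median-stack so that moving objects are suppressed and a clean static background remains) can be sketched as follows; the paper instead registers on the small moving objects so that the surrounding background is the part that gets blurred. Library calls are from SciPy and scikit-image; the translation-only alignment is a simplifying assumption.

      import numpy as np
      from scipy.ndimage import shift as nd_shift
      from skimage.registration import phase_cross_correlation

      def stack_background(frames):
          """Align every frame to the first one by pure translation and median-stack.
          Returns the background estimate and per-frame difference images in which
          moving objects stand out."""
          ref = frames[0].astype(float)
          aligned = [ref]
          for f in frames[1:]:
              shift_vec, _, _ = phase_cross_correlation(ref, f.astype(float))
              aligned.append(nd_shift(f.astype(float), shift_vec))
          background = np.median(np.stack(aligned), axis=0)
          return background, [np.abs(a - background) for a in aligned]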

  11. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms for video surveillance in VIoT, using quantitative measures and considering the salient features of the datasets. The thresholding algorithms (Otsu, Kapur, and Rosin) and the classification methods (k-means and EM, Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects; however, its performance degraded for small objects producing only minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small objects producing slow change, no shadowing, and scarce illumination changes.
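
    As a concrete example of the thresholding family compared here, Otsu's method selects the gray level that maximizes the between-class variance of a difference image's histogram. The sketch below is a self-contained illustration of that rule on synthetic data, not the MATLAB simulation setup used in the study.

      import numpy as np

      def otsu_threshold(image_8bit):
          """Otsu threshold of an 8-bit image (e.g., an absolute frame difference)."""
          hist, _ = np.histogram(image_8bit.ravel(), bins=256, range=(0, 256))
          prob = hist / hist.sum()
          best_t, best_var = 0, 0.0
          for t in range(1, 256):
              w0, w1 = prob[:t].sum(), prob[t:].sum()
              if w0 == 0.0 or w1 == 0.0:
                  continue
              mu0 = (np.arange(t) * prob[:t]).sum() / w0
              mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
              between_var = w0 * w1 * (mu0 - mu1) ** 2
              if between_var > best_var:
                  best_var, best_t = between_var, t
          return best_t

      # Change mask between two frames of the same scene (hypothetical data).
      frame_a = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
      frame_b = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
      diff = np.abs(frame_a.astype(int) - frame_b.astype(int)).astype(np.uint8)
      change_mask = diff >= otsu_threshold(diff)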

  12. Camouflage, detection and identification of moving targets

    PubMed Central

    Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.

    2013-01-01

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439

  13. Camouflage, detection and identification of moving targets.

    PubMed

    Hall, Joanna R; Cuthill, Innes C; Baddeley, Roland; Shohet, Adam J; Scott-Samuel, Nicholas E

    2013-05-07

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation-detection, identification and capture-in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely 'break' camouflage.

  14. Linear encoding device

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1993-01-01

    A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position-sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate the data it provides on the position of the spot and to compute the linear displacement of the moving object from those data.
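
    The displacement computation assigned to the microprocessor can be sketched as a sub-pixel centroid estimate of the light spot on the detector array, scaled by the array's pixel pitch. The pitch and readouts below are illustrative values, not figures taken from the patent.

      import numpy as np

      PIXEL_PITCH_UM = 14.0  # assumed pixel pitch of the photodetector array, micrometres

      def spot_position(intensities):
          """Sub-pixel centroid of the light spot along a 1-D photodetector array."""
          idx = np.arange(len(intensities))
          weights = intensities - intensities.min()
          return (idx * weights).sum() / weights.sum()

      def displacement_um(reading_now, reading_ref):
          """Linear displacement of the moving object between two array readouts."""
          return (spot_position(reading_now) - spot_position(reading_ref)) * PIXEL_PITCH_UM

      # Hypothetical readouts in which the spot has drifted by about 3.2 pixels.
      cells = np.arange(128)
      ref = np.exp(-0.5 * ((cells - 40.0) / 2.0) ** 2)
      now = np.exp(-0.5 * ((cells - 43.2) / 2.0) ** 2)
      print(round(displacement_um(now, ref), 1))  # ~44.8 micrometres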

  15. Try This: Moving Toys

    ERIC Educational Resources Information Center

    Preston, Christine

    2018-01-01

    If you think physics is only for older children, think again. Much of the playtime of young children is filled with exploring--and wondering about and informally investigating--the way objects, especially toys, move. How forces affect objects, including: change in position, motion, and shape are fundamental to the big ideas in physics. This…

  16. Let It Roll

    ERIC Educational Resources Information Center

    Trundle, Kathy Cabe; Smith, Mandy McCormick

    2011-01-01

    Some of children's earliest explorations focus on movement of their own bodies. Quickly, children learn to further explore movement by using objects like a ball or car. They recognize that a ball moves differently than a pushed block. As they grow, children enjoy their experiences with motion and movement, including making objects move, changing…

  17. An elementary research on wireless transmission of holographic 3D moving pictures

    NASA Astrophysics Data System (ADS)

    Takano, Kunihiko; Sato, Koki; Endo, Takaya; Asano, Hiroaki; Fukuzawa, Atsuo; Asai, Kikuo

    2009-05-01

    In this paper, a process for transmitting a sequence of holograms describing 3D moving objects over a wireless communication network is presented. The sequence of holograms is transformed into a bit stream, which is then transmitted over wireless LAN and Bluetooth. It is shown that, by applying this technique, holographic data of a 3D moving object can be transmitted with high quality and a relatively good reconstruction of the holographic images can be obtained.

  18. Multiple operating system rotation environment moving target defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Nathaniel; Thompson, Michael

    Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.

  19. Color quality improvement of reconstructed images in color digital holography using speckle method and spectral estimation

    NASA Astrophysics Data System (ADS)

    Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa

    2018-05-01

    In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and the spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and staining cells of onion.
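
    The Wiener estimation step can be sketched as a linear estimator built from an a priori correlation matrix of training spectra and the spectral responsivities of the recording channels. The matrices below are illustrative stand-ins, not the calibrated quantities used in the study; only the algebraic form of the estimator is shown.

      import numpy as np

      def wiener_estimator(R_spectra, S):
          """Wiener matrix W such that spectrum_hat = W @ channel_response.

          R_spectra : (L, L) autocorrelation matrix of training spectra.
          S         : (C, L) system matrix (responsivity of each of C channels
                      at L wavelengths); both assumed known from calibration.
          """
          return R_spectra @ S.T @ np.linalg.inv(S @ R_spectra @ S.T)

      # Illustrative sizes: 3 recording wavelengths, spectra sampled at 31 points.
      L, C = 31, 3
      training = np.random.rand(200, L)           # hypothetical training spectra
      R = training.T @ training / len(training)   # autocorrelation estimate
      S = np.random.rand(C, L)                    # hypothetical channel responsivities
      W = wiener_estimator(R, S)
      transmittance_hat = W @ np.array([0.4, 0.3, 0.2])  # estimate for one pixel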

  20. Flexible Skins Containing Integrated Sensors and Circuitry

    NASA Technical Reports Server (NTRS)

    Liu, Chang

    2007-01-01

    Artificial sensor skins modeled partly in imitation of biological sensor skins are undergoing development. These sensor skins comprise flexible polymer substrates that contain and/or support dense one- and two-dimensional arrays of microscopic sensors and associated microelectronic circuits. They afford multiple tactile sensing modalities for measuring physical phenomena that can include contact forces; hardnesses, temperatures, and thermal conductivities of objects with which they are in contact; and pressures, shear stresses, and flow velocities in fluids. The sensor skins are mechanically robust, and, because of their flexibility, they can be readily attached to curved and possibly moving and flexing surfaces of robots, wind-tunnel models, and other objects that one might seek to equip for tactile sensing. Because of the diversity of actual and potential sensor-skin design criteria and designs and the complexity of the fabrication processes needed to realize the designs, it is not possible to describe the sensor-skin concept in detail within this article.

  1. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field-position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  2. Optic probe for multiple angle image capture and optional stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2016-11-29

    A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.

  3. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

    The focal plane assembly of most pushbroom scanner satellites is built in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this configuration allows the detection of objects that move during this small time span. In this paper we present a method for the automatic detection and extraction of moving objects, mainly traffic, from single very high resolution optical satellite images of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades, and the new SkyBox satellites. Different sensors require different approaches for detecting moving objects: because the objects are mapped to different positions only in different spectral bands, the change of spectral properties must also be taken into account. In the case where the main offset in the focal plane lies between the multispectral and panchromatic CCD lines, as for Pléiades, an approach using weighted integration to obtain nearly identical images is investigated; other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality (how many moving objects are detected and how many are missed) and accuracy (how accurate the derived speed and size of the objects are). Finally, the results are discussed and an outlook on possible improvements towards operational processing is given.
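
    The geometry behind the speed estimate reduces to simple arithmetic: an object displaced by d pixels between two bands acquired Δt seconds apart moves at v = d · GSD / Δt on the ground. The numbers below are illustrative only; the actual band-to-band time lags and ground sampling distances are sensor-specific.

      # Ground speed of an object from its displacement between two spectral bands.
      GSD_M = 0.5        # assumed ground sampling distance, metres per pixel
      BAND_LAG_S = 0.22  # assumed time lag between the two CCD lines, seconds

      def ground_speed_kmh(pixel_shift):
          """Convert a measured band-to-band pixel displacement into km/h."""
          speed_ms = pixel_shift * GSD_M / BAND_LAG_S
          return speed_ms * 3.6

      # An object shifted by 7 pixels between the bands moves at roughly 57 km/h.
      print(round(ground_speed_kmh(7), 1))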

  4. Sit less and move more: perspectives of adults with multiple sclerosis.

    PubMed

    Aminian, Saeideh; Ezeugwu, Victor E; Motl, Robert W; Manns, Patricia J

    2017-12-20

    Multiple sclerosis is a chronic neurological disease with the highest prevalence in Canada. Replacing sedentary behavior with light activities may be a feasible approach to manage multiple sclerosis symptoms. This study explored the perspectives of adults with multiple sclerosis about sedentary behavior, physical activity and ways to change behavior. Fifteen adults with multiple sclerosis (age 43 ± 13 years; mean ± standard deviation), recruited through the multiple sclerosis Clinic at the University of Alberta, Edmonton, Canada, participated in semi-structured interviews. Interview audios were transcribed verbatim and coded. NVivo software was used to facilitate the inductive process of thematic analysis. Balancing competing priorities between sitting and moving was the primary theme. Participants were aware of the benefits of physical activity to their overall health, and in the management of fatigue and muscle stiffness. Due to fatigue, they often chose sitting to get their energy back. Further, some barriers included perceived fear of losing balance or embarrassment while walking. Activity monitoring, accountability, educational and individualized programs were suggested strategies to motivate more movement. Adults with multiple sclerosis were open to the idea of replacing sitting with light activities. Motivational and educational programs are required to help them to change sedentary behavior to moving more. IMPLICATIONS FOR REHABILITATION One of the most challenging and common difficulties of multiple sclerosis is walking impairment that worsens because of multiple sclerosis progression, and is a common goal in the rehabilitation of people with multiple sclerosis. The deterioration in walking abilities is related to lower levels of physical activity and more sedentary behavior, such that adults with multiple sclerosis spend 8 to 10.5 h per day sitting. Replacing prolonged sedentary behavior with light physical activities, and incorporating education, encouragement, and self-monitoring strategies are feasible approaches to manage the symptoms of multiple sclerosis.

  5. An analysis of shoot and scoot tactics

    DTIC Science & Technology

    2017-03-01

    Firing multiple shots from the same location is preferable to moving immediately after firing one shot. Moving frequently reduces risk to the artillery, but limits the artillery's ability to inflict damage.

  6. Flash-lag effect: complicating motion extrapolation of the moving reference-stimulus paradoxically augments the effect.

    PubMed

    Bachmann, Talis; Murd, Carolina; Põder, Endel

    2012-09-01

    One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered as a particularly suggestive example of this capacity (Nijhawan in nature 370:256-257, 1994, Behav brain sci 31:179-239, 2008). Thus, because of involvement of the mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of the simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus by approaching it before the flash does not diminish the flash-lag effect, but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.

  7. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
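
    For readers who want to reproduce the default estimator, the 5-year moving average is simply the unweighted mean of the most recent five annual panel estimates. A minimal sketch follows; the alternative estimators discussed in the paper are not shown, and the input values are hypothetical.

      def moving_average(annual_estimates, window=5):
          """Trailing moving average of annual inventory estimates.

          Returns one smoothed value per year once `window` years are available.
          """
          out = []
          for i in range(window - 1, len(annual_estimates)):
              out.append(sum(annual_estimates[i - window + 1 : i + 1]) / window)
          return out

      # Hypothetical annual volume estimates for one state.
      print(moving_average([101.0, 103.5, 99.8, 104.2, 106.0, 108.3]))
      # -> [102.9, 104.36]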

  8. Nonrigid iterative closest points for registration of 3D biomedical surfaces

    NASA Astrophysics Data System (ADS)

    Liang, Luming; Wei, Mingqiang; Szymczak, Andrzej; Petrella, Anthony; Xie, Haoran; Qin, Jing; Wang, Jun; Wang, Fu Lee

    2018-01-01

    Advanced 3D optical and laser scanners bring new challenges to computer graphics. We present a novel nonrigid surface registration algorithm based on Iterative Closest Point (ICP) method with multiple correspondences. Our method, called the Nonrigid Iterative Closest Points (NICPs), can be applied to surfaces of arbitrary topology. It does not impose any restrictions on the deformation, e.g. rigidity or articulation. Finally, it does not require parametrization of input meshes. Our method is based on an objective function that combines distance and regularization terms. Unlike the standard ICP, the distance term is determined based on multiple two-way correspondences rather than single one-way correspondences between surfaces. A Laplacian-based regularization term is proposed to take full advantage of multiple two-way correspondences. This term regularizes the surface movement by enforcing vertices to move coherently with their 1-ring neighbors. The proposed method achieves good performances when no global pose differences or significant amount of bending exists in the models, for example, families of similar shapes, like human femur and vertebrae models.
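
    The objective function described above can be written schematically as a multi-correspondence distance term plus a Laplacian regularization term that keeps each vertex displacement close to the mean displacement of its 1-ring neighbors. The notation below is a generic sketch of such an energy, not the authors' exact formulation:

      E(V) = \sum_{(i,j) \in \mathcal{C}} w_{ij} \, \lVert v_i - c_j \rVert^2
             + \lambda \sum_i \Bigl\lVert \delta_i - \frac{1}{|N(i)|} \sum_{k \in N(i)} \delta_k \Bigr\rVert^2

    where V collects the deformed vertex positions v_i, C is the set of two-way correspondences with targets c_j, δ_i = v_i − v_i^0 is the displacement of vertex i from its rest position, N(i) is its 1-ring neighborhood, and λ weights the regularization.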

  9. Moving Data, Moving Students: Involving Students in Learning about Internet Data Traffic

    ERIC Educational Resources Information Center

    Reinicke, Bryan A.; Yaylacicegi, Ulku

    2010-01-01

    Undergraduate students often have difficulty understanding the way in which data moves across a TCP/IP network, such as the Internet. From the initial data request, to larger files being packetized and transmitted via multiple routes, the students can become lost in the details. These are important concepts for both introductory Management…

  10. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine an optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object; optimal path determination is one of the most popular problems in optimization. The paper describes a control design model for a flying vehicle searching for an object and focuses on the optimal path used for the search. An optimal control model is used so that the vehicle moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design, and here it is chosen so that the air vehicle reaches the object as quickly as possible. The flying vehicle's axis reference uses the N-E-D (North-East-Down) coordinate system. The main results are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. It is also shown that the cost functional used is convex; this convexity guarantees the existence of an optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
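
    In generic form (not necessarily the exact functional used in the paper), a minimum-time search problem of this kind minimizes a cost functional subject to the vehicle dynamics and applies the Pontryagin Minimum Principle through the Hamiltonian:

      J(u) = \int_0^{T} L(x(t), u(t)) \, dt, \qquad \dot{x} = f(x, u),

      H(x, p, u) = L(x, u) + p^{\top} f(x, u), \qquad
      u^{*}(t) = \arg\min_{u} H(x^{*}(t), p^{*}(t), u), \qquad
      \dot{p} = -\frac{\partial H}{\partial x}.

    Convexity of L in the control u is one standard route to guaranteeing that a minimizing control exists, which matches the role the convexity argument plays in the abstract above.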

  11. Exploring Dance Movement Data Using Sequence Alignment Methods

    PubMed Central

    Chavoshi, Seyed Hossein; De Baets, Bernard; Neutens, Tijs; De Tré, Guy; Van de Weghe, Nico

    2015-01-01

    Despite the abundance of research on knowledge discovery from moving object databases, only a limited number of studies have examined the interaction between moving point objects in space over time. This paper describes a novel approach for measuring similarity in the interaction between moving objects. The proposed approach consists of three steps. First, we transform movement data into sequences of successive qualitative relations based on the Qualitative Trajectory Calculus (QTC). Second, sequence alignment methods are applied to measure the similarity between movement sequences. Finally, movement sequences are grouped based on similarity by means of an agglomerative hierarchical clustering method. The applicability of this approach is tested using movement data from samba and tango dancers. PMID:26181435

  12. The Straight-Down Belief.

    ERIC Educational Resources Information Center

    McCloskey, Michael; And Others

    Through everyday experience people acquire knowledge about how moving objects behave. For example, if a rock is thrown up into the air, it will fall back to earth. Research has shown that people's ideas about why moving objects behave as they do are often quite inconsistent with the principles of classical mechanics. In fact, many people hold a…

  13. Origins of Newton's First Law

    ERIC Educational Resources Information Center

    Hecht, Eugene

    2015-01-01

    Anyone who has taught introductory physics should know that roughly a third of the students initially believe that any object at rest will remain at rest, whereas any moving body not propelled by applied forces will promptly come to rest. Likewise, about half of those uninitiated students believe that any object moving at a constant speed must be…

  14. I saw where you have been--The topography of human demonstration affects dogs' search patterns and perseverative errors.

    PubMed

    Péter, András; Topál, József; Miklósi, Ádám; Pongrácz, Péter

    2016-04-01

    Performance in object search tasks is not only influenced by the subjects' object permanence ability. For example, ostensive cues of the human manipulating the target markedly affect dogs' choices. However, the interference between the target's location and the spatial cues of the human hiding the object is still unknown. In a five-location visible displacement task, the experimental groups differed in the hiding route of the experimenter. In the 'direct' condition he moved straight towards the actual location, hid the object and returned to the dog. In the 'indirect' conditions, he additionally walked behind each screen before returning. The two 'indirect' conditions differed from each other in that the human either visited the previously baited locations before (proactive interference) or after (retroactive interference) hiding the object. In the 'indirect' groups, dogs' performance was significantly lower than in the 'direct' group, demonstrating that for dogs, in an ostensive context, spatial cues of the hider are as important as the observed location of the target. Based on their incorrect choices, dogs were most attracted to the previously baited locations that the human visited after hiding the object in the actual trial. This underlines the importance of retroactive interference in multiple choice tasks. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Infants' use of category knowledge and object attributes when segregating objects at 8.5 months of age.

    PubMed

    Needham, Amy; Cantlon, Jessica F; Ormsbee Holley, Susan M

    2006-12-01

    The current research investigates infants' perception of a novel object from a category that is familiar to young infants: key rings. We ask whether experiences obtained outside the lab would allow young infants to parse the visible portions of a partly occluded key ring display into one single unit, presumably as a result of having categorized it as a key ring. This categorization was marked by infants' perception of the keys and ring as a single unit that should move together, despite their attribute differences. We showed infants a novel key ring display in which the keys and ring moved together as one rigid unit (Move-together event) or the ring moved but the keys remained stationary throughout the event (Move-apart event). Our results showed that 8.5-month-old infants perceived the keys and ring as connected despite their attribute differences, and that their perception of object unity was eliminated as the distinctive attributes of the key ring were removed. When all of the distinctive attributes of the key ring were removed, the 8.5-month-old infants perceived the display as two separate units, which is how younger infants (7-month-old) perceived the key ring display with all its distinctive attributes unaltered. These results suggest that on the basis of extensive experience with an object category, infants come to identify novel members of that category and expect them to possess the attributes typical of that category.

  16. Velocity measurement by vibro-acoustic Doppler.

    PubMed

    Nabavizadeh, Alireza; Urban, Matthew W; Kinnick, Randall R; Fatemi, Mostafa

    2012-04-01

    We describe the theoretical principles of a new Doppler method, which uses the acoustic response of a moving object to a highly localized dynamic radiation force of the ultrasound field to calculate the velocity of the moving object according to Doppler frequency shift. This method, named vibro-acoustic Doppler (VAD), employs two ultrasound beams separated by a slight frequency difference, Δf, transmitting in an X-focal configuration. Both ultrasound beams experience a frequency shift because of the moving objects and their interaction at the joint focal zone produces an acoustic frequency shift occurring around the low-frequency (Δf) acoustic emission signal. The acoustic emission field resulting from the vibration of the moving object is detected and used to calculate its velocity. We report the formula that describes the relation between Doppler frequency shift of the emitted acoustic field and the velocity of the moving object. To verify the theory, we used a string phantom. We also tested our method by measuring fluid velocity in a tube. The results show that the error calculated for both string and fluid velocities is less than 9.1%. Our theory shows that in the worst case, the error is 0.54% for a 25° angle variation for the VAD method compared with an error of -82.6% for a 25° angle variation for a conventional continuous wave Doppler method. An advantage of this method is that, unlike conventional Doppler, it is not sensitive to angles between the ultrasound beams and direction of motion.
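
    For orientation, the conventional continuous-wave Doppler relation against which the VAD method is compared converts a measured frequency shift into velocity as sketched below. This is the standard relation with illustrative parameters, not the VAD-specific formula derived in the paper; the cos(θ) term is precisely the source of the angle sensitivity that VAD is designed to reduce.

      import math

      C_SOUND = 1540.0  # assumed speed of sound in a tissue-like medium, m/s
      F0_HZ = 3.0e6     # assumed ultrasound carrier frequency, Hz

      def cw_doppler_velocity(f_shift_hz, angle_deg):
          """Object speed from a continuous-wave Doppler shift: v = c*df / (2*f0*cos(theta))."""
          return C_SOUND * f_shift_hz / (2.0 * F0_HZ * math.cos(math.radians(angle_deg)))

      # A 400 Hz shift measured at a 10-degree beam-to-motion angle -> ~0.104 m/s.
      print(round(cw_doppler_velocity(400.0, 10.0), 4))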

  17. School-based systems change for obesity prevention in adolescents: outcomes of the Australian Capital Territory 'It's Your Move!'

    PubMed

    Malakellis, Mary; Hoare, Erin; Sanigorski, Andrew; Crooks, Nicholas; Allender, Steven; Nichols, Melanie; Swinburn, Boyd; Chikwendu, Cal; Kelly, Paul M; Petersen, Solveig; Millar, Lynne

    2017-10-01

    The Australian Capital Territory 'It's Your Move!' (ACT-IYM) was a three-year (2012-2014) systems intervention to prevent obesity among adolescents. The ACT-IYM project involved three intervention schools and three comparison schools and targeted secondary students aged 12-16 years. The intervention consisted of multiple initiatives at individual, community, and school policy level to support healthier nutrition and physical activity. Intervention school-specific objectives related to increasing active transport, increasing time spent physically active at school, and supporting mental wellbeing. Data were collected in 2012 and 2014 from 656 students. Anthropometric data were objectively measured and behavioural data self-reported. Proportions of overweight or obesity were similar over time within the intervention (24.5% baseline and 22.8% follow-up) and comparison groups (31.8% baseline and 30.6% follow-up). Within schools, two of the three intervention schools showed a significant decrease in the prevalence of overweight and obesity (p<0.05). There was some evidence of effectiveness of the systems approach to preventing obesity among adolescents. Implications for public health: The incorporation of systems thinking has been touted as the next stage in obesity prevention and public health more broadly. These findings demonstrate that the use of systems methods can be effective on a small scale. © 2017 The Authors.

  18. Subjective evaluation of HEVC in mobile devices

    NASA Astrophysics Data System (ADS)

    Garcia, Ray; Kalva, Hari

    2013-03-01

    Mobile compute environments provide a unique set of user needs and expectations that designers must consider. With increased multimedia use in mobile environments, video encoding methods within the smart phone market segment are key factors that contribute to positive user experience. Currently available display resolutions and expected cellular bandwidth are major factors the designer must consider when determining which encoding methods should be supported. The desired goal is to maximize the consumer experience, reduce cost, and reduce time to market. This paper presents a comparative evaluation of the quality of user experience when HEVC and AVC/H.264 video coding standards were used. The goal of the study was to evaluate any improvements in user experience when using HEVC. Subjective comparisons were made between H.264/AVC and HEVC encoding standards in accordance with the double-stimulus impairment scale (DSIS) as defined by ITU-R BT.500-13. Test environments are based on smart phone LCD resolutions and expected cellular bit rates, such as 200 kbps and 400 kbps. Subjective feedback shows both encoding methods are adequate at a 400 kbps constant bit rate. However, a noticeable consumer experience gap was observed for 200 kbps. Significantly lower H.264 subjective quality was observed with video sequences that have multiple objects moving and no single point of visual attraction. Video sequences with single points of visual attraction or few moving objects tended to have higher H.264 subjective quality.

  19. Optimization of reactive simulated moving bed systems with modulation of feed concentration for production of glycol ether ester.

    PubMed

    Agrawal, Gaurav; Oh, Jungmin; Sreedhar, Balamurali; Tie, Shan; Donaldson, Megan E; Frank, Timothy C; Schultz, Alfred K; Bommarius, Andreas S; Kawajiri, Yoshiaki

    2014-09-19

    In this article, we extend the simulated moving bed reactor (SMBR) mode of operation to the production of propylene glycol methyl ether acetate (DOWANOL™ PMA glycol ether) through the esterification of 1-methoxy-2-propanol (DOWANOL™ PM glycol ether) and acetic acid using AMBERLYST™ 15 as a catalyst and adsorbent. In addition, for the first time, we integrate the concept of modulation of the feed concentration (ModiCon) to SMBR operation. The performance of the conventional (constant feed) and ModiCon operation modes of SMBR are analyzed and compared. The SMBR processes are designed using a model based on a multi-objective optimization approach, where a transport dispersive model with a linear driving force for the adsorption rate has been used for modeling the SMBR system. The adsorption equilibrium and kinetics parameters are estimated from the batch and single column injection experiments by the inverse method. The multiple objectives are to maximize the production rate of DOWANOL™ PMA glycol ether, maximize the conversion of the esterification reaction and minimize the consumption of DOWANOL™ PM glycol ether which also acts as the desorbent in the chromatographic separation. It is shown that ModiCon achieves a higher productivity by 12-36% over the conventional operation with higher product purity and recovery. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Crawling and walking infants encounter objects differently in a multi-target environment.

    PubMed

    Dosso, Jill A; Boudreau, J Paul

    2014-10-01

    From birth, infants move their bodies in order to obtain information and stimulation from their environment. Exploratory movements are important for the development of an infant's understanding of the world and are well established as being key to cognitive advances. Newly acquired motor skills increase the potential actions available to the infant. However, the way that infants employ potential actions in environments with multiple potential targets is undescribed. The current work investigated the target object selections of infants across a range of self-produced locomotor experience (11- to 14-month-old crawlers and walkers). Infants repeatedly accessed objects among pairs of objects differing in both distance and preference status, some requiring locomotion. Overall, their object actions were found to be sensitive to object preference status; however, the role of object distance in shaping object encounters was moderated by movement status. Crawlers' actions appeared opportunistic and were biased towards nearby objects while walkers' actions appeared intentional and were independent of object position. Moreover, walkers' movements favoured preferred objects more strongly for children with higher levels of self-produced locomotion experience. The multi-target experimental situation used in this work parallels conditions faced by foraging organisms, and infants' behaviours were discussed with respect to optimal foraging theory. There is a complex interplay between infants' agency, locomotor experience, and environment in shaping their motor actions. Infants' movements, in turn, determine the information and experiences offered to infants by their micro-environment.

  1. Portable and cost-effective pixel super-resolution on-chip microscope for telemedicine applications.

    PubMed

    Bishara, Waheb; Sikora, Uzair; Mudanyali, Onur; Su, Ting-Wei; Yaglidere, Oguzhan; Luckhart, Shirley; Ozcan, Aydogan

    2011-01-01

    We report a field-portable lensless on-chip microscope with a lateral resolution of <1 μm and a large field-of-view of ~24 mm². This microscope is based on digital in-line holography and a pixel super-resolution algorithm that processes multiple lensfree holograms to obtain a single high-resolution hologram. In its compact and cost-effective design, we utilize 23 light-emitting diodes butt-coupled to 23 multi-mode optical fibers, and a simple optical filter, with no moving parts. Weighing only ~95 grams, this field-portable microscope is demonstrated by imaging various objects, including human malaria parasites in thin blood smears.

  2. fVisiOn: 360-degree viewable glasses-free tabletop 3D display composed of conical screen and modular projector arrays.

    PubMed

    Yoshida, Shunsuke

    2016-06-13

    A novel glasses-free tabletop 3D display to float virtual objects on a flat tabletop surface is proposed. This method employs circularly arranged projectors and a conical rear-projection screen that serves as an anisotropic diffuser. Its practical implementation installs them beneath a round table and produces horizontal parallax in a circumferential direction without the use of high speed or a moving apparatus. Our prototype can display full-color, 5-cm-tall 3D characters on the table. Multiple viewers can share and enjoy its real-time animation from any angle of 360 degrees with appropriate perspectives as if the animated figures were present.

  3. The emotional effects of violations of causality, or How to make a square amusing

    PubMed Central

    Bressanelli, Daniela; Parovel, Giulia

    2012-01-01

    In Michotte's launching paradigm a square moves up to and makes contact with another square, which then moves off more slowly. In the triggering effect, the second square moves much faster than the first, eliciting an amusing impression. We generated 13 experimental displays in which there was always incongruity between cause and effect. We hypothesized that the comic impression would be stronger when objects are perceived as living agents and weaker when objects are perceived as mechanically non-animated. General findings support our hypothesis. PMID:23145274

  4. VizieR Online Data Catalog: Catalog of Suspected Nearby Young Stars (Riedel+, 2017)

    NASA Astrophysics Data System (ADS)

    Riedel, A. R.; Blunt, S. C.; Lambrides, E. L.; Rice, E. L.; Cruz, K. L.; Faherty, J. K.

    2018-04-01

    LocAting Constituent mEmbers In Nearby Groups (LACEwING) is a frequentist observation space kinematic moving group identification code. Using the spatial and kinematic information available about a target object (α, δ, Dist, μα, μδ, and γ), it determines the probability that the object is a member of each of the known nearby young moving groups (NYMGs). As with other moving group identification codes, LACEwING is capable of estimating memberships for stars with incomplete kinematic and spatial information. (2 data files).

  5. V773 Cas, QS Aql, AND BR Ind: ECLIPSING BINARIES AS PARTS OF MULTIPLE SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zasche, P.; Juryšek, J.; Nemravová, J.

    2017-01-01

    Eclipsing binaries remain crucial objects for our understanding of the universe. In particular, those that are components of multiple systems can help us solve the problem of the formation of these systems. Analysis of the radial velocities together with the light curve produced for the first time precise physical parameters of the components of the multiple systems V773 Cas, QS Aql, and BR Ind. Their visual orbits were also analyzed, which resulted in slightly improved orbital elements. What is typical for all these systems is that their most dominant source is the third distant component. The system V773 Cas consists of two similar G1-2V stars revolving in a circular orbit and a more distant component of the A3V type. Additionally, the improved value of parallax was calculated to be 17.6 mas. Analysis of QS Aql resulted in the following: the inner eclipsing pair is composed of B6V and F1V stars, and the third component is of about the B6 spectral type. The outer orbit has high eccentricity of about 0.95, and observations near its upcoming periastron passage between the years 2038 and 2040 are of high importance. Also, the parallax of the system was derived to be about 2.89 mas, moving the star much closer to the Sun than originally assumed. The system BR Ind was found to be a quadruple star consisting of two eclipsing K dwarfs orbiting each other with a period of 1.786 days; the distant component is a single-lined spectroscopic binary with an orbital period of about 6 days. Both pairs are moving around each other on their 148 year orbit.

  6. An Automatic Technique for Finding Faint Moving Objects in Wide Field CCD Images

    NASA Astrophysics Data System (ADS)

    Hainaut, O. R.; Meech, K. J.

    1996-09-01

    The traditional method used to find moving objects in astronomical images is to blink pairs or series of frames after registering them to align the background objects. While this technique is extremely efficient in terms of the low signal-to-noise ratio that the human sight can detect, it proved to be extremely time-, brain- and eyesight-consuming. The wide-field images provided by the large CCD mosaic recently built at IfA cover a field of view of 20 to 30' over 8192 x 8192 pixels. Blinking such images is an enormous task, comparable to that of blinking large photographic plates. However, as the data are available digitally (each image occupying 260 Mb of disk space), we are developing a set of computer codes to perform the moving object identification in sets of frames. This poster will describe the techniques we use in order to reach a detection efficiency as good as that of a human blinker; the main steps are to find all the objects in each frame (for which we rely on "S-Extractor"; Bertin & Arnouts (1996), A&ASS 117, 393), then to identify all the background objects, and finally to search the non-background objects for sources moving in a coherent fashion. We will also describe the results of this method applied to actual data from the 8k CCD mosaic. This work is being supported, in part, by NSF grant AST 92-21318.
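
    The final step, searching the non-background detections for coherent motion, can be sketched as a test that a candidate's positions across the frame epochs fit straight-line, constant-rate motion. The tolerance and the sample data below are illustrative; this is not the IfA pipeline itself.

      import numpy as np

      def moves_coherently(times, xs, ys, max_residual_pix=0.5):
          """True if detections (xs, ys) at epochs `times` fit linear motion.

          Fits x(t) and y(t) with straight lines and accepts the candidate when
          the worst residual stays below `max_residual_pix` (assumed tolerance).
          """
          t = np.asarray(times, dtype=float)
          worst = 0.0
          for coord in (np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)):
              slope, intercept = np.polyfit(t, coord, 1)
              worst = max(worst, np.abs(coord - (slope * t + intercept)).max())
          return worst <= max_residual_pix

      # A slow mover drifting a few pixels per hour across three epochs (hypothetical).
      print(moves_coherently([0.0, 1.0, 2.1], [100.0, 102.4, 105.1], [50.0, 49.1, 48.0]))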

  7. Acoustical-Levitation Chamber for Metallurgy

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Trinh, E.; Wang, T. G.; Elleman, D. D.; Jacobi, N.

    1983-01-01

    Sample moved to different positions for heating and quenching. Acoustical levitation chamber selectively excited in fundamental and second-harmonic longitudinal modes to hold sample at one of three stable postions: A, B, or C. Levitated object quickly moved from one of these positions to another by changing modes. Object rapidly quenched at A or C after heating in furnace region at B.

  8. Another Way of Tracking Moving Objects Using Short Video Clips

    ERIC Educational Resources Information Center

    Vera, Francisco; Romanque, Cristian

    2009-01-01

    Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…

  9. Advanced Three-Dimensional Display System

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2005-01-01

    A desktop-scale, computer-controlled display system, initially developed for NASA and now known as the VolumeViewer(TradeMark), generates three-dimensional (3D) images of 3D objects in a display volume. This system differs fundamentally from stereoscopic and holographic display systems: The images generated by this system are truly 3D in that they can be viewed from almost any angle, without the aid of special eyeglasses. It is possible to walk around the system while gazing at its display volume to see a displayed object from a changing perspective, and multiple observers standing at different positions around the display can view the object simultaneously from their individual perspectives, as though the displayed object were a real 3D object. At the time of writing this article, only partial information on the design and principle of operation of the system was available. It is known that the system includes a high-speed, silicon-backplane, ferroelectric-liquid-crystal spatial light modulator (SLM), multiple high-power lasers for projecting images in multiple colors, a rotating helix that serves as a moving screen for displaying voxels [volume cells or volume elements, in analogy to pixels (picture cells or picture elements) in two-dimensional (2D) images], and a host computer. The rotating helix and its motor drive are the only moving parts. Under control by the host computer, a stream of 2D image patterns is generated on the SLM and projected through optics onto the surface of the rotating helix. The system utilizes a parallel pixel/voxel-addressing scheme: All the pixels of the 2D pattern on the SLM are addressed simultaneously by laser beams. This parallel addressing scheme overcomes the difficulty of achieving both high resolution and a high frame rate in a raster scanning or serial addressing scheme. It has been reported that the structure of the system is simple and easy to build, that the optical design and alignment are not difficult, and that the system can be built by use of commercial off-the-shelf products. A prototype of the system displays an image of 1,024 by 768 by 170 (=133,693,440) voxels. In future designs, the resolution could be increased. The maximum number of voxels that can be generated depends upon the spatial resolution of SLM and the speed of rotation of the helix. For example, one could use an available SLM that has 1,024 by 1,024 pixels. Incidentally, this SLM is capable of operation at a switching speed of 300,000 frames per second. Implementation of full-color displays in future versions of the system would be straightforward: One could use three SLMs for red, green, and blue, respectively, and the colors of the voxels could be automatically controlled. An optically simpler alternative would be to use a single red/green/ blue light projector and synchronize the projection of each color with the generation of patterns for that color on a single SLM.

  10. Kinesin-microtubule interactions during gliding assays under magnetic force

    NASA Astrophysics Data System (ADS)

    Fallesen, Todd L.

    Conventional kinesin is a motor protein capable of converting the chemical energy of ATP into mechanical work. In the cell, this is used to actively transport vesicles through the intracellular matrix. The relationship between the velocity of a single kinesin, as it works against an increasing opposing load, has been well studied. The relationship between the velocity of a cargo being moved by multiple kinesin motors against an opposing load has not been established. A major difficulty in determining the force-velocity relationship for multiple motors is determining the number of motors that are moving a cargo against an opposing load. Here I report on a novel method for detaching microtubules bound to a superparamagnetic bead from kinesin anchor points in an upside down gliding assay using a uniform magnetic field perpendicular to the direction of microtubule travel. The anchor points are presumably kinesin motors bound to the surface which microtubules are gliding over. Determining the distance between anchor points, d, allows the calculation of the average number of kinesins, n, that are moving a microtubule. It is possible to calculate the fraction of motors able to move microtubules as well, which is determined to be ˜ 5%. Using a uniform magnetic field parallel to the direction of microtubule travel, it is possible to impart a uniform magnetic field on a microtubule bound to a superparamagnetic bead. We are able to decrease the average velocity of microtubules driven by multiple kinesin motors moving against an opposing force. Using the average number of kinesins on a microtubule, we estimate that there are an average 2-7 kinesins acting against the opposing force. By fitting Gaussians to the smoothed distributions of microtubule velocities acting against an opposing force, multiple velocities are seen, presumably for n, n-1, n-2, etc motors acting together. When these velocities are scaled for the average number of motors on a microtubule, the force-velocity relationship for multiple motors follows the same trend as for one motor, supporting the hypothesis that multiple motors share the load.

  11. Occupational injuries and sick leaves in household moving works.

    PubMed

    Hwan Park, Myoung; Jeong, Byung Yong

    2017-09-01

    This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving processes in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type of accident and agency of accidents. The most critical form in the level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked as a 'High' level of risk management were in the forms of slips, being struck by objects and musculoskeletal disorders in the 'manual materials handling' process. Also, falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck' and driving accidents in the process of 'transport' were ranked 'High'. The findings of this study can be used to develop more effective accident prevention policy reflecting different circumstances and conditions to reduce occupational accidents in household moving works.

  12. Measuring attention using flash-lag effect.

    PubMed

    Shioiri, Satoshi; Yamamoto, Ken; Oshida, Hiroki; Matsubara, Kazuya; Yaguchi, Hirohisa

    2010-08-13

    We investigated the effect of attention on the flash-lag effect (FLE) in order to determine whether the FLE can be used to estimate the effect of visual attention. The FLE is the phenomenon whereby a flash aligned with a moving object is perceived to lag behind the moving object, and several studies have shown that attention reduces its magnitude. We measured the FLE as a function of the number or speed of moving objects. The results showed that the effect of cueing, which we attributed to attention, on the FLE increased monotonically with the number or the speed of the objects. This suggests that the amount of attention can be estimated by measuring the FLE, assuming that more attention is required to attend to a larger number of objects or to objects moving at higher speeds. On the basis of this presumption, we attempted to measure the spatial spread of visual attention by FLE measurements. The estimated spatial spreads were similar to those estimated by other experimental methods.

  13. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  14. Heterodyne laser Doppler distance sensor with phase coding measuring stationary as well as laterally and axially moving objects

    NASA Astrophysics Data System (ADS)

    Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.

    2010-02-01

    Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors would be predestined for this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution since its position uncertainty is in principle independent of the lateral object velocity in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS enabling precise and dynamic position and shape measurements also of fast moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.

  15. Efficient Spatiotemporal Clutter Rejection and Nonlinear Filtering-based Dim Resolved and Unresolved Object Tracking Algorithms

    NASA Astrophysics Data System (ADS)

    Tartakovsky, A.; Tong, M.; Brown, A. P.; Agh, C.

    2013-09-01

    We develop efficient spatiotemporal image processing algorithms for rejection of non-stationary clutter and tracking of multiple dim objects using non-linear track-before-detect methods. For clutter suppression, we include an innovative image alignment (registration) algorithm. The images are assumed to contain elements of the same scene, but taken at different angles, from different locations, and at different times, with substantial clutter non-stationarity. These challenges are typical for space-based and surface-based IR/EO moving sensors, e.g., highly elliptical orbit or low earth orbit scenarios. The algorithm assumes that the images are related via a planar homography, also known as the projective transformation. The parameters are estimated in an iterative manner, at each step adjusting the parameter vector so as to achieve improved alignment of the images. Operating in the parameter space rather than in the coordinate space is a new idea, which makes the algorithm more robust with respect to noise as well as to large inter-frame disturbances, while operating at real-time rates. For dim object tracking, we include new advancements to a particle non-linear filtering-based track-before-detect (TrbD) algorithm. The new TrbD algorithm includes both real-time full image search for resolved objects not yet in track and joint super-resolution and tracking of individual objects in closely spaced object (CSO) clusters. The real-time full image search provides near-optimal detection and tracking of multiple extremely dim, maneuvering objects/clusters. The super-resolution and tracking CSO TrbD algorithm provides efficient near-optimal estimation of the number of unresolved objects in a CSO cluster, as well as the locations, velocities, accelerations, and intensities of the individual objects. We demonstrate that the algorithm is able to accurately estimate the number of CSO objects and their locations when the initial uncertainty on the number of objects is large. We demonstrate performance of the TrbD algorithm both for satellite-based and surface-based EO/IR surveillance scenarios.
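
    For reference, registration under a planar homography can be sketched with standard feature matching and warping tools. The sketch below is a generic OpenCV-based illustration of that model, not the authors' algorithm, which iterates directly in the space of the homography parameters rather than over matched coordinates.

      import cv2
      import numpy as np

      def align_to_reference(moving, reference):
          """Warp gray-scale image `moving` onto `reference` under a planar homography."""
          orb = cv2.ORB_create(2000)
          k1, d1 = orb.detectAndCompute(moving, None)
          k2, d2 = orb.detectAndCompute(reference, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          h, w = reference.shape[:2]
          return cv2.warpPerspective(moving, H, (w, h))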

  16. Wired and Wireless Camera Triggering with Arduino

    NASA Astrophysics Data System (ADS)

    Kauhanen, H.; Rönnholm, P.

    2017-10-01

    Synchronous triggering is an important task that allows simultaneous data capture from multiple cameras. Accurate synchronization enables 3D measurements of moving objects or from a moving platform. In this paper, we describe one wired and four wireless variations of Arduino-based low-cost remote trigger systems designed to provide a synchronous trigger signal for industrial cameras. Our wireless systems utilize 315 MHz or 434 MHz frequencies with noise filtering capacitors. In order to validate the synchronization accuracy, we developed a prototype of a rotating trigger detection system (named RoTriDeS). This system is suitable to detect the triggering accuracy of global shutter cameras. As a result, the wired system indicated an 8.91 μs mean triggering time difference between two cameras. Corresponding mean values for the four wireless triggering systems varied between 7.92 and 9.42 μs. Presented values include both camera-based and trigger-based desynchronization. Arduino-based triggering systems appeared to be feasible, and they have the potential to be extended to more complicated triggering systems.

  17. Finite difference time domain modeling of steady state scattering from jet engines with moving turbine blades

    NASA Technical Reports Server (NTRS)

    Ryan, Deirdre A.; Langdon, H. Scott; Beggs, John H.; Steich, David J.; Luebbers, Raymond J.; Kunz, Karl S.

    1992-01-01

    The approach chosen to model steady state scattering from jet engines with moving turbine blades is based upon the Finite Difference Time Domain (FDTD) method. The FDTD method is a numerical electromagnetic program based upon the direct solution in the time domain of Maxwell's time dependent curl equations throughout a volume. One of the strengths of this method is the ability to model objects with complicated shape and/or material composition. General time domain functions may be used as source excitations. For example, a plane wave excitation may be specified as a pulse containing many frequencies and at any incidence angle to the scatterer. A best fit to the scatterer is accomplished using cubical cells in the standard cartesian implementation of the FDTD method. The material composition of the scatterer is determined by specifying its electrical properties at each cell on the scatterer. Thus, the FDTD method is a suitable choice for problems with complex geometries evaluated at multiple frequencies. It is assumed that the reader is familiar with the FDTD method.
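    As a hedged illustration of the FDTD idea summarized above (direct leapfrog time-stepping of Maxwell's curl equations on a grid, with a pulsed source), the following minimal one-dimensional sketch uses normalized units. The actual work uses a full three-dimensional cubical-cell implementation, so this is not the authors' code.

```python
import numpy as np

# Minimal 1D FDTD (Yee) update for a z-polarized wave along x in free space,
# illustrating the leapfrog time-stepping of Maxwell's curl equations; the
# paper's engine works on a 3D cubical-cell Cartesian grid instead.
nx, nsteps = 400, 600
ez = np.zeros(nx)          # electric field samples
hy = np.zeros(nx - 1)      # magnetic field samples, staggered half a cell
courant = 0.5              # normalized dt*c/dx, must be <= 1 for stability

for n in range(nsteps):
    # Update H from the spatial derivative of E (one curl equation).
    hy += courant * (ez[1:] - ez[:-1])
    # Update E from the spatial derivative of H (the other curl equation).
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # Soft source: a Gaussian pulse injected at one grid point, analogous to
    # the broadband pulse excitation mentioned in the abstract.
    ez[50] += np.exp(-((n - 40) / 12.0) ** 2)

print("peak |Ez| after propagation:", np.abs(ez).max())
```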

  18. Acoustic field in unsteady moving media

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Maestrello, L.; Ting, L.

    1995-01-01

    In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t, x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D whose union equals D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
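    A hedged reconstruction of the arrival condition implied by the abstract (the precise form is an assumption consistent with the description, not quoted from the paper): a signal emitted from the source point x_S at retarded time τ reaches the observer x_P at time t when

```latex
\left|\, \mathbf{x}_P - \mathbf{x}_S - \mathbf{i}\int_{\tau}^{t} U(s)\,\mathrm{d}s \,\right|
  \;=\; c\,(t - \tau), \qquad 0 < \tau < t .
```

    The multiplicity of S is then the number of roots τ of this equation in (0, t); consistent with the abstract, it equals 1 while U remains subsonic and can exceed 1 when U becomes supersonic.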

  19. Passive synthetic aperture radar imaging of ground moving targets

    NASA Astrophysics Data System (ADS)

    Wacks, Steven; Yazici, Birsen

    2012-05-01

    In this paper we present a method for imaging ground moving targets using passive synthetic aperture radar. A passive radar imaging system uses small, mobile receivers that do not radiate any energy. For these reasons, passive imaging systems result in significant cost, manufacturing, and stealth advantages. The received signals are obtained by multiple airborne receivers collecting scattered waves due to illuminating sources of opportunity such as commercial television, radio, and cell phone towers. We describe a novel forward model and a corresponding filtered-backprojection type image reconstruction method combined with entropy optimization. Our method determines the location and velocity of multiple targets moving at different velocities. Furthermore, it can accommodate arbitrary imaging geometries. We present numerical simulations to verify the imaging method.

  20. Tracking and recognition of multiple human targets moving in a wireless pyroelectric infrared sensor network.

    PubMed

    Xiong, Ji; Li, Fangmin; Zhao, Ning; Jiang, Na

    2014-04-22

    With its low cost and easy deployment, the distributed wireless pyroelectric infrared sensor network has attracted extensive interest, with the aim of making it an alternative to infrared video sensors in thermal biometric applications for tracking and identifying human targets. In these applications, effectively processing signals collected from sensors and extracting the features of different human targets has become crucial. This paper proposes the application of empirical mode decomposition and the Hilbert-Huang transform to extract features of moving human targets both in the time domain and the frequency domain. Moreover, the support vector machine is selected as the classifier. The experimental results demonstrate that by using this method the identification rates of multiple moving human targets are around 90%.
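    As a hedged sketch of this processing chain, the code below substitutes SciPy's plain Hilbert transform for the empirical mode decomposition stage (a deliberate simplification of the full Hilbert-Huang transform), extracts simple time- and frequency-domain features, and trains a scikit-learn SVM on synthetic stand-in signals; the sampling rate, gait frequencies and all names are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.svm import SVC

FS = 50.0  # assumed PIR sampling rate in Hz

def hilbert_features(sig, fs=FS):
    """Time- and frequency-domain features from the analytic signal.

    Simplification: the paper decomposes the signal with EMD before the
    Hilbert transform; here the transform is applied to the raw signal.
    """
    analytic = hilbert(sig)
    amplitude = np.abs(analytic)                    # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency
    return np.array([amplitude.mean(), amplitude.std(),
                     inst_freq.mean(), inst_freq.std()])

# Synthetic stand-in for segmented PIR signals from two different walkers.
rng = np.random.default_rng(0)
t = np.arange(0, 4, 1 / FS)
features, labels = [], []
for person, gait_freq in enumerate([1.5, 2.3]):     # assumed gait frequencies
    for _ in range(40):
        sig = np.sin(2 * np.pi * gait_freq * t) + 0.3 * rng.standard_normal(t.size)
        features.append(hilbert_features(sig))
        labels.append(person)

X, y = np.vstack(features), np.array(labels)
clf = SVC(kernel="rbf").fit(X[::2], y[::2])         # train on half the segments
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```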

  1. Geometric Reasoning for Automated Planning

    NASA Technical Reports Server (NTRS)

    Clement, Bradley J.; Knight, Russell L.; Broderick, Daniel

    2012-01-01

    An important aspect of mission planning for NASA s operation of the International Space Station is the allocation and management of space for supplies and equipment. The Stowage, Configuration Analysis, and Operations Planning teams collaborate to perform the bulk of that planning. A Geometric Reasoning Engine is developed in a way that can be shared by the teams to optimize item placement in the context of crew planning. The ISS crew spends (at the time of this writing) a third or more of their time moving supplies and equipment around. Better logistical support and optimized packing could make a significant impact on operational efficiency of the ISS. Currently, computational geometry and motion planning do not focus specifically on the optimized orientation and placement of 3D objects based on multiple distance and containment preferences and constraints. The software performs reasoning about the manipulation of 3D solid models in order to maximize an objective function based on distance. It optimizes for 3D orientation and placement. Spatial placement optimization is a general problem and can be applied to object packing or asset relocation.

  2. Uncued Low SNR Detection with Likelihood from Image Multi Bernoulli Filter

    NASA Astrophysics Data System (ADS)

    Murphy, T.; Holzinger, M.

    2016-09-01

    Both space situational awareness (SSA) and space domain awareness (SDA) necessitate uncued, partially informed detection and orbit determination efforts for small space objects which often produce only low strength electro-optical signatures. General frame to frame detection and tracking of objects includes methods such as moving target indicator, multiple hypothesis testing, direct track-before-detect methods, and random finite set based multiobject tracking. This paper will apply the multi-Bernoulli filter to low signal-to-noise ratio (SNR), uncued detection of space objects for space domain awareness applications. The primary novel innovation in this paper is a detailed analysis of the existing state-of-the-art likelihood functions and a likelihood function, based on a binary hypothesis, previously proposed by the authors. The algorithm is tested on electro-optical imagery obtained from a variety of sensors at Georgia Tech, including the GT-SORT 0.5m Raven-class telescope, and a twenty degree field of view high frame rate CMOS sensor. In particular, a data set of an extended pass of the Hitomi Astro-H satellite approximately 3 days after loss of communication and potential break up is examined.
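    The paper's likelihood is specific to its multi-Bernoulli track-before-detect formulation; as a generic, hedged illustration of a binary-hypothesis likelihood for a dim object in Gaussian noise (which reduces to a matched filter), consider the sketch below. The template, noise level and SNR are assumed toy values, not the authors' model.

```python
import numpy as np

def log_likelihood_ratio(frame, template, sigma):
    """Per-frame log-likelihood ratio for a binary hypothesis test.

    H1: frame = template + Gaussian noise (object present at this state)
    H0: frame = Gaussian noise only
    Under i.i.d. N(0, sigma^2) noise the ratio reduces to a matched filter.
    Generic illustration only, not the specific likelihood of the paper.
    """
    return (np.sum(frame * template) - 0.5 * np.sum(template ** 2)) / sigma ** 2

rng = np.random.default_rng(1)
sigma = 1.0
template = np.zeros((16, 16))
template[7:9, 7:9] = 0.5          # dim 2x2 object, per-pixel SNR well below 1

noise_only = sigma * rng.standard_normal((16, 16))
with_object = template + sigma * rng.standard_normal((16, 16))
print("LLR noise only :", round(log_likelihood_ratio(noise_only, template, sigma), 2))
print("LLR with object:", round(log_likelihood_ratio(with_object, template, sigma), 2))
```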

  3. What makes a movement a gesture?

    PubMed

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  4. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  5. Vacuum force

    NASA Astrophysics Data System (ADS)

    Han, Yongquan

    2015-03-01

    To study the vacuum force, we must first clarify what a vacuum is: a vacuum is a region of space containing no air and no radiation. No absolute vacuum exists in space; the vacuum of space is relative, and therefore the vacuum force is also relative. Nevertheless, relatively vacuum regions of space certainly exist. When two regions of space are compared and one is a relative vacuum with respect to the other, a vacuum force must exist, and its direction points toward the vacuum region. Any object rotates and radiates. Rotation bends the radiation centripetally and produces gravity, which is relative; the non-gravitational part is the vacuum force. Gravity is centripetal: an attracted object tends toward centripetal motion or is already undergoing it. Since any object moves, gravity makes the object follow a curved path; that is to say, such curved motion must occur within the radiation range of the gravitating object, so gravity can only exist in non-vacuum regions, where it either makes an object in that region move along a curve (for example, the Earth moving around the Sun) or finally draws it onto the gravitating object, with which it then remains relatively static (for example, objects on the Earth move but cannot reach the first cosmic velocity).

  6. Realism and Effectiveness of Robotic Moving Targets

    DTIC Science & Technology

    2017-04-01

    scenario or be manually controlled. The targets can communicate with other nearby targets, which means they can move independently, as a group, or...present a realistic three-dimensional human-sized target that can freely move with semi-autonomous control. The U.S. Army Research Institute for...Procedure: Performance and survey data were collected during multiple training exercises from Soldiers who engaged the RHTTs. Different groups

  7. Coordinated control of micro-grid based on distributed moving horizon control.

    PubMed

    Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie

    2018-05-01

    This paper proposes a distributed moving horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design the power coordinated controller for each subsystem via moving horizon control by minimizing a suitable objective function. The objective function of the distributed moving horizon coordinated controller is chosen based on the principle that the wind power subsystem has priority to generate electricity, photovoltaic power generation coordinates with the wind power subsystem, and the battery is only activated to meet the load demand when necessary. The simulation results illustrate that the proposed distributed moving horizon coordinated controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
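    As a hedged, single-step sketch of the dispatch priority described above (wind first, photovoltaic second, battery only when necessary), the following uses scipy.optimize to minimize a weighted quadratic objective; the weights, bounds and power values are illustrative assumptions, not the paper's controller or tuning.

```python
import numpy as np
from scipy.optimize import minimize

def dispatch(load, wind_avail, pv_avail, batt_limit,
             w_wind=1.0, w_pv=5.0, w_batt=50.0):
    """One receding-horizon step: split the load among wind, PV and battery.

    The weights encode the priority described in the abstract (wind first,
    PV second, battery last); they are illustrative values only.
    """
    def objective(p):
        p_wind, p_pv, p_batt = p
        balance = (p_wind + p_pv + p_batt - load) ** 2      # meet the demand
        effort = (w_wind * (wind_avail - p_wind) ** 2        # prefer using wind
                  + w_pv * p_pv ** 2                         # then PV
                  + w_batt * p_batt ** 2)                     # battery last
        return 1e3 * balance + effort

    bounds = [(0, wind_avail), (0, pv_avail), (-batt_limit, batt_limit)]
    res = minimize(objective, x0=[wind_avail, 0.0, 0.0], bounds=bounds)
    return res.x

p_wind, p_pv, p_batt = dispatch(load=80.0, wind_avail=60.0, pv_avail=40.0, batt_limit=30.0)
print(f"wind {p_wind:.1f} kW, PV {p_pv:.1f} kW, battery {p_batt:.1f} kW")
```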

  8. Multiple Intelligences for Differentiated Learning

    ERIC Educational Resources Information Center

    Williams, R. Bruce

    2007-01-01

    There is an intricate literacy to Gardner's multiple intelligences theory that unlocks key entry points for differentiated learning. Using a well-articulated framework, rich with graphic representations, Williams provides a comprehensive discussion of multiple intelligences. He moves the teacher and students from curiosity, to confidence, to…

  9. Illusory object motion in the centre of a radial pattern: The Pursuit-Pursuing illusion.

    PubMed

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed.

  10. Interpretation of the function of the striate cortex

    NASA Astrophysics Data System (ADS)

    Garner, Bernardette M.; Paplinski, Andrew P.

    2000-04-01

    Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements in the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision as is seen in many experiments. It is proposed that the neurons in the striate cortex allow continuous angle changes needed to compensate for changes in orientation of the head, eyes and the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for the translation, some rotation and scaling of objects and provides a continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.

  11. Security Implications of OPC, OLE, DCOM, and RPC in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2006-01-01

    OPC is a collection of software programming standards and interfaces used in the process control industry. It is intended to provide open connectivity and vendor equipment interoperability. The use of OPC technology simplifies the development of control systems that integrate components from multiple vendors and support multiple control protocols. OPC-compliant products are available from most control system vendors, and are widely used in the process control industry. OPC was originally known as OLE for Process Control; the first standards for OPC were based on underlying services in the Microsoft Windows computing environment. These underlying services (OLE [Object Linking and Embedding], DCOM [Distributed Component Object Model], and RPC [Remote Procedure Call]) have been the source of many severe security vulnerabilities. It is not feasible to automatically apply vendor patches and service packs to mitigate these vulnerabilities in a control systems environment. Control systems using the original OPC data access technology can thus inherit the vulnerabilities associated with these services. Current OPC standardization efforts are moving away from the original focus on Microsoft protocols, with a distinct trend toward web-based protocols that are independent of any particular operating system. However, the installed base of OPC equipment consists mainly of legacy implementations of the OLE for Process Control protocols.

  12. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
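    The paper estimates position and velocity with an extended Kalman filter in world coordinates; the hedged sketch below shows the simpler linear constant-velocity Kalman filter on 2D detections as a stand-in for that tracking stage. The noise covariances and the simulated track are assumed values.

```python
import numpy as np

# Simplified stand-in for the paper's extended Kalman filter: a linear
# constant-velocity Kalman filter tracking one detected region in 2D.
dt = 1.0
F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],     # only position is measured
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)            # process noise (assumed)
R = 4.0 * np.eye(2)             # measurement noise (assumed)

x = np.zeros(4)
P = 10.0 * np.eye(4)

rng = np.random.default_rng(2)
true_velocity = np.array([1.5, -0.5])
for k in range(1, 21):
    z = k * true_velocity + rng.normal(scale=2.0, size=2)   # noisy detection
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

print("estimated velocity:", np.round(x[2:], 2), "true:", true_velocity)
```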

  13. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from the position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.

  14. Walking through doorways causes forgetting: Event structure or updating disruption?

    PubMed

    Pettijohn, Kyle A; Radvansky, Gabriel A

    2016-11-01

    According to event cognition theory, people segment experience into separate event models. One consequence of this segmentation is that when people transport objects from one location to another, memory is worse than if people move across a large location. In two experiments participants navigated through a virtual environment, and recognition memory was tested in either the presence or the absence of a location shift for objects that were recently interacted with (i.e., just picked up or set down). Of particular concern here is whether this location updating effect is due to (a) differences in retention intervals as a result of the navigation process, (b) a temporary disruption in cognitive processing that may occur as a result of the updating processes, or (c) a need to manage multiple event models, as has been suggested in prior research. Experiment 1 explored whether retention interval is driving this effect by recording travel times from the acquisition of an object and the probe time. The results revealed that travel times were similar, thereby rejecting a retention interval explanation. Experiment 2 explored whether a temporary disruption in processing is producing the effect by introducing a 3-second delay prior to the presentation of a memory probe. The pattern of results was not affected by adding a delay, thereby rejecting a temporary disruption account. These results are interpreted in the context of the event horizon model, which suggests that when there are multiple event models that contain common elements there is interference at retrieval, which compromises performance.

  15. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  16. A Mixed-Methods Evaluation of the "Move It Move It!" Before-School Incentive-Based Physical Activity Programme

    ERIC Educational Resources Information Center

    Garnett, Bernice R.; Becker, Kelly; Vierling, Danielle; Gleason, Cara; DiCenzo, Danielle; Mongeon, Louise

    2017-01-01

    Objective: Less than half of young people in the USA are meeting the daily physical activity requirements of at least 60 minutes of moderate or vigorous physical activity. A mixed-methods pilot feasibility assessment of "Move it Move it!" was conducted in the Spring of 2014 to assess the impact of a before-school physical activity…

  17. MoveU? Assessing a Social Marketing Campaign to Promote Physical Activity

    ERIC Educational Resources Information Center

    Scarapicchia, Tanya M. F.; Sabiston, Catherine M. F.; Brownrigg, Michelle; Blackburn-Evans, Althea; Cressy, Jill; Robb, Janine; Faulkner, Guy E. J.

    2015-01-01

    Objective: MoveU is a social marketing initiative aimed at increasing moderate-to-vigorous physical activity (MVPA) among undergraduate students. Using the Hierarchy of Effects model (HOEM), this study identified awareness of MoveU and examined associations between awareness, outcome expectations, self-efficacy, intentions, and MVPA. Participants:…

  18. Super-resolution photoacoustic microscopy using joint sparsity

    NASA Astrophysics Data System (ADS)

    Burgholzer, P.; Haltmeier, M.; Berer, T.; Leiss-Holzinger, E.; Murray, T. W.

    2017-07-01

    We present an imaging method that uses the random optical speckle patterns that naturally emerge as light propagates through strongly scattering media as a structured illumination source for photoacoustic imaging. Our approach, termed blind structured illumination photoacoustic microscopy (BSIPAM), was inspired by recent work in fluorescence microscopy where super-resolution imaging was demonstrated using multiple unknown speckle illumination patterns. We extend this concept to the multiple scattering domain using photoacoustics (PA), with the speckle pattern serving to generate ultrasound. The optical speckle pattern that emerges as light propagates through diffuse media provides structured illumination to an object placed behind a scattering wall. The photoacoustic signal produced by such illumination is detected using a focused ultrasound transducer. We demonstrate through both simulation and experiment that by acquiring multiple photoacoustic images, each produced by a different random and unknown speckle pattern, an image of an absorbing object can be reconstructed with a spatial resolution far exceeding that of the ultrasound transducer. We experimentally and numerically demonstrate a gain in resolution of more than a factor of two by using multiple speckle illuminations. The variations in the photoacoustic signals generated with random speckle patterns are utilized in BSIPAM using a novel reconstruction algorithm. Exploiting joint sparsity, this algorithm is capable of reconstructing the absorbing structure from measured PA signals with a resolution close to the speckle size. Another source of random excitation for photoacoustic imaging is small absorbing particles, including contrast agents, flowing through small vessels. For such a set-up, the joint sparsity is generated by the fact that all the particles move in the same vessels. Structured illumination in that case is not necessary.

  19. Method and System for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)

    2012-01-01

    A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.

  20. Motion Alters Color Appearance

    PubMed Central

    Hong, Sang-Wook; Kang, Min-Suk

    2016-01-01

    Chromatic induction compellingly demonstrates that chromatic context as well as spectral lights reflected from an object determines its color appearance. Here, we show that when one colored object moves around an identical stationary object, the perceived saturation of the stationary object decreases dramatically whereas the saturation of the moving object increases. These color appearance shifts in the opposite directions suggest that normalization induced by the object’s motion may mediate the shift in color appearance. We ruled out other plausible alternatives such as local adaptation, attention, and transient neural responses that could explain the color shift without assuming interaction between color and motion processing. These results demonstrate that the motion of an object affects both its own color appearance and the color appearance of a nearby object, suggesting a tight coupling between color and motion processing. PMID:27824098

  1. A multiple objective optimization approach to quality control

    NASA Technical Reports Server (NTRS)

    Seaman, Christopher Michael

    1991-01-01

    The use of product quality as the performance criterion for manufacturing system control is explored. The goal in manufacturing, for economic reasons, is to optimize product quality. The problem is that since quality is a rather nebulous product characteristic, there is seldom an analytic function that can be used as a measure. Therefore standard control approaches, such as optimal control, cannot readily be applied. A second problem with optimizing product quality is that it is typically measured along many dimensions: there are many aspects of quality which must be optimized simultaneously. Very often these different aspects are incommensurate and competing. The concept of optimality must now include accepting tradeoffs among the different quality characteristics. These problems are addressed using multiple objective optimization. It is shown that the quality control problem can be defined as a multiple objective optimization problem. A controller structure is defined using this as the basis. Then, an algorithm is presented which can be used by an operator to interactively find the best operating point. Essentially, the algorithm uses process data to provide the operator with two pieces of information: (1) if it is possible to simultaneously improve all quality criteria, then determine what changes to the process input or controller parameters should be made to do this; and (2) if it is not possible to improve all criteria, and the current operating point is not a desirable one, select a criterion in which a tradeoff should be made, and make input changes to improve all other criteria. The process is not operating at an optimal point in any sense if no tradeoff has to be made to move to a new operating point. This algorithm ensures that operating points are optimal in some sense and provides the operator with information about tradeoffs when seeking the best operating point. The multiobjective algorithm was implemented in two different injection molding scenarios: tuning of process controllers to meet specified performance objectives and tuning of process inputs to meet specified quality objectives. Five case studies are presented.
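    As a hedged illustration of the two cases the interactive algorithm distinguishes (all criteria can improve simultaneously versus a trade-off is required), the sketch below classifies a candidate operating point against the current one; it is not the thesis's algorithm, and the criteria values are invented.

```python
import numpy as np

def classify_move(current, candidate, larger_is_better=True):
    """Classify a candidate operating point against the current one.

    Returns 'improves all criteria', 'requires a trade-off', or
    'dominated (worse or equal on every criterion)'. This mirrors the two
    cases the operator algorithm distinguishes; it is an illustrative
    sketch only, not the interactive procedure of the thesis.
    """
    cur = np.asarray(current, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    diff = cand - cur if larger_is_better else cur - cand
    if np.all(diff > 0):
        return "improves all criteria"
    if np.all(diff <= 0):
        return "dominated (worse or equal on every criterion)"
    return "requires a trade-off"

# Three quality criteria measured at the current point and for two candidates.
current = [0.82, 0.75, 0.90]
print(classify_move(current, [0.85, 0.78, 0.93]))   # improves all criteria
print(classify_move(current, [0.88, 0.70, 0.95]))   # requires a trade-off
```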

  2. Direct imaging and new technologies to search for substellar companions around MGs cool dwarfs

    NASA Astrophysics Data System (ADS)

    Gálvez-Ortiz, M. C.; Clarke, J. R. A.; Pinfield, D. J.; Folkes, S. L.; Jenkins, J. S.; García Pérez, A. E.; Burningham, B.; Day-Jones, A. C.; Jones, H. R. A.

    2011-07-01

    We describe here our project based on a search for sub-stellar companions (brown dwarfs and exo-planets) around young ultra-cool dwarfs (UCDs) and characterise their properties. We will use current and future technology (high contrast imaging, high-precision Doppler determinations) from the ground and space (VLT, ELT and JWST), to find companions to young objects. Members of young moving groups (MGs) have clear advantages in this field. We compiled a catalogue of young UCD objects and studied their membership in five known young moving groups: Local Association (Pleiades moving group, 20-150 Myr), Ursa Major group (Sirius supercluster, 300 Myr), Hyades supercluster (600 Myr), IC 2391 supercluster (35 Myr) and Castor moving group (200 Myr). To assess them as members we used different kinematic and spectroscopic criteria.

  3. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    PubMed

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency but not with a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination ability between this stimulus and a homogeneous green background was poor when the M-cone type was not modulated, or only slightly modulated, at a relatively high stimulus velocity (7 cm/s). However, discrimination was improved with slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.

  4. Moving template analysis of crack growth. 1: Procedure development

    NASA Astrophysics Data System (ADS)

    Padovan, Joe; Guo, Y. H.

    1994-06-01

    Based on a moving template procedure, this two part series will develop a method to follow the crack tip physics in a self-adaptive manner which provides a uniformly accurate prediction of crack growth. For multiple crack environments, this is achieved by attaching a moving template to each crack tip. The templates are each individually oriented to follow the associated growth orientation and rate. In this part, the essentials of the procedure are derived for application to fatigue crack environments. Overall the scheme derived possesses several hierarchical levels, i.e. the global model, the interpolatively tied moving template, and a multilevel element death option to simulate the crack wake. To speed up computation, the hierarchical polytree scheme is used to reorganize the global stiffness inversion process. In addition to developing the various features of the scheme, the accuracy of predictions for various crack lengths is also benchmarked. Part 2 extends the scheme to multiple crack problems. Extensive benchmarking is also presented to verify the scheme.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winnek, D.F.

    A method and apparatus for making X-ray photographs which can be viewed in three dimensions with the use of a lenticular screen. The apparatus includes a linear tomograph having a moving X-ray source on one side of a support on which an object is to be placed so that X-rays can pass through the object to the opposite side of the support. A movable cassette on the opposite side of the support moves in a direction opposite to the direction of travel of the X-ray source as the source moves relative to the support. The cassette has an intensifying screen, a grating mask provided with uniformly spaced slots for passing X-rays, a lenticular member adjacent to the mask, and a photographic emulsion adjacent to the opposite side of the lenticular member. The cassette has a power device for moving the lenticular member and the emulsion relative to the mask a distance equal to the spacing between a pair of adjacent slots in the mask. The X-rays from the source, after passing through an object on the support, pass into the cassette through the slots of the mask and are focused on the photographic emulsion to result in a continuum of X-ray views of the object. When the emulsion is developed and viewed through the lenticular member, the object can be seen in three dimensions.

  6. Moving towards Inclusion? The First-Degree Results of Students with and without Disabilities in Higher Education in the UK: 1998-2005

    ERIC Educational Resources Information Center

    Pumfrey, Peter

    2008-01-01

    Is the currently selective UK higher education (HE) system becoming more inclusive? Between 1998/99 and 2004/05, in relation to talented students with disabilities, has the UK government's HE policy implementation moved HE towards achieving two of the government's key HE objectives for 2010? These objectives are: (a) increasing HE participation…

  7. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real world application an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between sensor and reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points with diverging movement from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the image such that the output from the algorithm could be compared with the artificially added stabilization errors.
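    A hedged sketch of the core estimation step described above: given matched high-contrast points in consecutive frames, take a robust (median) displacement as the stabilization error and flag points whose motion diverges from it as candidate moving objects. The threshold and data are assumptions, not values from the paper.

```python
import numpy as np

def stabilization_error(prev_pts, curr_pts, outlier_thresh=3.0):
    """Estimate the global image shift from matched point lists.

    prev_pts, curr_pts: (N, 2) matched point positions in consecutive frames.
    The median displacement is taken as the stabilization error (most points
    are assumed to lie on static ground); points deviating by more than
    outlier_thresh pixels from it are flagged as candidate moving objects.
    Illustrative values only.
    """
    disp = curr_pts - prev_pts
    shift = np.median(disp, axis=0)
    residual = np.linalg.norm(disp - shift, axis=1)
    movers = residual > outlier_thresh
    return shift, movers

rng = np.random.default_rng(3)
prev = rng.uniform(0, 512, size=(30, 2))
curr = prev + np.array([4.0, -2.5]) + rng.normal(scale=0.4, size=(30, 2))
curr[5] += np.array([12.0, 9.0])          # one point sits on a moving object

shift, movers = stabilization_error(prev, curr)
print("estimated stabilization error:", np.round(shift, 2))
print("indices flagged as moving objects:", np.flatnonzero(movers))
```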

  8. Tracking and Recognition of Multiple Human Targets Moving in a Wireless Pyroelectric Infrared Sensor Network

    PubMed Central

    Xiong, Ji; Li, Fangmin; Zhao, Ning; Jiang, Na

    2014-01-01

    With its low cost and easy deployment, the distributed wireless pyroelectric infrared sensor network has attracted extensive interest, with the aim of making it an alternative to infrared video sensors in thermal biometric applications for tracking and identifying human targets. In these applications, effectively processing signals collected from sensors and extracting the features of different human targets has become crucial. This paper proposes the application of empirical mode decomposition and the Hilbert-Huang transform to extract features of moving human targets both in the time domain and the frequency domain. Moreover, the support vector machine is selected as the classifier. The experimental results demonstrate that by using this method the identification rates of multiple moving human targets are around 90%. PMID:24759117

  9. Command Wire Sensor Measurements

    DTIC Science & Technology

    2012-09-01

    coupled with the extreme harsh terrain has meant that few of these techniques have proved robust enough when moved from the laboratory to the field...to image stationary objects and does not accurately image moving targets. Moving targets can be seriously distorted and displaced from their true...battlefield and for imaging of fixed targets. Moving targets can be detected with a SAR if they have a Doppler frequency shift greater than the

  10. Coaching Ourselves to Perform Multiplicity and Advocacy: A Response to Stephens and Mills

    ERIC Educational Resources Information Center

    Cahnmann-Taylor, Melisa

    2014-01-01

    Cahnmann-Taylor draws on Boalian Theatre of the Oppressed to offer a practice for literacy teachers and coaches that can open up multiple perspectives and multiple levels of intentions and motivations for a teacher's decision making. She challenges coaches and teachers to engage in artistic examinations of multiplicity to move toward performing…

  11. GMTI Direction of Arrival Measurements from Multiple Phase Centers.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doerry, Armin W.; Bickel, Douglas L.

    2015-03-01

    Ground Moving Target Indicator (GMTI) radar attempts to detect and locate targets with unknown motion. Very slow-moving targets are difficult to locate in the presence of surrounding clutter. This necessitates multiple antenna phase centers (or equivalent) to offer independent Direction of Arrival (DOA) measurements. DOA accuracy and precision generally remain dependent on target Signal-to-Noise Ratio (SNR), Clutter-to-Noise Ratio (CNR), scene topography, interfering signals, and a number of antenna parameters. This is true even for adaptive techniques like Space-Time Adaptive Processing (STAP) algorithms.
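    As a hedged illustration of why multiple phase centers enable DOA measurement, the sketch below applies the basic two-phase-center interferometric relation between phase difference and angle of arrival; it ignores clutter, SNR effects and STAP processing, and the radar parameters are assumed values, not those of the report.

```python
import numpy as np

def doa_from_phase(delta_phi, baseline, wavelength):
    """Direction of arrival from the phase difference between two phase centers.

    delta_phi: measured interferometric phase difference in radians
    baseline:  spacing between the phase centers in meters
    wavelength: radar wavelength in meters
    Assumes the phase is unambiguous (|delta_phi| < pi); a generic
    interferometric relation, not the report's STAP estimator.
    """
    return np.degrees(np.arcsin(wavelength * delta_phi / (2 * np.pi * baseline)))

wavelength = 0.03          # 10 GHz carrier, assumed
baseline = 0.24            # assumed phase-center spacing in meters
true_angle = np.radians(2.0)
delta_phi = 2 * np.pi * baseline * np.sin(true_angle) / wavelength
print("recovered DOA (deg):", round(doa_from_phase(delta_phi, baseline, wavelength), 3))
```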

  12. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system delivers simultaneously an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.

  13. The left inferior parietal lobe represents stored hand-postures for object use and action prediction

    PubMed Central

    van Elk, Michiel

    2014-01-01

    Action semantics enables us to plan actions with objects and to predict others' object-directed actions as well. Previous studies have suggested that action semantics are represented in a fronto-parietal action network that has also been implicated to play a role in action observation. In the present fMRI study it was investigated how activity within this network changes as a function of the predictability of an action involving multiple objects and requiring the use of action semantics. Participants performed an action prediction task in which they were required to anticipate the use of a centrally presented object that could be moved to an associated target object (e.g., hammer-nail). The availability of actor information (i.e., presenting a hand grasping the central object) and the number of possible target objects (i.e., 0, 1, or 2 target objects) were independently manipulated, resulting in different levels of predictability. It was found that making an action prediction based on actor information resulted in an increased activation in the extrastriate body area (EBA) and the fronto-parietal action observation network (AON). Predicting actions involving a target object resulted in increased activation in the bilateral IPL and frontal motor areas. Within the AON, activity in the left inferior parietal lobe (IPL) and the left premotor cortex (PMC) increased as a function of the level of action predictability. Together these findings suggest that the left IPL represents stored hand-postures that can be used for planning object-directed actions and for predicting others' actions as well. PMID:24795681

  14. Congruity Effects in Time and Space: Behavioral and ERP Measures

    ERIC Educational Resources Information Center

    Teuscher, Ursina; McQuire, Marguerite; Collins, Jennifer; Coulson, Seana

    2008-01-01

    Two experiments investigated whether motion metaphors for time affected the perception of spatial motion. Participants read sentences either about literal motion through space or metaphorical motion through time written from either the ego-moving or object-moving perspective. Each sentence was followed by a cartoon clip. Smiley-moving clips showed…

  15. Reinventing User Applications for Mission Control

    NASA Technical Reports Server (NTRS)

    Trimble, Jay Phillip; Crocker, Alan R.

    2010-01-01

    In 2006, NASA Ames Research Center's (ARC) Intelligent Systems Division and NASA Johnson Space Center's (JSC) Mission Operations Directorate (MOD) began a collaboration to move user applications for JSC's mission control center to a new software architecture, intended to replace the existing user applications being used for the Space Shuttle and the International Space Station. It must also carry NASA/JSC mission operations forward to the future, meeting the needs of NASA's exploration programs beyond low Earth orbit. Key requirements for the new architecture, called Mission Control Technologies (MCT), are that end users must be able to compose and build their own software displays without the need for programming, or direct support and approval from a platform services organization. Developers must be able to build MCT components using industry standard languages and tools. Each component of MCT must be interoperable with other components, regardless of what organization develops them. For platform service providers and MOD management, MCT must be cost effective, maintainable and evolvable. MCT software is built from components that are presented to users as composable user objects. A user object is an entity that represents a domain object such as a telemetry point, a command, a timeline, an activity, or a step in a procedure. User objects may be composed and reused; for example, a telemetry point may be used in a traditional monitoring display, and that same telemetry user object may be composed into a procedure step. In either display, that same telemetry point may be shown in different views, such as a plot, an alphanumeric, or a meta-data view, and those views may be changed live and in place. MCT presents users with a single unified user environment that contains all the objects required to perform applicable flight controller tasks; thus users do not have to use multiple applications, and the traditional boundaries that exist between multiple heterogeneous applications disappear, leaving open the possibility of new operations concepts that are not constrained by the traditional applications paradigm.

  16. Context-aware pattern discovery for moving object trajectories

    NASA Astrophysics Data System (ADS)

    Sharif, Mohammad; Asghar Alesheikh, Ali; Kaffash Charandabi, Neda

    2018-05-01

    Movement of point objects is highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns, while accounting for the contextual information, helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering moving object patterns is analyzing the similarities of their trajectories. This article, therefore, contextualizes the similarity measure of trajectories by not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. Then, the results of similarity searches are utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets that were obtained from commercial airplanes and the weather information during the flights. The results yielded the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information because it can enhance and restrict movements.
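    Since the similarity search rests on dynamic time warping, here is a hedged, minimal DTW implementation over multi-dimensional samples (spatial coordinates optionally stacked with contextual attributes such as wind); it uses a plain Euclidean ground distance and no warping window, and is not the paper's multi-dimensional, context-weighted variant.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two trajectories.

    a, b: (n, d) and (m, d) arrays whose rows are samples; each sample may
    stack spatial coordinates with contextual attributes, which is how a
    context-aware similarity can be formed. Plain Euclidean ground distance
    and no warping window, for illustration only.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

t = np.linspace(0, 2 * np.pi, 60)
traj_a = np.column_stack([t, np.sin(t)])
traj_b = np.column_stack([t, np.sin(t + 0.4)])          # time-shifted copy
traj_c = np.column_stack([t, np.cos(3 * t)])            # different behaviour
print("A vs B:", round(dtw_distance(traj_a, traj_b), 2))
print("A vs C:", round(dtw_distance(traj_a, traj_c), 2))
```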

  17. Impact-induced acceleration by obstacles

    NASA Astrophysics Data System (ADS)

    Corbin, N. A.; Hanna, J. A.; Royston, W. R.; Singh, H.; Warner, R. B.

    2018-05-01

    We explore a surprising phenomenon in which an obstruction accelerates, rather than decelerates, a moving flexible object. It has been claimed that the right kind of discrete chain falling onto a table falls faster than a free-falling body. We confirm and quantify this effect, reveal its complicated dependence on angle of incidence, and identify multiple operative mechanisms. Prior theories for direct impact onto flat surfaces, which involve a single constitutive parameter, match our data well if we account for a characteristic delay length that must impinge before the onset of excess acceleration. Our measurements provide a robust determination of this parameter. This supports the possibility of modeling such discrete structures as continuous bodies with a complicated constitutive law of impact that includes angle of incidence as an input.

  18. Illusory object motion in the centre of a radial pattern: The Pursuit–Pursuing illusion

    PubMed Central

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed. PMID:23145267

  19. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as a HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup through a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  20. Speed skills: measuring the visual speed analyzing properties of primate MT neurons.

    PubMed

    Perrone, J A; Thiele, A

    2001-05-01

    Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to the object speed. These findings indicate that MT is not only a key structure in the analysis of direction of motion and depth perception, but also in the analysis of object speed.
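    A hedged sketch of the analysis idea: for a speed-tuned neuron the response ridge in the spatiotemporal frequency plane follows tf = speed x sf, so the preferred speed can be estimated from the map's orientation. The synthetic tuning map and the weighted fit below are assumptions for illustration, not the authors' fitting procedure.

```python
import numpy as np

# For an idealized speed-tuned neuron the peak response lies along the ridge
# tf = speed * sf in the spatiotemporal frequency plane, so the preferred
# speed can be estimated from the map's orientation. Synthetic map with
# assumed parameters; a sketch of the idea, not the paper's analysis code.
sf = np.linspace(0.5, 8, 40)          # spatial frequency, cycles/deg
tf = np.linspace(0.5, 32, 80)         # temporal frequency, Hz
SF, TF = np.meshgrid(sf, tf)

true_speed = 4.0                       # deg/s
response = np.exp(-((np.log2(TF) - np.log2(true_speed * SF)) ** 2) / 0.5)

# Response-weighted least-squares fit of tf on sf through the origin; the
# slope approximates the preferred speed.
w = response.ravel()
speed_hat = np.sum(w * SF.ravel() * TF.ravel()) / np.sum(w * SF.ravel() ** 2)
print("estimated preferred speed (deg/s):", round(speed_hat, 2))
```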

  1. Motion streaks do not influence the perceived position of stationary flashed objects.

    PubMed

    Pavan, Andrea; Bellacosa Marotti, Rosilari

    2012-01-01

    In the present study, we investigated whether motion streaks, produced by fast moving dots (Geisler, 1999), distort the positional map of stationary flashed objects producing the well-known motion-induced position shift illusion (MIPS). The illusion relies on motion-processing mechanisms that induce local distortions in the positional map of the stimulus which is derived by shape-processing mechanisms. To measure the MIPS, two horizontally offset Gaussian blobs, placed above and below a central fixation point, were flashed over two fields of dots moving in opposite directions. Subjects judged the position of the top Gaussian blob relative to the bottom one. The results showed that neither fast (motion streaks) nor slow moving dots influenced the perceived spatial position of the stationary flashed objects, suggesting that background motion does not interact with the shape-processing mechanisms involved in MIPS.

  2. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

    Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
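
    As a rough illustration of frame-to-frame displacement estimation of the kind described above, the following Python sketch recovers an integer-pixel 2D shift between two grayscale frames by phase correlation. It is a generic stand-in, not the algorithm of the cited paper, and all names are illustrative.

        import numpy as np

        def estimate_shift(frame_a, frame_b):
            """Estimate the integer-pixel 2D shift of frame_b relative to frame_a
            by phase correlation (generic sketch, not the cited paper's method)."""
            A = np.fft.fft2(frame_a)
            B = np.fft.fft2(frame_b)
            cross_power = np.conj(A) * B
            cross_power /= np.abs(cross_power) + 1e-12
            corr = np.fft.ifft2(cross_power).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Peaks beyond half the frame size wrap around to negative shifts.
            return [int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape)]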

  3. Position and orientation determination system and method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpring, Lawrence J.; Farfan, Eduardo B.; Gordon, John R.

    A position determination system and method is provided that may be used for obtaining position and orientation information of a detector in a contaminated room. The system includes a detector, a sensor operably coupled to the detector, and a motor coupled to the sensor to move the sensor around the detector. A CPU controls the operation of the motor to move the sensor around the detector and determines distance and angle data from the sensor to an object. The method includes moving a sensor around the detector and measuring distance and angle data from the sensor to an object at incremental positions around the detector.

  4. Early Program Development

    NASA Image and Video Library

    1996-06-20

    Engineers at one of MSFC's vacuum chambers begin testing a microthruster model. The purpose of these tests is to collect sufficient data to enable NASA to develop microthrusters that will move the Space Shuttle, a future space station, or any other space-related vehicle with the least amount of expended energy. When something is sent into outer space, the forces that try to pull it back to Earth (gravity) are very small, so it only requires a very small force to move very large objects. In space, a force equal to the weight of a paperclip can move an object as large as a car. Microthrusters are used to produce these small forces.

  5. The influence of visual motion on interceptive actions and perception.

    PubMed

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Space-based visual attention: a marker of immature selective attention in toddlers?

    PubMed

    Rivière, James; Brisson, Julie

    2014-11-01

    Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.

  7. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.

  8. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is first applied to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of the 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time per object point of the proposed method were reduced to 86.95% and 86.53%, and to 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
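
    A minimal sketch of the motion-compensation bookkeeping described above, assuming the 3-D object points of each frame are available as arrays (hypothetical names, not the authors' code): the previous frame's points are shifted by an estimated motion vector rather than recomputed, which is what the shift-invariance of the N-LUT makes cheap.

        import numpy as np

        def estimate_motion_vector(prev_points, curr_points):
            """Crude per-object motion estimate: difference of point-cloud centroids
            between two consecutive frames (illustrative estimator only)."""
            return curr_points.mean(axis=0) - prev_points.mean(axis=0)

        def motion_compensate(prev_points, motion_vector):
            """Shift the previous frame's (N, 3) object points by the motion vector
            instead of recomputing them from scratch."""
            return prev_points + np.asarray(motion_vector)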

  9. A landmark effect in the perceived displacement of objects.

    PubMed

    Higgins, J Stephen; Wang, Ranxiao Frances

    2010-01-01

    Perceiving the displacement of an object after a visual distraction is an essential ability for interacting with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable and the second one as moving (landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen or a moving-pattern mask, or when they simply disappeared briefly before reappearing one after the other. The first reappearing object was not required to remain visible while the second object reappeared to induce the bias. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism.

  10. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    ...facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect... Based on the above two...

  11. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics and military systems. In this paper, real-time visualization of 3D point-cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. On the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object-model construction and matching of the moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision, when tested and validated on three kinds of scenes under partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
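
    The position-estimation step can be illustrated with a minimal constant-velocity Kalman filter over the segmented target centroid. This is a generic sketch with illustrative noise settings, not the authors' implementation, and the adaptive particle filter is omitted.

        import numpy as np

        class ConstantVelocityKF:
            """Minimal 3D constant-velocity Kalman filter for a tracked centroid."""
            def __init__(self, dt=0.1, process_var=1e-2, meas_var=1e-2):
                self.x = np.zeros(6)                     # [px, py, pz, vx, vy, vz]
                self.P = np.eye(6)
                self.F = np.eye(6)
                self.F[:3, 3:] = dt * np.eye(3)          # position += velocity * dt
                self.Q = process_var * np.eye(6)
                self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
                self.R = meas_var * np.eye(3)

            def step(self, z):
                # Predict
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                # Update with the measured cluster centroid z = [px, py, pz]
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(6) - K @ self.H) @ self.P
                return self.x[:3]                        # filtered position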

  12. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

    Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics and military systems. In this paper, real-time visualization of 3D point-cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. On the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object-model construction and matching of the moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision, when tested and validated on three kinds of scenes under partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  13. Target-locking acquisition with real-time confocal (TARC) microscopy.

    PubMed

    Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A

    2007-07-09

    We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.

  14. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    PubMed

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
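
    A minimal sketch of time-resolved decoding of the kind reported here, assuming epoched EEG shaped (trials, channels, time points) and scikit-learn; the study's actual classifiers and preprocessing are not reproduced, and the variable names are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def timecourse_decoding(epochs, position_labels, cv=5):
            """Train one classifier per time sample and return the decoding-accuracy
            time course; the latency at which accuracy rises above chance indexes
            when position information becomes available."""
            n_trials, n_channels, n_times = epochs.shape
            accuracy = np.zeros(n_times)
            for t in range(n_times):
                X = epochs[:, :, t]                  # channel pattern at time t
                clf = LogisticRegression(max_iter=1000)
                accuracy[t] = cross_val_score(clf, X, position_labels, cv=cv).mean()
            return accuracy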

  15. Biological corridors and connectivity [Chapter 21

    Treesearch

    Samuel A. Cushman; Brad McRae; Frank Adriaensen; Paul Beier; Mark Shirley; Kathy Zeller

    2013-01-01

    The ability of individual animals to move across complex landscapes is critical for maintaining regional populations in the short term (Fahrig 2003; Cushman 2006), and for species to shift their geographic range in response to climate change (Heller & Zavaleta 2009). As organisms move through spatially complex landscapes, they respond to multiple...

  16. Building Bridges: Transitions from Elementary to Secondary School

    ERIC Educational Resources Information Center

    Tilleczek, Kate

    2008-01-01

    Most young people leave elementary school and move into some form of secondary school during early adolescence. At precisely the time that young people are navigating multiple developmental challenges (social, intellectual, academic, physical), they are expected to move between these institutions of public education. The transition is commonly…

  17. Cooperative Robots to Observe Moving Targets: Review.

    PubMed

    Khan, Asif; Rinner, Bernhard; Cavallaro, Andrea

    2018-01-01

    The deployment of multiple robots for achieving a common goal helps to improve the performance, efficiency, and/or robustness in a variety of tasks. In particular, the observation of moving targets is an important multirobot application that still exhibits numerous open challenges, including the effective coordination of the robots. This paper reviews control techniques for cooperative mobile robots monitoring multiple targets. The simultaneous movement of robots and targets makes this problem particularly interesting, and our review systematically addresses this cooperative multirobot problem for the first time. We classify and critically discuss the control techniques: cooperative multirobot observation of multiple moving targets; cooperative search, acquisition, and track; cooperative tracking; and multirobot pursuit-evasion. We also identify the five major elements that characterize this problem, namely the coordination method, the environment, the target, the robot, and its sensor(s). These elements are used to systematically analyze the control techniques. The majority of the studied work is based on simulation and laboratory studies, which may not accurately reflect real-world operational conditions. Importantly, while our systematic analysis is focused on multitarget observation, our proposed classification is also useful for related multirobot applications.

  18. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences, recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of the position and velocity of the objects. In addition, for each frame, the detection and tracking map was generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
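
    The detection stage can be illustrated with a simple local statistic: the sketch below flags pixels that deviate strongly from their local mean, normalised by the local standard deviation. Window size and threshold are illustrative, this is not the paper's exact statistic, and the registration and tracking stages are omitted.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def coarse_detection_map(frame, win=15, k=3.0):
            """Flag pixels whose deviation from the local mean exceeds k local
            standard deviations (fast, reasonably noise-robust local statistic)."""
            frame = frame.astype(float)
            local_mean = uniform_filter(frame, size=win)
            local_sq_mean = uniform_filter(frame ** 2, size=win)
            local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 1e-12))
            return np.abs(frame - local_mean) > k * local_std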

  19. The fundamentals of average local variance--Part I: Detecting regular patterns.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    The method of average local variance (ALV) computes the mean of the standard deviation values derived for a 3 x 3 moving window on a successively coarsened image to produce a function of ALV versus spatial resolution. In developing ALV, the authors used approximately a doubling of the pixel size at each coarsening of the image. They hypothesized that ALV is low when the pixel size is smaller than the size of scene objects because the pixels on the object will have similar response values. When the pixels and objects are of similar size, they will tend to vary in response and the ALV values will increase. As the size of pixels increases further, more objects will be contained in a single pixel and ALV will decrease. The authors showed that various cover types produced single-peak ALV functions that peaked when the pixel size was 1/2 to 3/4 of the object size. This paper reports on work done to explore the characteristics of the various forms of the ALV function and to understand the location of the peaks that occur in this function. The work was conducted using synthetically generated image data. The investigation showed that the hypothesis as originally proposed is not adequate. A new hypothesis is proposed: that the ALV function has peak locations that are related to the geometric size of pattern structures in the scene. These structures are not always the same as scene objects. Only in cases where the size of and separation between scene objects are equal does the ALV function detect the size of the objects. In situations where the distance between scene objects is larger than their size, the ALV function has a peak at the object separation, not at the object size. This work has also shown that multiple object structures of different sizes and distances in the image produce multiple peaks in the ALV function and that some of these structures are not implicitly recognized as such from our perspective. However, the magnitude of these peaks depends on the response mix in the structures, complicating their interpretation and analysis. The analysis of the ALV function is thus more complex than that generally reported in the literature.
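
    A simplified sketch of the ALV computation, assuming a single-band grayscale image array: each coarsening step roughly doubles the pixel size by 2 x 2 block averaging and records the mean of the 3 x 3 local standard deviation (illustrative code, not the authors' implementation).

        import numpy as np
        from scipy.ndimage import uniform_filter

        def local_std(img, win=3):
            m = uniform_filter(img, size=win)
            m2 = uniform_filter(img ** 2, size=win)
            return np.sqrt(np.maximum(m2 - m ** 2, 0.0))

        def alv_function(img, n_levels=6):
            """Return ALV versus coarsening level (level 0 = original resolution)."""
            img = img.astype(float)
            alv = []
            for _ in range(n_levels):
                alv.append(local_std(img).mean())
                h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
                if min(h, w) < 6:          # stop before the image becomes too small
                    break
                # 2 x 2 block averaging approximately doubles the pixel size.
                img = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            return np.array(alv)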

  20. Direct Position Determination of Multiple Non-Circular Sources with a Moving Coprime Array.

    PubMed

    Zhang, Yankui; Ba, Bin; Wang, Daming; Geng, Wei; Xu, Haiyun

    2018-05-08

    Direct position determination (DPD) is currently a hot topic in wireless localization research as it is more accurate than traditional two-step positioning. However, current DPD algorithms are all based on uniform arrays, which have insufficient degrees of freedom and limited estimation accuracy. To improve the DPD accuracy, this paper introduces a coprime array to the position model of multiple non-circular sources with a moving array. To maximize the advantages of this coprime array, we reconstruct the covariance matrix by vectorization, apply a spatial smoothing technique, and combine the subspace data from each measuring position to establish the cost function. Finally, we obtain the position coordinates of the multiple non-circular sources. The complexity of the proposed method is computed and compared with that of other methods, and the Cramér-Rao lower bound of DPD for multiple sources with a moving coprime array is derived. Theoretical analysis and simulation results show that the proposed algorithm is not only applicable to circular sources, but can also improve the positioning accuracy of non-circular sources. Compared with existing two-step positioning algorithms and DPD algorithms based on uniform linear arrays, the proposed technique offers a significant improvement in positioning accuracy with a slight increase in complexity.

  1. Coming to Terms with the Concept of Moving Species Threatened by Climate Change – A Systematic Review of the Terminology and Definitions

    PubMed Central

    Hällfors, Maria H.; Vaara, Elina M.; Hyvärinen, Marko; Oksanen, Markku; Schulman, Leif E.; Siipi, Helena; Lehvävirta, Susanna

    2014-01-01

    Intentional moving of species threatened by climate change is actively being discussed as a conservation approach. The debate, empirical studies, and policy development, however, are impeded by an inconsistent articulation of the idea. The discrepancy is demonstrated by the varying use of terms, such as assisted migration, assisted colonisation, or managed relocation, and their multiple definitions. Since this conservation approach is novel, and may for instance lead to legislative changes, it is important to aim for terminological consistency. The objective of this study is to analyse the suitability of terms and definitions used when discussing the moving of organisms as a response to climate change. An extensive literature search and review of the material (868 scientific publications) was conducted for finding hitherto used terms (N = 40) and definitions (N = 75), and these were analysed for their suitability. Based on the findings, it is argued that an appropriate term for a conservation approach relating to aiding the movement of organisms harmed by climate change is assisted migration defined as follows: Assisted migration means safeguarding biological diversity through the translocation of representatives of a species or population harmed by climate change to an area outside the indigenous range of that unit where it would be predicted to move as climate changes, were it not for anthropogenic dispersal barriers or lack of time. The differences between assisted migration and other conservation translocations are also discussed. A wide adoption of the clear and distinctive term and definition provided would allow more focused research on the topic and enable consistent implementation as practitioners could have the same understanding of the concept. PMID:25055023

  2. Coming to terms with the concept of moving species threatened by climate change - a systematic review of the terminology and definitions.

    PubMed

    Hällfors, Maria H; Vaara, Elina M; Hyvärinen, Marko; Oksanen, Markku; Schulman, Leif E; Siipi, Helena; Lehvävirta, Susanna

    2014-01-01

    Intentional moving of species threatened by climate change is actively being discussed as a conservation approach. The debate, empirical studies, and policy development, however, are impeded by an inconsistent articulation of the idea. The discrepancy is demonstrated by the varying use of terms, such as assisted migration, assisted colonisation, or managed relocation, and their multiple definitions. Since this conservation approach is novel, and may for instance lead to legislative changes, it is important to aim for terminological consistency. The objective of this study is to analyse the suitability of terms and definitions used when discussing the moving of organisms as a response to climate change. An extensive literature search and review of the material (868 scientific publications) was conducted for finding hitherto used terms (N = 40) and definitions (N = 75), and these were analysed for their suitability. Based on the findings, it is argued that an appropriate term for a conservation approach relating to aiding the movement of organisms harmed by climate change is assisted migration defined as follows: Assisted migration means safeguarding biological diversity through the translocation of representatives of a species or population harmed by climate change to an area outside the indigenous range of that unit where it would be predicted to move as climate changes, were it not for anthropogenic dispersal barriers or lack of time. The differences between assisted migration and other conservation translocations are also discussed. A wide adoption of the clear and distinctive term and definition provided would allow more focused research on the topic and enable consistent implementation as practitioners could have the same understanding of the concept.

  3. Axisymmetric Implementation for 3D-Based DSMC Codes

    NASA Technical Reports Server (NTRS)

    Stewart, Benedicte; Lumpkin, F. E.; LeBeau, G. J.

    2011-01-01

    The primary objective in developing NASA's DSMC Analysis Code (DAC) was to provide a high-fidelity modeling tool for 3D rarefied flows such as vacuum plume impingement and hypersonic re-entry flows [1]. The initial implementation has been expanded over time to offer other capabilities, including a novel axisymmetric implementation. Because of the inherently 3D nature of DAC, this axisymmetric implementation uses a 3D Cartesian domain and 3D surfaces. Molecules are moved in all three dimensions but their movements are limited by physical walls to a small wedge centered on the plane of symmetry (Figure 1). Unfortunately, far from the axis of symmetry, the cell size in the direction perpendicular to the plane of symmetry (the Z-direction) may become large compared to the flow mean free path. This frequently results in inaccuracies in these regions of the domain. A new axisymmetric implementation is presented which aims to solve this issue by using Bird's approach for the molecular movement while preserving the 3D nature of the DAC software [2]. First, the computational domain is similar to that previously used, such that a wedge must still be used to define the inflow surface and solid walls within the domain. As before, molecules are created inside the inflow wedge triangles but they are now rotated back to the symmetry plane. During the move step, molecules are moved in 3D but, instead of interacting with the wedge walls, the molecules are rotated back to the plane of symmetry at the end of the move step. This new implementation was tested for multiple flows over axisymmetric shapes, including a sphere, a cone, a double cone and a hollow cylinder. Comparisons to previous DSMC solutions and experiments, when available, are made.
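
    The rotate-back move step lends itself to a small geometric sketch: after a molecule is moved in 3D, its position and velocity are rotated about the axis of symmetry so that the molecule lies back on the plane of symmetry. The sketch below assumes the x-axis as the symmetry axis and z = 0 as the symmetry plane; it is a geometric illustration only, not DAC code.

        import numpy as np

        def rotate_to_symmetry_plane(pos, vel):
            """Rotate a molecule about the x-axis so it lies on the z = 0 plane,
            applying the same rotation to its velocity (radial distance preserved)."""
            x, y, z = pos
            r = np.hypot(y, z)
            if r == 0.0:
                return np.array([x, 0.0, 0.0]), np.asarray(vel, dtype=float)
            c, s = y / r, z / r                  # cosine/sine of the out-of-plane angle
            vx, vy, vz = vel
            vel_rot = np.array([vx, c * vy + s * vz, -s * vy + c * vz])
            return np.array([x, r, 0.0]), vel_rot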

  4. Moving object detection via low-rank total variation regularization

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Chen, Qian; Shao, Na

    2016-09-01

    Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the l1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly related, which can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g., periodic and random perturbations. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly related background scenes. Instead of the l1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the l1-penalty especially when the outlier is in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
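
    For context, the l1-penalised baseline against which the TV scheme is contrasted can be written compactly as standard RPCA solved by the inexact ALM. The sketch below decomposes a frames-as-columns matrix D into a low-rank background L and a sparse foreground S; it is that baseline, not the paper's TV-regularised model, and the parameter choices are the usual defaults rather than the authors'.

        import numpy as np

        def shrink(M, tau):
            """Elementwise soft thresholding."""
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def rpca_ialm(D, lam=None, max_iter=200, tol=1e-7):
            """Inexact ALM for robust PCA: D is approximately L (low rank) + S (sparse)."""
            m, n = D.shape
            lam = lam or 1.0 / np.sqrt(max(m, n))
            norm_D = np.linalg.norm(D, 'fro')
            Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
            mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5
            L, S = np.zeros_like(D), np.zeros_like(D)
            for _ in range(max_iter):
                # Low-rank update: singular value thresholding
                U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # Sparse update: soft thresholding of the residual
                S = shrink(D - L + Y / mu, lam / mu)
                Z = D - L - S
                Y = Y + mu * Z
                mu = rho * mu
                if np.linalg.norm(Z, 'fro') / norm_D < tol:
                    break
            return L, S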

  5. Transient inactivation of the anterior cingulate cortex in rats disrupts avoidance of a dynamic object.

    PubMed

    Svoboda, Jan; Lobellová, Veronika; Popelíková, Anna; Ahuja, Nikhil; Kelemen, Eduard; Stuchlík, Aleš

    2017-03-01

    Although animals often learn and monitor the spatial properties of relevant moving objects such as conspecifics and predators to properly organize their own spatial behavior, the underlying brain substrate has received little attention and hence remains elusive. Because the anterior cingulate cortex (ACC) participates in conflict monitoring and effort-based decision making, and ACC neurons respond to objects in the environment, it may also play a role in the monitoring of moving cues and exerting the appropriate spatial response. We used a robot avoidance task in which a rat had to maintain at least a 25 cm distance from a small programmable robot to avoid a foot shock. In successive sessions, we trained ten Long Evans male rats to avoid a fast-moving robot (4 cm/s), a stationary robot, and a slow-moving robot (1 cm/s). In each condition, the ACC was transiently inactivated by bilateral injections of muscimol in the penultimate session and a control saline injection was given in the last session. Compared to the corresponding saline session, ACC-inactivated rats received more shocks when tested in the fast-moving condition, but not in the stationary or slow robot conditions. Furthermore, ACC-inactivated rats less frequently responded to an approaching robot with appropriate escape responses although their response to shock stimuli remained preserved. Since we observed no effect on slow or stationary robot avoidance, we conclude that the ACC may exert cognitive efforts for monitoring dynamic updating of the position of an object, a role complementary to the dorsal hippocampus. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Three-dimensional local ALE-FEM method for fluid flow in domains containing moving boundaries/objects interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Monayem, A. K. M.; Mazumder, H.

    2015-03-05

    A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of calculating on domains of complex geometry with moving boundaries or devices that can also have a complex geometry, without danger of the mesh becoming unsuitable due to its continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique for performing simulations involving moving boundaries in a three-dimensional domain.

  7. Captive Bottlenose Dolphins (Tursiops truncatus) Spontaneously Using Water Flow to Manipulate Objects

    PubMed Central

    Yamamoto, Chisato; Furuta, Keisuke; Taki, Michihiro; Morisaka, Tadamichi

    2014-01-01

    Several terrestrial animals and delphinids manipulate objects in a tactile manner, using parts of their bodies such as their mouths or hands. In this paper, we report that bottlenose dolphins (Tursiops truncatus) manipulate objects not by direct bodily contact, but by spontaneously generated water flow. Three of four dolphins at Suma Aqualife Park performed object manipulation with food. The typical sequence of object manipulation consisted of a three-step procedure. First, the dolphins released the object from the sides of their mouths while assuming a head-down posture near the floor. They then manipulated the object around their mouths and caught it. Finally, they ceased their head-down posture and started to swim. When the dolphins moved the object, they used the water current in the pool or moved their heads. These results showed that dolphins manipulate objects using movements that do not directly involve contact between a body part and the object. When the dolphins dropped the object on the floor, they lifted it by generating water flow with one of three methods: opening and closing their mouths repeatedly, moving their heads lengthwise, or making circular head motions. This result suggests that bottlenose dolphins spontaneously change their environment to manipulate objects. The reason aquatic animals such as dolphins manipulate objects by changing their environment, whereas terrestrial animals do not, may be that the viscosity of an aquatic environment is much higher than that of a terrestrial one. This is the first report thus far of any non-human mammal engaging in object manipulation using several methods to change its environment. PMID:25250625

  8. Investigation of an EMI sensor for detection of large metallic objects in the presence of metallic clutter

    NASA Astrophysics Data System (ADS)

    Black, Christopher; McMichael, Ian; Riggs, Lloyd

    2005-06-01

    Electromagnetic induction (EMI) sensors and magnetometers have successfully detected surface laid, buried, and visually obscured metallic objects. Potential military activities could require detection of these objects at some distance from a moving vehicle in the presence of metallic clutter. Results show that existing EMI sensors have limited range capabilities and suffer from false alarms due to clutter. This paper presents results of an investigation of an EMI sensor designed for detecting large metallic objects on a moving platform in a high clutter environment. The sensor was developed by the U.S. Army RDECOM CERDEC NVESD in conjunction with the Johns Hopkins University Applied Physics Laboratory.

  9. Perceiving environmental structure from optical motion

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.

    1991-01-01

    Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.

  10. Evaluation of multiple-frequency, active and passive acoustics as surrogates for bedload transport

    USGS Publications Warehouse

    Wood, Molly S.; Fosness, Ryan L.; Pachman, Gregory; Lorang, Mark; Tonolla, Diego

    2015-01-01

    The use of multiple-frequency, active acoustics through deployment of acoustic Doppler current profilers (ADCPs) shows potential for estimating bedload in selected grain size categories. The U.S. Geological Survey (USGS), in cooperation with the University of Montana (UM), evaluated the use of multiple-frequency, active and passive acoustics as surrogates for bedload transport during a pilot study on the Kootenai River, Idaho, May 17-18, 2012. Four ADCPs with frequencies ranging from 600 to 2000 kHz were used to measure apparent moving bed velocities at 20 stations across the river in conjunction with physical bedload samples. Additionally, UM scientists measured the sound frequencies of moving particles with two hydrophones, considered passive acoustics, along longitudinal transects in the study reach. Some patterns emerged in the preliminary analysis which show promise for future studies. Statistically significant relations were successfully developed between apparent moving bed velocities measured by ADCPs with frequencies 1000 and 1200 kHz and bedload in 0.5 to 2.0 mm grain size categories. The 600 kHz ADCP seemed somewhat sensitive to the movement of gravel bedload in the size range 8.0 to 31.5 mm, but the relation was not statistically significant. The passive hydrophone surveys corroborated the sample results and could be used to map spatial variability in bedload transport and to select a measurement cross-section with moving bedload for active acoustic surveys and physical samples.

  11. A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the localization of moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing moving objects with an alternating magnetic field is transformed into a localization problem with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
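
    A generic magnetostatic sketch of the Levenberg-Marquardt fitting step: given three-component field measurements made at several known sensor-target geometries, the dipole position and moment are recovered by nonlinear least squares. The measurement model here is the standard static dipole field, not the paper's coherent-demodulation model, and all names and units are illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        MU0_4PI = 1e-7   # mu0 / (4 pi) in SI units

        def dipole_field(sensor_pos, dipole_pos, moment):
            """Magnetic field of a point dipole evaluated at a sensor location."""
            r = np.asarray(sensor_pos, float) - np.asarray(dipole_pos, float)
            d = np.linalg.norm(r)
            r_hat = r / d
            return MU0_4PI * (3.0 * np.dot(moment, r_hat) * r_hat - moment) / d ** 3

        def locate_dipole(sensor_positions, measured_fields, x0):
            """Fit dipole position and moment (6 parameters) with Levenberg-Marquardt;
            needs at least two three-component measurements so that the residual
            count is not smaller than the parameter count."""
            def residuals(x):
                pos, mom = x[:3], x[3:]
                pred = np.array([dipole_field(s, pos, mom) for s in sensor_positions])
                return (pred - np.asarray(measured_fields)).ravel()
            sol = least_squares(residuals, x0, method='lm')
            return sol.x[:3], sol.x[3:]   # estimated position and moment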

  12. Moving shadows contribute to the corridor illusion in a chimpanzee (Pan troglodytes).

    PubMed

    Imura, Tomoko; Tomonaga, Masaki

    2009-08-01

    Previous studies have reported that backgrounds depicting linear perspective and texture gradients influence relative size discrimination in nonhuman animals (known as the "corridor illusion"), but research has not yet identified the other kinds of depth cues contributing to the corridor illusion. This study examined the effects of linear perspective and shadows on the responses of a chimpanzee (Pan troglodytes) to the corridor illusion. The performance of the chimpanzee was worse when a smaller object was presented at the farther position on a background reflecting a linear perspective, implying that the corridor illusion was replicated in the chimpanzee (Imura, Tomonaga, & Yagi, 2008). The extent of the illusion changed as a function of the position of the shadows cast by the objects only when the shadows were moving in synchrony with the objects. These findings suggest that moving shadows and linear perspective contributed to the corridor illusion in a chimpanzee. Copyright 2009 APA, all rights reserved.

  13. Linkage of additional contents to moving objects and video shots in a generic media framework for interactive television

    NASA Astrophysics Data System (ADS)

    Lopez, Alejandro; Noe, Miquel; Fernandez, Gabriel

    2004-10-01

    The GMF4iTV project (Generic Media Framework for Interactive Television) is an IST European project that consists of an end-to-end broadcasting platform providing interactivity on heterogeneous multimedia devices such as Set-Top-Boxes and PCs according to the Multimedia Home Platform (MHP) standard from DVB. This platform allows content providers to create enhanced audiovisual content with a degree of interactivity at the level of individual moving objects or shot changes within a video. The end user is then able to interact with moving objects from the video or with individual shots, enjoying additional content associated with them (MHP applications, HTML pages, JPEG, MPEG4 files...). This paper focuses on the issues related to metadata and content transmission, synchronization, signaling and bitrate allocation in the GMF4iTV project.

  14. Eye tracking a self-moved target with complex hand-target dynamics

    PubMed Central

    Landelle, Caroline; Montagnini, Anna; Madelain, Laurent

    2016-01-01

    Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129

  15. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system has been proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system in real-time object detection and recognition over thousands of image frames.

  16. Differential responses in dorsal visual cortex to motion and disparity depth cues

    PubMed Central

    Arnoldussen, David M.; Goossens, Jeroen; van den Berg, Albert V.

    2013-01-01

    We investigated how interactions between monocular motion parallax and binocular cues to depth vary in human motion areas for wide-field visual motion stimuli (110 × 100°). We used fMRI with an extensive 2 × 3 × 2 factorial blocked design in which we combined two types of self-motion (translational motion and translational + rotational motion), with three categories of motion inflicted by the degree of noise (self-motion, distorted self-motion, and multiple object-motion), and two different view modes of the flow patterns (stereo and synoptic viewing). Interactions between disparity and motion category revealed distinct contributions to self- and object-motion processing in 3D. For cortical areas V6 and CSv, but not the anterior part of MT+ with bilateral visual responsiveness (MT+/b), we found a disparity-dependent effect of rotational flow and noise: When self-motion perception was degraded by adding rotational flow and moderate levels of noise, the BOLD responses were reduced compared with translational self-motion alone, but this reduction was cancelled by adding stereo information which also rescued the subject's self-motion percept. At high noise levels, when the self-motion percept gave way to a swarm of moving objects, the BOLD signal strongly increased compared to self-motion in areas MT+/b and V6, but only for stereo in the latter. BOLD response did not increase for either view mode in CSv. These different response patterns indicate different contributions of areas V6, MT+/b, and CSv to the processing of self-motion perception and the processing of multiple independent motions. PMID:24339808

  17. Three-dimensional multiple object tracking in the pediatric population: the NeuroTracker and its promising role in the management of mild traumatic brain injury.

    PubMed

    Corbin-Berrigan, Laurie-Ann; Kowalski, Kristina; Faubert, Jocelyn; Christie, Brian; Gagnon, Isabelle

    2018-05-02

    As mild traumatic brain injury (mTBI) affects hundreds of thousands of children and their families each year, investigation of potential mTBI assessments and treatments is an important research target. Three-dimensional multiple object tracking (3D-MOT), where an individual must allocate attention to moving objects within 3D space, is one potentially promising assessment and treatment tool. To date, no research has looked at 3D-MOT in a pediatric mTBI population. Thus, the aim of this study was to examine 3D-MOT learning in children and youth with and without mTBI. Thirty-four participants (mean age=14.69±2.46 years), with and without mTBI, underwent six visits of 3D-MOT. A two-way repeated-measures analysis of variance (ANOVA) showed a significant time effect, a nonsignificant group effect, and a nonsignificant group-by-time interaction on absolute speed thresholds. In contrast, significant group and time effects and a significant group-by-time interaction on normalized speed thresholds were found. Individuals with mTBI showed smaller training gains at visit 2 than healthy controls, but the groups did not differ on the remaining visits. Although youth can significantly improve their 3D-MOT performance following mTBI, similar to noninjured individuals, they show slower speed of processing in the first few training sessions. This preliminary work suggests that using a 3D-MOT paradigm to train visual perception after mTBI may be beneficial for both stimulating recovery and informing return to activity decisions.

  18. Magnetic levitation system for moving objects

    DOEpatents

    Post, R.F.

    1998-03-03

    Repelling magnetic forces are produced by the interaction of a flux-concentrated magnetic field (produced by permanent magnets or electromagnets) with an inductively loaded closed electric circuit. When one such element moves with respect to the other, a current is induced in the circuit. This current then interacts back on the field to produce a repelling force. These repelling magnetic forces are applied to magnetically levitate a moving object such as a train car. The power required to levitate a train of such cars is drawn from the motional energy of the train itself, and typically represents only a percent or two of the several megawatts of power required to overcome aerodynamic drag at high speeds. 7 figs.

  19. Magnetic levitation system for moving objects

    DOEpatents

    Post, Richard F.

    1998-01-01

    Repelling magnetic forces are produced by the interaction of a flux-concentrated magnetic field (produced by permanent magnets or electromagnets) with an inductively loaded closed electric circuit. When one such element moves with respect to the other, a current is induced in the circuit. This current then interacts back on the field to produce a repelling force. These repelling magnetic forces are applied to magnetically levitate a moving object such as a train car. The power required to levitate a train of such cars is drawn from the motional energy of the train itself, and typically represents only a percent or two of the several megawatts of power required to overcome aerodynamic drag at high speeds.

  20. Synchronizing Self and Object Movement: How Child and Adult Cyclists Intercept Moving Gaps in a Virtual Environment

    ERIC Educational Resources Information Center

    Chihak, Benjamin J.; Plumert, Jodie M.; Ziemer, Christine J.; Babu, Sabarish; Grechkin, Timofey; Cremer, James F.; Kearney, Joseph K.

    2010-01-01

    Two experiments examined how 10- and 12-year-old children and adults intercept moving gaps while bicycling in an immersive virtual environment. Participants rode an actual bicycle along a virtual roadway. At 12 test intersections, participants attempted to pass through a gap between 2 moving, car-sized blocks without stopping. The blocks were…

  1. Rhetorical Moves in Problem Statement Section of Iranian EFL Postgraduate Students' Theses

    ERIC Educational Resources Information Center

    Nimehchisalem, Vahid; Tarvirdizadeh, Zahra; Paidary, Sara Sayed; Binti Mat Hussin, Nur Izyan Syamimi

    2016-01-01

    The Problem Statement (PS) section of a thesis, usually a subsection of the first chapter, is supposed to justify the objectives of the study. Postgraduate students are often ignorant of the rhetorical moves that they are expected to make in their PS. This descriptive study aimed to explore the rhetorical moves of the PS in Iranian master's (MA)…

  2. Monitoring Moving Queries inside a Safe Region

    PubMed Central

    Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan

    2014-01-01

    With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
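
    The exit-prediction idea can be sketched with simple geometry: extrapolate the query linearly along its estimated direction and solve for the time at which it crosses the boundary of a circular safe region. This is an illustration only; the paper's safe regions and direction estimate obtained from periodic monitoring are more general.

        import numpy as np

        def time_to_exit(position, velocity, centre, radius):
            """Return the time at which a linearly moving query leaves the circular
            safe region (centre, radius), or None if it never does."""
            p = np.asarray(position, float) - np.asarray(centre, float)
            v = np.asarray(velocity, float)
            a = v @ v
            if a == 0.0:
                return None                       # the query is not moving
            b = 2.0 * (p @ v)
            c = p @ p - radius ** 2
            disc = b * b - 4.0 * a * c
            if disc < 0.0:
                return None
            t = (-b + np.sqrt(disc)) / (2.0 * a)  # larger root: future boundary crossing
            return t if t > 0.0 else None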

  3. Internal model of gravity for hand interception: parametric adaptation to zero-gravity visual targets on Earth.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-08-01

    An internal model is a neural mechanism that mimics the dynamics of an object for sensorimotor or cognitive functions. Recent research focuses on the issue of whether multiple internal models are learned and switched to cope with a variety of conditions, or whether a single general model is adapted by tuning its parameters. Here we addressed this issue by investigating how the manual interception of a moving target changes with changes in the visual environment. In our paradigm, a virtual target moves vertically downward on a screen with different laws of motion. Subjects are asked to punch a hidden ball that arrives in synchrony with the visual target. Using several different protocols, we systematically found that subjects do not develop a new internal model appropriate for constant-speed targets; instead, they use the default gravity model and reduce the central processing time. The results imply that adaptation to zero-gravity targets involves a compression of temporal processing through the cortical and subcortical regions interconnected with the vestibular cortex, which has previously been shown to be the site of storage of the internal model of gravity.

  4. Seal Investigations of an Active Clearance Control System Concept

    NASA Technical Reports Server (NTRS)

    Steinetz, Bruce M.; Taylor, Shawn; Oswald, Jay; DeCastro, Jonathan A.

    2006-01-01

    In an effort to improve upon current thermal active clearance control methods, a first generation, fast-acting mechanically actuated, active clearance control system has been designed and installed into a non-rotating test rig. In order to harvest the benefit of tighter blade tip clearances, low-leakage seals are required for the actuated carrier segments of the seal shroud to prevent excessive leakage of compressor discharge (P3) cooling air. The test rig was designed and fabricated to facilitate the evaluation of these types of seals, identify seal leakage sources, and test other active clearance control system concepts. The objective of this paper is to present both experimental and analytical investigations into the nature of the face-seal to seal-carrier interface. Finite element analyses were used to examine face seal contact pressures and edge-loading under multiple loading conditions, varied E-seal positions and two new face seal heights. The analyses indicated that moving the E-seal inward radially and reducing face seal height would lead to more uniform contact conditions between the face seal and the carriers. Lab testing confirmed that moving the balance diameter inward radially caused a decrease in overall system leakage.

  5. Epidemiology of mixed martial arts and youth violence in an ethnically diverse sample.

    PubMed

    Hishinuma, Earl S; Umemoto, Karen N; Nguyen, Toan Gia; Chang, Janice Y; Bautista, Randy Paul M

    2012-01-01

    Mixed martial arts' (MMAs) growing international popularity has rekindled the discussion on the advantages (e.g., exercise) and disadvantages (e.g., possible injury) of contact sports. This study was the first of its kind to examine the psychosocial aspects of MMA and youth violence using an epidemiologic approach with an Asian American and Pacific Islander (AAPI) adolescent sample (N = 881). The results were consistent with the increased popularity of MMA with 52% (adolescent males = 73%, adolescent females = 39%) enjoying watching MMA and 24% (adolescent males = 39%, adolescent females = 13%) practicing professional fight moves with friends. Although statistically significant ethnic differences were found for the two MMA items on a bivariate level, these findings were not statistically significant when considering other variables in the model. The bivariate results revealed a cluster of risk-protective factors. Regarding the multiple regression findings, although enjoying watching MMA remained associated with positive attitudes toward violence and practicing fight moves remained associated with negative out-group orientation, the MMA items were not associated with unique variances of youth violence perpetration and victimization. Implications included the need for further research that includes other diverse samples, more comprehensive and objective MMA and violence measures, and observational and intervention longitudinal studies.

  6. Semi-Autonomous Vehicle Project

    NASA Technical Reports Server (NTRS)

    Stewart, Christopher

    2016-01-01

    The primary objective this summer was "evaluating standards for wireless architecture for the Internet of Things". The Internet of Things is the network of physical objects or "things" embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data and make decisions based on those data. This was accomplished by creating a semi-autonomous vehicle that combined multiple sensors, cameras, and onboard computers with a mesh network, enabling communication across large distances with little to no interruption. The mesh network took advantage of DTN (Disruption Tolerant Networking), which according to NASA is the new communications protocol that is "the first step towards interplanetary internet." DTN stores information if an interruption in communications is detected and can forward that information via other relays within range so that the data are not lost. This translates well to the project: as the car moves farther away from whatever is sending it commands (in this case a joystick), the information can still be forwarded to the car with little to no loss thanks to the mesh nodes around the driving area.

  7. Independent synchronized control and visualization of interactions between living cells and organisms.

    PubMed

    Rouger, Vincent; Bordet, Guillaume; Couillault, Carole; Monneret, Serge; Mailfert, Sébastien; Ewbank, Jonathan J; Pujol, Nathalie; Marguet, Didier

    2014-05-20

    To investigate the early stages of cell-cell interactions occurring between living biological samples, imaging methods with appropriate spatiotemporal resolution are required. Among the techniques currently available, those based on optical trapping are promising. Methods to image trapped objects, however, in general suffer from a lack of three-dimensional resolution, due to technical constraints. Here, we have developed an original setup comprising two independent modules: holographic optical tweezers, which offer a versatile and precise way to move multiple objects simultaneously but independently, and a confocal microscope that provides fast three-dimensional image acquisition. The optical decoupling of these two modules through the same objective gives users the possibility to easily investigate very early steps in biological interactions. We illustrate the potential of this setup with an analysis of infection by the fungus Drechmeria coniospora of different developmental stages of Caenorhabditis elegans. This has allowed us to identify specific areas on the nematode's surface where fungal spores adhere preferentially. We also quantified this adhesion process for different mutant nematode strains, and thereby derive insights into the host factors that mediate fungal spore adhesion. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  8. Lean and Efficient Software: Whole Program Optimization of Executables

    DTIC Science & Technology

    2016-12-31

    format string “baked in”? (If multiple printf calls pass the same format string, they could share the same new function.) This leads to the... format string becomes baked into the target function. Moving down: moving from the first row to the second makes any potential user control of the

  9. SU-G-BRA-14: Dose in a Rigidly Moving Phantom with Jaw and MLC Compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, E; Lucas, D

    Purpose: To validate dose calculation for a rigidly moving object with jaw motion and MLC shifts to compensate for the motion in a TomoTherapy™ treatment delivery. Methods: An off-line version of the TomoTherapy dose calculator was extended to perform dose calculations for rigidly moving objects. A variety of motion traces were added to treatment delivery plans, along with corresponding jaw compensation and MLC shift compensation profiles. Jaw compensation profiles were calculated by shifting the jaws such that the center of the treatment beam moved by an amount equal to the motion in the longitudinal direction. Similarly, MLC compensation profiles were calculated by shifting the MLC leaves by an amount that most closely matched the motion in the transverse direction. The same jaw and MLC compensation profiles were used during simulated treatment deliveries on a TomoTherapy system, and film measurements were obtained in a rigidly moving phantom. Results: The off-line TomoTherapy dose calculator accurately predicted dose profiles for a rigidly moving phantom along with jaw motion and MLC shifts to compensate for the motion. Calculations matched film measurements to within 2%/1 mm. Jaw and MLC compensation substantially reduced the discrepancy between the delivered dose distribution and the calculated dose with no motion. For axial motion, the compensated dose matched the no-motion dose within 2%/1 mm. For transverse motion, the dose matched within 2%/3 mm (approximately half the width of an MLC leaf). Conclusion: The off-line TomoTherapy dose calculator accurately computes dose delivered to a rigidly moving object, and accurately models the impact of moving the jaws and shifting the MLC leaf patterns to compensate for the motion. Jaw tracking and MLC leaf shifting can effectively compensate for the dosimetric impact of motion during a TomoTherapy treatment delivery.
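
    A minimal sketch of the compensation logic described above, under assumed parameter values (the leaf width and function names are hypothetical, and the actual TomoTherapy calculator is far more detailed): the jaw offset follows the longitudinal motion directly, while the MLC pattern is shifted by the whole number of leaves closest to the transverse motion.

    ```python
    def compensation_profiles(motion_long_mm, motion_trans_mm, leaf_width_mm=6.25):
        """Toy jaw/MLC compensation for a rigid longitudinal + transverse motion trace.

        motion_long_mm : longitudinal displacement per projection (mm)
        motion_trans_mm: transverse displacement per projection (mm)
        Returns per-projection jaw offsets (mm) and MLC shifts (whole leaves).
        """
        jaw_offsets = [m for m in motion_long_mm]                          # jaws follow the motion exactly
        leaf_shifts = [round(m / leaf_width_mm) for m in motion_trans_mm]  # nearest whole leaf
        return jaw_offsets, leaf_shifts

    # Example: 3 mm longitudinal drift and 4 mm transverse drift at one projection.
    print(compensation_profiles([3.0], [4.0]))   # -> ([3.0], [1])
    ```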

  10. Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow

    PubMed Central

    Katsuyama, Narumi; Usui, Nobuo; Taira, Masato

    2016-01-01

    A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999

  11. Automated multiple target detection and tracking in UAV videos

    NASA Astrophysics Data System (ADS)

    Mao, Hongwei; Yang, Chenhui; Abousleman, Glen P.; Si, Jennie

    2010-04-01

    In this paper, a novel system is presented to detect and track multiple targets in Unmanned Air Vehicles (UAV) video sequences. Since the output of the system is based on target motion, we first segment foreground moving areas from the background in each video frame using background subtraction. To stabilize the video, a multi-point-descriptor-based image registration method is performed where a projective model is employed to describe the global transformation between frames. For each detected foreground blob, an object model is used to describe its appearance and motion information. Rather than immediately classifying the detected objects as targets, we track them for a certain period of time and only those with qualified motion patterns are labeled as targets. In the subsequent tracking process, a Kalman filter is assigned to each tracked target to dynamically estimate its position in each frame. Blobs detected at a later time are used as observations to update the state of the tracked targets to which they are associated. The proposed overlap-rate-based data association method considers the splitting and merging of the observations, and therefore is able to maintain tracks more consistently. Experimental results demonstrate that the system performs well on real-world UAV video sequences. Moreover, careful consideration given to each component in the system has made the proposed system feasible for real-time applications.
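
    The overlap-rate data association mentioned above can be illustrated with a short sketch (simplified, hypothetical code rather than the authors' implementation): predicted track boxes from the Kalman filters are greedily paired with detected foreground blobs by intersection-over-union, and each matched blob then serves as the observation for its track's update.

    ```python
    def iou(a, b):
        """Overlap rate (intersection over union) of two boxes given as (x1, y1, x2, y2)."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter) if inter > 0 else 0.0

    def associate(predicted, detected, threshold=0.3):
        """Greedily pair predicted track boxes with detected blobs by overlap rate."""
        pairs, used = [], set()
        for ti, tbox in enumerate(predicted):
            best, best_j = threshold, None
            for j, dbox in enumerate(detected):
                if j in used:
                    continue
                score = iou(tbox, dbox)
                if score > best:
                    best, best_j = score, j
            if best_j is not None:
                pairs.append((ti, best_j))
                used.add(best_j)
        return pairs   # each matched blob becomes the Kalman observation for its track
    ```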

  12. Accuracy and precision of smartphone applications and commercially available motion sensors in multiple sclerosis

    PubMed Central

    Balto, Julia M; Kinnett-Hopkins, Dominique L

    2016-01-01

    Background There is increased interest in the application of smartphone applications and wearable motion sensors among multiple sclerosis (MS) patients. Objective This study examined the accuracy and precision of common smartphone applications and motion sensors for measuring steps taken by MS patients while walking on a treadmill. Methods Forty-five MS patients (Expanded Disability Status Scale (EDSS) = 1.0–5.0) underwent two 500-step walking trials at comfortable walking speed on a treadmill. Participants wore five motion sensors: the Digi-Walker SW-200 pedometer (Yamax), the UP2 and UP Move (Jawbone), and the Flex and One (Fitbit). The smartphone applications were Health (Apple), Health Mate (Withings), and Moves (ProtoGeo Oy). Results The Fitbit One had the best absolute (mean = 490.6 steps, 95% confidence interval (CI) = 485.6–495.5 steps) and relative accuracy (1.9% error), and absolute (SD = 16.4) and relative precision (coefficient of variation (CV) = 0.0), for the first 500-step walking trial; this was repeated with the second trial. Relative accuracy was correlated with slower walking speed for the first (rs = −.53) and second (rs = −.53) trials. Conclusion The results suggest that the waist-worn Fitbit One is the most precise and accurate sensor for measuring steps when walking on a treadmill, but future research is needed (testing the device across a broader range of disability, at different speeds, and in real-life walking conditions) before inclusion in clinical research and practice with MS patients. PMID:28607720
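
    The two accuracy/precision metrics reported above are straightforward to compute; the sketch below uses made-up step counts purely for illustration (relative accuracy is the percent error of the mean against the known 500 steps, relative precision is the coefficient of variation):

    ```python
    def accuracy_precision(counts, true_steps=500):
        """Absolute/relative accuracy and precision of device step counts
        against a known 500-step treadmill trial (example values are made up)."""
        n = len(counts)
        mean = sum(counts) / n
        sd = (sum((c - mean) ** 2 for c in counts) / (n - 1)) ** 0.5
        pct_error = abs(mean - true_steps) / true_steps * 100   # relative accuracy
        cv = sd / mean                                          # relative precision
        return mean, pct_error, sd, cv

    print(accuracy_precision([488, 495, 491, 502, 479]))   # mean 491, ~1.8% error
    ```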

  13. Sensorimotor Synchronization with Different Metrical Levels of Point-Light Dance Movements.

    PubMed

    Su, Yi-Huang

    2016-01-01

    Rhythm perception and synchronization have been extensively investigated in the auditory domain, as they underlie means of human communication such as music and speech. Although recent studies suggest comparable mechanisms for synchronizing with periodically moving visual objects, the extent to which it applies to ecologically relevant information, such as the rhythm of complex biological motion, remains unknown. The present study addressed this issue by linking rhythm of music and dance in the framework of action-perception coupling. As a previous study showed that observers perceived multiple metrical periodicities in dance movements that embodied this structure, the present study examined whether sensorimotor synchronization (SMS) to dance movements resembles what is known of auditory SMS. Participants watched a point-light figure performing two basic steps of Swing dance cyclically, in which the trunk bounced at every beat and the limbs moved at every second beat, forming two metrical periodicities. Participants tapped synchronously to the bounce of the trunk with or without the limbs moving in the stimuli (Experiment 1), or tapped synchronously to the leg movements with or without the trunk bouncing simultaneously (Experiment 2). Results showed that, while synchronization with the bounce (lower-level pulse) was not influenced by the presence or absence of limb movements (metrical accent), synchronization with the legs (beat) was improved by the presence of the bounce (metrical subdivision) across different movement types. The latter finding parallels the "subdivision benefit" often demonstrated in auditory tasks, suggesting common sensorimotor mechanisms for visual rhythms in dance and auditory rhythms in music.

  14. Core outcome measures for exercise studies in people with multiple sclerosis: recommendations from a multidisciplinary consensus meeting.

    PubMed

    Paul, Lorna; Coote, Susan; Crosbie, Jean; Dixon, Diane; Hale, Leigh; Holloway, Ed; McCrone, Paul; Miller, Linda; Saxton, John; Sincock, Caroline; White, Lesley

    2014-10-01

    Evidence shows that exercise is beneficial for people with multiple sclerosis (MS); however, statistical pooling of data is difficult because of the diversity of outcome measures used. The objective of this review is to report the recommendations of an International Consensus Meeting for a core set of outcome measures for use in exercise studies in MS. From the 100 categories of the International Classification of Function Core Sets for MS, 57 categories were considered as likely/potentially likely to be affected by exercise and were clustered into seven core groups. Outcome measures to address each group were evaluated regarding, for example, psychometric properties. The following are recommended: Modified Fatigue Impact Scale (MFIS) or Fatigue Severity Scale (FSS) for energy and drive, 6-Minute Walk Test (6MWT) for exercise tolerance, Timed Up and Go (TUG) for muscle function and moving around, Multiple Sclerosis Impact Scale (MSIS-29) or Multiple Sclerosis Quality of Life-54 Instrument (MSQoL54) for quality of life and body mass index (BMI) or waist-hip ratio (WHR) for the health risks associated with excess body fat. A cost effectiveness analysis and qualitative evaluation should be included where possible. Using these core measures ensures that future meta-analyses of exercise studies in MS are more robust and thus more effectively inform practice. © The Author(s) 2014.

  15. Perceived and objective entrance-related environmental barriers and daily out-of-home mobility in community-dwelling older people.

    PubMed

    Portegijs, Erja; Rantakokko, Merja; Viljanen, Anne; Rantanen, Taina; Iwarsson, Susanne

    We studied whether entrance-related environmental barriers, perceived and objectively recorded, were associated with moving out-of-home daily in older people with and without limitations in lower extremity performance. Cross-sectional analyses of the "Life-space mobility in old age" cohort including 848 community-dwelling 75- to 90-year-olds in central Finland. Participants reported their frequency of moving out-of-home (daily vs. 0-6 times/week) and perceived entrance-related environmental barriers (yes/no). Lower extremity performance was assessed (Short Physical Performance Battery) and categorized as poorer (score 0-9) or good (score 10-12). Environmental barriers at entrances and in exterior surroundings were objectively registered (Housing Enabler screening tool) and divided into tertiles. Logistic regression analyses were adjusted for age, sex, number of chronic diseases, cognitive function, month of assessment, type of neighborhood, and years lived in the current home. A median of 6 environmental barriers were objectively recorded at home entrances and 5 in the exterior surroundings, and 20% of the participants perceived entrance-related barriers. The odds for moving out-of-home less than daily increased when participants perceived entrance-related barrier(s) or when they lived in homes with higher numbers of objectively recorded environmental barriers at entrances. Participants with limitations in lower extremity performance were more susceptible to these environmental barriers. Objectively recorded environmental barriers in the exterior surroundings did not compromise out-of-home mobility. Entrance-related environmental barriers may hinder community-dwelling older people from moving out-of-home daily, especially when their functional capacity is compromised. Potentially, reducing entrance-related barriers may help to prevent confinement to the home. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  16. Photonic Doppler velocimetry lens array probe incorporating stereo imaging

    DOEpatents

    Malone, Robert M.; Kaufman, Morris I.

    2015-09-01

    A probe including a multiple lens array is disclosed to measure velocity distribution of a moving surface along many lines of sight. Laser light, directed to the moving surface is reflected back from the surface and is Doppler shifted, collected into the array, and then directed to detection equipment through optic fibers. The received light is mixed with reference laser light and using photonic Doppler velocimetry, a continuous time record of the surface movement is obtained. An array of single-mode optical fibers provides an optic signal to the multiple lens array. Numerous fibers in a fiber array project numerous rays to establish many measurement points at numerous different locations. One or more lens groups may be replaced with imaging lenses so a stereo image of the moving surface can be recorded. Imaging a portion of the surface during initial travel can determine whether the surface is breaking up.

  17. The effects of family, school, and classroom ecologies on changes in children's social competence and emotional and behavioral problems in first grade.

    PubMed

    Hoglund, Wendy L; Leadbeater, Bonnie J

    2004-07-01

    This study tested the independent and interactive influences of classroom (concentrations of peer prosocial behaviors and victimization), family (household moves, mothers' education), and school (proportion of students receiving income assistance) ecologies on changes in children's social competence (e.g., interpersonal skills, leadership abilities), emotional problems (e.g., anxious, withdrawn behaviors), and behavioral problems (e.g., disruptiveness, aggressiveness) in first grade. Higher classroom concentrations of prosocial behaviors and victimization predicted increases in social competence, and greater school disadvantage predicted decreases. Multiple household moves and greater school disadvantage predicted increases in behavioral problems. Multiple household moves and low levels of mothers' education predicted increases in emotional problems for children in classrooms with few prosocial behaviors. Greater school disadvantage predicted increases in emotional problems for children in classrooms with low prosocial behaviors and high victimization. Policy implications of these findings are considered. Copyright 2004 APA, all rights reserved

  18. Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Rasmussen, Ian P.

    2010-01-01

    The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…

  19. Method for accurately positioning a device at a desired area of interest

    DOEpatents

    Jones, Gary D.; Houston, Jack E.; Gillen, Kenneth T.

    2000-01-01

    A method for positioning a first device utilizing a surface having a viewing translation stage, the surface being movable between a first position where the viewing stage is in operational alignment with a first device and a second position where the viewing stage is in operational alignment with a second device. The movable surface is placed in the first position and an image is produced with the first device of an identifiable characteristic of a calibration object on the viewing stage. The movable surface is then placed in the second position and only the second device is moved until an image of the identifiable characteristic in the second device matches the image from the first device. The calibration object is then replaced on the stage of the surface with a test object, and the viewing translation stage is adjusted until the second device images the area of interest. The surface is then moved to the first position where the test object is scanned with the first device to image the area of interest. An alternative embodiment where the devices move is also disclosed.

  20. Simultaneous 3D-vibration measurement using a single laser beam device

    NASA Astrophysics Data System (ADS)

    Brecher, Christian; Guralnik, Alexander; Baümler, Stephan

    2012-06-01

    Today's commercial solutions for vibration measurement and modal analysis are 3D-scanning laser Doppler vibrometers, mainly used for open surfaces in the automotive and aerospace industries, and classic three-axial accelerometers, used in civil engineering, for most industrial applications in manufacturing environments, and particularly for partially closed structures. This paper presents a novel measurement approach using a single laser beam device and optical reflectors to simultaneously perform 3D dynamic measurement as well as geometry measurement of the investigated object. We show the application of this so-called laser tracker for modal testing of structures on a mechanical manufacturing shop floor. A holistic measurement method is developed, comprising manual reflector placement, semi-automated geometric modeling of the investigated objects, and fully automated vibration measurement up to 1000 Hz and down to amplitudes of a few microns. Additionally, a fast-setup dynamic measurement of moving objects using a tracking technique is presented that uses only the device's own functionalities and requires neither a predefined moving path of the target nor electronic synchronization to the moving object.

  1. Moving Base Simulation of an ASTOVL Lift-Fan Aircraft

    DOT National Transportation Integrated Search

    1995-08-01

    Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft was conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to: (1) assess ...

  2. The embodied dynamics of perceptual causality: a slippery slope?

    PubMed Central

    Amorim, Michel-Ange; Siegler, Isabelle A.; Baurès, Robin; Oliveira, Armando M.

    2015-01-01

    In Michotte's launching displays, while the launcher (object A) seems to move autonomously, the target (object B) seems to be displaced passively. However, the impression of A actively launching B does not persist beyond a certain distance identified as the “radius of action” of A over B. If the target keeps moving beyond the radius of action, it loses its passivity and seems to move autonomously. Here, we manipulated implied friction by drawing (or not) a surface upon which A and B are traveling, and by varying the inclination of this surface in screen- and earth-centered reference frames. Among 72 participants (n = 52 in Experiment 1; n = 20 in Experiment 2), we show that both physical embodiment of the event (looking straight ahead at a screen displaying the event on a vertical plane vs. looking downwards at the event displayed on a horizontal plane) and contextual information (objects moving along a depicted surface or in isolation) affect interpretation of the event and modulate the radius of action of the launcher. Using classical mechanics equations, we show that representational consistency of friction from radius of action responses emphasizes the embodied nature of frictional force in our cognitive architecture. PMID:25954235

  3. The embodied dynamics of perceptual causality: a slippery slope?

    PubMed

    Amorim, Michel-Ange; Siegler, Isabelle A; Baurès, Robin; Oliveira, Armando M

    2015-01-01

    In Michotte's launching displays, while the launcher (object A) seems to move autonomously, the target (object B) seems to be displaced passively. However, the impression of A actively launching B does not persist beyond a certain distance identified as the "radius of action" of A over B. If the target keeps moving beyond the radius of action, it loses its passivity and seems to move autonomously. Here, we manipulated implied friction by drawing (or not) a surface upon which A and B are traveling, and by varying the inclination of this surface in screen- and earth-centered reference frames. Among 72 participants (n = 52 in Experiment 1; n = 20 in Experiment 2), we show that both physical embodiment of the event (looking straight ahead at a screen displaying the event on a vertical plane vs. looking downwards at the event displayed on a horizontal plane) and contextual information (objects moving along a depicted surface or in isolation) affect interpretation of the event and modulate the radius of action of the launcher. Using classical mechanics equations, we show that representational consistency of friction from radius of action responses emphasizes the embodied nature of frictional force in our cognitive architecture.

  4. Incidents Prediction in Road Junctions Using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed

    2018-05-01

    The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, an IDS is in no case a replacement for classical monitoring by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are typically watched by multiple cameras but few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a model of the trajectories and their characteristics, then build a learning database of valid and invalid trajectories, and finally carry out a comparative study to find the neural network architecture that maximizes the recognition rate of valid and invalid trajectories.
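
    The final classification step can be sketched as follows (a hypothetical simplification using scikit-learn; the feature set and architecture in the paper may differ): each trajectory is reduced to a feature vector and a small feed-forward network is trained to label it valid or invalid.

    ```python
    from sklearn.neural_network import MLPClassifier

    # Each trajectory is summarised by a hypothetical feature vector, e.g.
    # [mean speed, heading variance, curvature, stop count]; label 1 = invalid/incident-prone.
    X_train = [[12.0, 0.1, 0.02, 0], [4.0, 0.9, 0.30, 3],
               [11.5, 0.2, 0.03, 1], [3.5, 1.1, 0.25, 4]]
    y_train = [0, 1, 0, 1]

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X_train, y_train)
    print(clf.predict([[10.0, 0.15, 0.02, 0]]))   # expected: valid trajectory (0)
    ```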

  5. A compact time reversal emitter-receiver based on a leaky random cavity

    PubMed Central

    Luong, Trung-Dung; Hies, Thomas; Ohl, Claus-Dieter

    2016-01-01

    Time reversal acoustics (TRA) has gained widespread applications for communication and measurements. In general, a scattering medium in combination with multiple transducers is needed to achieve a sufficiently large acoustical aperture. In this paper, we report an implementation for a cost-effective and compact time reversal emitter-receiver driven by a single piezoelectric element. It is based on a leaky cavity with random 3-dimensional printed surfaces. The random surfaces greatly increase the spatio-temporal focusing quality as compared to flat surfaces and allow the focus of an acoustic beam to be steered over an angle of 41°. We also demonstrate its potential use as a scanner by embedding a receiver to detect an object from its backscatter without moving the TRA emitter. PMID:27811957

  6. Statistical analysis of trypanosomes' motility

    NASA Astrophysics Data System (ADS)

    Zaburdaev, Vasily; Uppaluri, Sravanti; Pfohl, Thomas; Engstler, Markus; Stark, Holger; Friedrich, Rudolf

    2010-03-01

    The trypanosome is a parasite causing sleeping sickness. The way it moves in the blood stream and penetrates various obstacles is an area of active research. Our goal was to investigate free trypanosome motion in a planar geometry. Our analysis of trypanosome trajectories reveals two correlation times: one associated with the fast motion of its body and the other with the slower rotational diffusion of the trypanosome as a point object. We propose a system of Langevin equations to model such motion. One of its peculiarities is the presence of multiplicative noise, which predicts a higher noise level at higher trypanosome velocities. Theoretical and numerical results give a comprehensive description of the experimental data, such as the mean squared displacement, velocity distribution, and auto-correlation function.
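
    A toy version of such a Langevin model with multiplicative noise can be integrated with the Euler-Maruyama scheme; the functional form and parameter values below are assumptions for illustration, not the fitted model from the study:

    ```python
    import math
    import random

    def simulate_velocity(gamma=1.0, sigma0=0.2, alpha=0.5, dt=1e-3, steps=10000):
        """Euler-Maruyama integration of a toy Langevin equation with multiplicative noise:
            dv = -gamma * v dt + (sigma0 + alpha * |v|) dW
        so the noise level grows with the instantaneous speed (hypothetical form)."""
        v, trace = 0.0, []
        for _ in range(steps):
            dW = random.gauss(0.0, math.sqrt(dt))
            v += -gamma * v * dt + (sigma0 + alpha * abs(v)) * dW
            trace.append(v)
        return trace

    trace = simulate_velocity()
    print(sum(t * t for t in trace) / len(trace))   # crude mean-squared velocity
    ```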

  7. Amodal completion of moving objects by pigeons.

    PubMed

    Nagasaka, Yasuo; Wasserman, Edward A

    2008-01-01

    In a series of four experiments, we explored whether pigeons complete partially occluded moving shapes. Four pigeons were trained to discriminate between a complete moving shape and an incomplete moving shape in a two-alternative forced-choice task. In testing, the birds were presented with a partially occluded moving shape. In experiment 1, none of the pigeons appeared to complete the testing stimulus; instead, they appeared to perceive the testing stimulus as incomplete fragments. However, in experiments 2, 3, and 4, three of the birds appeared to complete the partially occluded moving shapes. These rare positive results suggest that motion may facilitate amodal completion by pigeons, perhaps by enhancing the figure - ground segregation process.

  8. A catalog of slow-moving objects extracted from the Sloan Digital Sky Survey: Compilation and applications

    NASA Astrophysics Data System (ADS)

    Puckett, Andrew W.

    2007-08-01

    I have compiled the Slow-Moving Object Catalog of Known minor planets and comets ("the SMOCK") by comparing the predicted positions of known bodies with those of sources detected by the Sloan Digital Sky Survey (SDSS) that lack positional counterparts at other survey epochs. For the ~50% of the SDSS footprint that has been imaged only once, I have used the Astrophysical Research Consortium's 3.5-meter telescope to obtain reference images for confirmation of Solar System membership. The SMOCK search effort includes all known objects with orbital semimajor axes a > 4.7 AU, as well as a comparison sample of inherently bright Main Belt asteroids. In fact, objects of all proper motions are included, resulting in substantial overlap with the SDSS Moving Object Catalog (MOC) and providing an important check on the inclusion criteria of both catalogs. The MOC does not contain any correctly-identified known objects with a > 12 AU, and also excludes a number of detections of Main Belt and Trojan asteroids that happen to be moving slowly as they enter or leave retrograde motion. The SMOCK catalog is a publicly-available product of this investigation. Having created this new database, I demonstrate some of its applications. The broad dispersion of color indices for transneptunian objects (TNOs) and Centaurs is confirmed, and their tight correlation in ( g - r ) vs ( r - i ) is explored. Repeat observations for more than 30 of these objects allow me to reject the collisional resurfacing scenario as the primary explanation for this broad variety of colors. Trojans with large orbital inclinations are found to have systematically redder colors than their low-inclination counterparts, but an excess of reddish low-inclination objects at L5 is identified. Next, I confirm that non-Plutino TNOs are redder with increasing perihelion distance, and that this effect is even more pronounced among the Classical TNOs. Finally, I take advantage of the byproducts of my search technique and attempt to recover objects with poorly-known orbits. I have drastically improved the current and future ephemeris uncertainties of 3 Trojan asteroids, and have increased by 20%-450% the observed arcs of 10 additional bodies.

  9. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the common map reference system, data fusion techniques are applied to achieve a more precise and robust estimate of the objects' tracks and to perform face detection and tracking. The work's novelty and strength reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking, and in the automatic collection of biometric data such as a person's face clip for recognition purposes.

  10. A novel snapshot polarimetric imager

    NASA Astrophysics Data System (ADS)

    Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.

    2012-10-01

    Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded due to turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution suitable for object recognition and identification require long focal length optical systems. These are incompatible with the size and weight restrictions for aircraft. Techniques which allow detection and recognition of an object at the single pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarisers (and spectral filters) which recorded sequential polarized images from which the complete Stokes matrix could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus polarimetric data is recorded without latency on a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) is sufficient to increase the probability of detection whilst reducing false alarms compared to conventional unpolarised imaging.
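
    For a pixel-level object, the two orthogonal states recorded in a single snapshot already give a useful contrast signal; a minimal sketch (simplified, ignoring registration between the two sub-images and the full Stokes formalism) is the normalized difference of the co-registered H and V frames:

    ```python
    import numpy as np

    def two_state_polarimetry(h_frame, v_frame, eps=1e-6):
        """Per-pixel contrast from two orthogonal polarization states recorded in
        one snapshot: s = (H - V) / (H + V). A pixel-level object whose polarization
        signature differs from the unpolarized background stands out in s."""
        h = np.asarray(h_frame, float)
        v = np.asarray(v_frame, float)
        return (h - v) / (h + v + eps)

    h = np.array([[100.0, 100.0], [100.0, 140.0]])   # bottom-right pixel: polarized target
    v = np.array([[100.0, 100.0], [100.0,  60.0]])
    print(two_state_polarimetry(h, v))               # background ~0, target ~0.4
    ```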

  11. Statistical Learning Is Constrained to Less Abstract Patterns in Complex Sensory Input (but not the Least)

    PubMed Central

    Emberson, Lauren L.; Rubinstein, Dani

    2016-01-01

    The influence of statistical information on behavior (either through learning or adaptation) is quickly becoming foundational to many domains of cognitive psychology and cognitive neuroscience, from language comprehension to visual development. We investigate a central problem impacting these diverse fields: when encountering input with rich statistical information, are there any constraints on learning? This paper examines learning outcomes when adult learners are given statistical information across multiple levels of abstraction simultaneously: from abstract, semantic categories of everyday objects to individual viewpoints on these objects. After revealing statistical learning of abstract, semantic categories with scrambled individual exemplars (Exp. 1), participants viewed pictures where the categories as well as the individual objects predicted picture order (e.g., bird1—dog1, bird2—dog2). Our findings suggest that participants preferentially encode the relationships between the individual objects, even in the presence of statistical regularities linking semantic categories (Exps. 2 and 3). In a final experiment we investigate whether learners are biased towards learning object-level regularities or simply construct the most detailed model given the data (and therefore best able to predict the specifics of the upcoming stimulus) by investigating whether participants preferentially learn from the statistical regularities linking individual snapshots of objects or the relationship between the objects themselves (e.g., bird_picture1— dog_picture1, bird_picture2—dog_picture2). We find that participants fail to learn the relationships between individual snapshots, suggesting a bias towards object-level statistical regularities as opposed to merely constructing the most complete model of the input. This work moves beyond the previous existence proofs that statistical learning is possible at both very high and very low levels of abstraction (categories vs. individual objects) and suggests that, at least with the current categories and type of learner, there are biases to pick up on statistical regularities between individual objects even when robust statistical information is present at other levels of abstraction. These findings speak directly to emerging theories about how systems supporting statistical learning and prediction operate in our structure-rich environments. Moreover, the theoretical implications of the current work across multiple domains of study is already clear: statistical learning cannot be assumed to be unconstrained even if statistical learning has previously been established at a given level of abstraction when that information is presented in isolation. PMID:27139779

  12. Acceptability and satisfaction of project MOVE: A pragmatic feasibility trial aimed at increasing physical activity in female breast cancer survivors

    PubMed Central

    Pullen, Tanya; Bottorff, Joan L.; Sabiston, Catherine M.; Campbell, Kristin L.; Ellard, Susan L.; Gotay, Carolyn; Fitzpatrick, Kayla; Caperchione, Cristina M.

    2018-01-01

    Abstract Objective Despite the physical and psychological health benefits associated with physical activity (PA) for breast cancer (BC) survivors, up to 70% of female BC survivors are not meeting minimum recommended PA guidelines. The objective of this study was to evaluate acceptability and satisfaction with Project MOVE, an innovative approach to increase PA among BC survivors through the combination of microgrants and financial incentives. Methods A mixed‐methods design was used. Participants were BC survivors and support individuals with a mean age of 58.5 years. At 6‐month follow‐up, participants completed a program evaluation questionnaire (n = 72) and participated in focus groups (n = 52) to explore their experience with Project MOVE. Results Participants reported that they were satisfied with Project MOVE (86.6%) and that the program was appropriate for BC survivors (96.3%). Four main themes emerged from focus groups: (1) acceptability and satisfaction of Project MOVE, detailing the value of the model in developing tailored group‐based PA programs; (2) the importance of Project MOVE leaders, highlighting the value of a leader that was organized and a good communicator; (3) breaking down barriers with Project MOVE, describing how the program helped to address common BC-related barriers; and (4) motivation to MOVE, outlining how the microgrants enabled survivors to be active, while the financial incentive motivated them to increase and maintain their PA. Conclusion The findings provide support for the acceptability of Project MOVE as a strategy for increasing PA among BC survivors. PMID:29409128

  13. Orientation Control Method and System for Object in Motion

    NASA Technical Reports Server (NTRS)

    Whorton, Mark Stephen (Inventor); Redmon, Jr., John W. (Inventor); Cox, Mark D. (Inventor)

    2012-01-01

    An object in motion has a force applied thereto at a point of application. By moving the point of application such that the distance between the object's center-of-mass and the point of application is changed, the object's orientation can be changed/adjusted.
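
    The underlying relation is simply that the applied force produces a torque proportional to the offset between its point of application and the center of mass; a small sketch (illustrative only, not the patented control law):

    ```python
    import numpy as np

    def torque_about_com(force, point, com):
        """Torque produced by a force applied at `point` on a body with centre of
        mass `com`; shifting the point of application changes the torque and hence
        the body's orientation."""
        r = np.asarray(point, float) - np.asarray(com, float)
        return np.cross(r, np.asarray(force, float))

    F = [0.0, 0.0, 10.0]                                              # thrust along +z
    print(torque_about_com(F, [0.1, 0.0, 0.0], [0.0, 0.0, 0.0]))      # offset gives a pitch torque
    print(torque_about_com(F, [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]))      # through the CoM: zero torque
    ```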

  14. Lateralized effects of categorical and coordinate spatial processing of component parts on the recognition of 3D non-nameable objects.

    PubMed

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-12-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.

  15. Object permanence and working memory in cats (Felis catus).

    PubMed

    Goulet, S; Doré, F Y; Rousseau, R

    1994-10-01

    Cats (Felis catus) find an object when it is visibly moved behind a succession of screens. However, when the object is moved behind a container and is invisibly transferred from the container to the back of a screen, cats try to find the object at or near the container rather than at the true hiding place. Four experiments were conducted to study search behavior and working memory in visible and invisible displacement tests of object permanence. Experiment 1 compared performance in single and in double visible displacement trials. Experiment 2 analyzed search behavior in invisible displacement tests and in analogs using a transparent container. Experiments 3 and 4 tested predictions made from Experiment 1 and 2 in a new situation of object permanence. Results showed that only the position changes that cats have directly perceived are encoded and activated in working memory, because they are unable to represent or infer invisible movements.

  16. Single-sensor system for spatially resolved, continuous, and multiparametric optical mapping of cardiac tissue

    PubMed Central

    Lee, Peter; Bollensdorff, Christian; Quinn, T. Alexander; Wuskell, Joseph P.; Loew, Leslie M.; Kohl, Peter

    2011-01-01

    Background Simultaneous optical mapping of multiple electrophysiologically relevant parameters in living myocardium is desirable for integrative exploration of mechanisms underlying heart rhythm generation under normal and pathophysiologic conditions. Current multiparametric methods are technically challenging, usually involving multiple sensors and moving parts, which contributes to high logistic and economic thresholds that prevent easy application of the technique. Objective The purpose of this study was to develop a simple, affordable, and effective method for spatially resolved, continuous, simultaneous, and multiparametric optical mapping of the heart, using a single camera. Methods We present a new method to simultaneously monitor multiple parameters using inexpensive off-the-shelf electronic components and no moving parts. The system comprises a single camera, commercially available optical filters, and light-emitting diodes (LEDs), integrated via microcontroller-based electronics for frame-accurate illumination of the tissue. For proof of principle, we illustrate measurement of four parameters, suitable for ratiometric mapping of membrane potential (di-4-ANBDQPQ) and intracellular free calcium (fura-2), in an isolated Langendorff-perfused rat heart during sinus rhythm and ectopy, induced by local electrical or mechanical stimulation. Results The pilot application demonstrates suitability of this imaging approach for heart rhythm research in the isolated heart. In addition, locally induced excitation, whether stimulated electrically or mechanically, gives rise to similar ventricular propagation patterns. Conclusion Combining an affordable camera with suitable optical filters and microprocessor-controlled LEDs, single-sensor multiparametric optical mapping can be practically implemented in a simple yet powerful configuration and applied to heart rhythm research. The moderate system complexity and component cost is destined to lower the threshold to broader application of functional imaging and to ease implementation of more complex optical mapping approaches, such as multiparametric panoramic imaging. A proof-of-principle application confirmed that although electrically and mechanically induced excitation occur by different mechanisms, their electrophysiologic consequences downstream from the point of activation are not dissimilar. PMID:21459161

  17. Combined virtual and real robotic test-bed for single operator control of multiple robots

    NASA Astrophysics Data System (ADS)

    Lee, Sam Y.-S.; Hunt, Shawn; Cao, Alex; Pandya, Abhilash

    2010-04-01

    Teams of heterogeneous robots with different dynamics or capabilities could perform a variety of tasks such as multipoint surveillance, cooperative transport and explorations in hazardous environments. In this study, we work with heterogeneous robots of semi-autonomous ground and aerial robots for contaminant localization. We developed a human interface system which linked every real robot to its virtual counterpart. A novel virtual interface has been integrated with Augmented Reality that can monitor the position and sensory information from video feed of ground and aerial robots in the 3D virtual environment, and improve user situational awareness. An operator can efficiently control the real multi-robots using the Drag-to-Move method on the virtual multi-robots. This enables an operator to control groups of heterogeneous robots in a collaborative way for allowing more contaminant sources to be pursued simultaneously. The advanced feature of the virtual interface system is guarded teleoperation. This can be used to prevent operators from accidently driving multiple robots into walls and other objects. Moreover, the feature of the image guidance and tracking is able to reduce operator workload.

  18. Multi-static MIMO along track interferometry (ATI)

    NASA Astrophysics Data System (ADS)

    Knight, Chad; Deming, Ross; Gunther, Jake

    2016-05-01

    Along-track interferometry (ATI) has the ability to generate high-quality synthetic aperture radar (SAR) images and concurrently detect and estimate the positions of ground moving target indicators (GMTI) with moderate processing requirements. This paper focuses on several different ATI system configurations, with an emphasis on low-cost configurations employing no active electronically scanned array (AESA). The objective system has two transmit phase centers and four receive phase centers and supports agile adaptive radar behavior. The advantages of multistatic, multiple input multiple output (MIMO) ATI system configurations are explored. The two transmit phase centers can employ a ping-pong configuration to provide the multistatic behavior. For example, they can toggle between an up and down linear frequency modulated (LFM) waveform every other pulse. The four receive apertures are considered in simple linear spatial configurations. Simulated examples are examined to understand the trade space and verify the expected results. Finally, actual results are collected with the Space Dynamics Laboratory's (SDL) FlexSAR system in diverse configurations. The theory, as well as the simulated and actual SAR results, are presented and discussed.

  19. Motor effects from visually induced disorientation in man.

    DOT National Transportation Integrated Search

    1969-11-01

    The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue of the objective position of the airplane i...

  20. Judder-Induced Edge Flicker at Zero Spatial Contrast

    NASA Technical Reports Server (NTRS)

    Larimer, James; Feng, Christine; Gille, Jennifer; Cheung, Victor

    2004-01-01

    Judder is a motion artifact that degrades the quality of video imagery. Smooth motion appears jerky and can appear to flicker along the leading and trailing edge of the moving object. In a previous paper, we demonstrated that the strength of the edge flicker signal depended upon the brightness of the scene and the contrast of the moving object relative to the background. Reducing the contrast between foreground and background reduced the flicker signal. In this report, we show that the contrast signal required for judder-induced edge flicker is due to temporal contrast and not simply to spatial contrast. Bars made of random dots of the same dot density as the background exhibit edge flicker when moved at sufficient rate.

  1. A mathematical model for computer image tracking.

    PubMed

    Legters, G R; Young, T Y

    1982-06-01

    A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
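
    The role of the predictive Kalman filter during occlusion can be sketched with a standard constant-velocity filter that simply coasts on its prediction when no measurement is available (a generic textbook filter, not the authors' variational estimator):

    ```python
    import numpy as np

    # Constant-velocity Kalman filter that coasts on prediction when the
    # object is occluded (no measurement), keeping the track on course.
    F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # state transition
    H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                              # observe position only
    Q, R = np.eye(4) * 1e-3, np.eye(2) * 0.25

    def step(x, P, z=None):
        """One predict(/update) cycle; pass z=None during severe occlusion."""
        x, P = F @ x, F @ P @ F.T + Q
        if z is not None:
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.array([0.0, 0.0, 1.0, 0.5]), np.eye(4)
    for z in [np.array([1.0, 0.5]), None, None, np.array([4.1, 2.0])]:  # two occluded frames
        x, P = step(x, P, z)
    print(x[:2])   # position estimate stays on course through the occlusion
    ```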

  2. A Temporal Same-Object Advantage in the Tunnel Effect: Facilitated Change Detection for Persisting Objects

    ERIC Educational Resources Information Center

    Flombaum, Jonathan I.; Scholl, Brian J.

    2006-01-01

    Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers…

  3. Multiple spatially localized dynamical states in friction-excited oscillator chains

    NASA Astrophysics Data System (ADS)

    Papangelo, A.; Hoffmann, N.; Grolet, A.; Stender, M.; Ciavarella, M.

    2018-03-01

    Friction-induced vibrations are known to affect many engineering applications. Here, we study a chain of friction-excited oscillators with nearest neighbor elastic coupling. The excitation is provided by a moving belt which moves at a certain velocity vd while friction is modelled with an exponentially decaying friction law. It is shown that in a certain range of driving velocities, multiple stable spatially localized solutions exist whose dynamical behavior (i.e. regular or irregular) depends on the number of oscillators involved in the vibration. The classical non-repeatability of friction-induced vibration problems can be interpreted in light of those multiple stable dynamical states. These states are found within a "snaking-like" bifurcation pattern. Contrary to the classical Anderson localization phenomenon, here the underlying linear system is perfectly homogeneous and localization is solely triggered by the friction nonlinearity.
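
    A toy version of such a chain can be integrated numerically; the sketch below (with assumed, not published, parameter values) couples unit masses through nearest-neighbour springs, grounds each one elastically, and drives them with a friction force whose coefficient decays exponentially with the sliding speed relative to the belt:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, n, k, kc, vd, mu0, mu_inf, lam):
        """Friction-excited chain: n unit masses on a belt moving at vd,
        nearest-neighbour springs kc, grounding springs k, and a friction
        coefficient decaying exponentially with the relative sliding speed."""
        x, v = y[:n], y[n:]
        vrel = vd - v
        mu = mu_inf + (mu0 - mu_inf) * np.exp(-lam * np.abs(vrel))
        friction = mu * np.sign(vrel)                 # normal load taken as 1
        coupling = np.zeros(n)
        coupling[1:] += kc * (x[:-1] - x[1:])
        coupling[:-1] += kc * (x[1:] - x[:-1])
        a = -k * x + coupling + friction
        return np.concatenate([v, a])

    n = 5
    y0 = np.zeros(2 * n)
    sol = solve_ivp(rhs, (0.0, 50.0), y0,
                    args=(n, 1.0, 0.5, 0.4, 0.6, 0.3, 2.0), max_step=0.01)
    print(sol.y[:n, -1])   # final displacements of the chain
    ```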

  4. On applications of chimera grid schemes to store separation

    NASA Technical Reports Server (NTRS)

    Dougherty, F. C.; Benek, J. A.; Steger, J. L.

    1985-01-01

    A finite difference scheme which uses multiple overset meshes to simulate the aerodynamics of aircraft/store interaction and store separation is described. In this chimera, or multiple-mesh, scheme, a complex configuration is mapped using a major grid about the main component of the configuration, and minor overset meshes are used to map each additional component such as a store. As a first step in modeling the aerodynamics of store separation, two-dimensional inviscid flow calculations were carried out in which one of the minor meshes is allowed to move with respect to the major grid. Solutions of calibrated two-dimensional problems indicate that allowing one mesh to move with respect to another does not adversely affect the time accuracy of an unsteady solution. Steady, inviscid three-dimensional computations demonstrate the capability to simulate complex configurations, including closely packed multiple bodies.

  5. Robots, systems, and methods for hazard evaluation and visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.

    A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
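
    The sense-move-repeat loop described above can be sketched in a few lines of Python. The sensor and locomotor classes and the stopping rule below are stand-ins invented for illustration; they are not the patented system's interfaces.

        import random

        class RandomHazardSensor:
            """Stand-in sensor returning a synthetic hazard intensity."""
            def read(self):
                return random.random()

        class GridLocomotor:
            """Stand-in locomotor walking along a fixed list of grid cells."""
            def __init__(self, waypoints):
                self.waypoints = list(waypoints)
                self.i = 0
            def position(self):
                return self.waypoints[self.i]
            def move_next(self):
                if self.i + 1 < len(self.waypoints):
                    self.i += 1
                    return True
                return False

        def survey(sensor, locomotor):
            """Sense a hazard level, move, and repeat to map hazard at multiple locations."""
            hazard_map = {}
            while True:
                hazard_map[locomotor.position()] = sensor.read()
                if not locomotor.move_next():
                    return hazard_map

        levels = survey(RandomHazardSensor(), GridLocomotor([(0, 0), (0, 1), (1, 1)]))
        print(levels)   # e.g. {(0, 0): 0.42, (0, 1): 0.77, (1, 1): 0.08}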

  6. Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism

    NASA Astrophysics Data System (ADS)

    Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.

    2006-05-01

    In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.
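
    The combined likelihood mentioned above can be illustrated with a toy sketch: each measurement-to-track score is the product of a kinematic term and a class-information (intelligence) term. The Gaussian kinematic form and the discrete class weighting below are assumptions chosen for illustration; the paper's VS-IMM machinery is not reproduced here.

        import numpy as np

        def kinematic_likelihood(innovation, S):
            """Gaussian likelihood of a measurement given the predicted track state."""
            d = innovation
            k = len(d)
            return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi) ** k * np.linalg.det(S))

        def combined_likelihood(innovation, S, class_probs, class_likelihoods):
            """Kinematic term times a class-information term (intelligence data).

            class_probs:        prior probability of each target class for this track
            class_likelihoods:  likelihood of the observed features under each class
            """
            class_term = float(np.dot(class_probs, class_likelihoods))
            return kinematic_likelihood(innovation, S) * class_term

        # Example: score one measurement against a track with two candidate classes.
        S = np.eye(2)
        print(combined_likelihood(np.array([0.5, -0.2]), S,
                                  class_probs=[0.7, 0.3],          # e.g. [hostile, neutral]
                                  class_likelihoods=[0.9, 0.2]))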

  7. Improved Scanners for Microscopic Hyperspectral Imaging

    NASA Technical Reports Server (NTRS)

    Mao, Chengye

    2009-01-01

    Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.

  8. Multiple-block grid adaption for an airplane geometry

    NASA Technical Reports Server (NTRS)

    Abolhassani, Jamshid Samareh; Smith, Robert E.

    1988-01-01

    Grid-adaption methods are developed with the capability of moving grid points in accordance with several variables for a three-dimensional multiple-block grid system. These methods are algebraic, and they are implemented for the computation of high-speed flow over an airplane configuration.

  9. Execution of saccadic eye movements affects speed perception

    PubMed Central

    Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.

    2018-01-01

    Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
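
    The weighting idea in the model sketched above can be written down in a few lines: perceived target velocity is a weighted combination of the retinal-slip and eye-velocity signals, with the retinal weight reduced in a window around each corrective saccade. The weighting function and all numbers below are illustrative assumptions, not the authors' fitted model.

        import numpy as np

        def perceived_velocity(retinal_v, eye_v, t, saccade_times, base_w=0.7, dip=0.5, sigma=0.05):
            """Weighted sum of retinal and extra-retinal signals.

            The weight on the retinal signal is reduced (by `dip`) in a Gaussian
            window of width `sigma` seconds around each corrective saccade.
            """
            w = np.full_like(t, base_w, dtype=float)
            for ts in saccade_times:
                w -= dip * base_w * np.exp(-0.5 * ((t - ts) / sigma) ** 2)
            w = np.clip(w, 0.0, 1.0)
            return w * retinal_v + (1.0 - w) * eye_v

        t = np.linspace(0, 1, 101)                     # one second of pursuit
        retinal_v = np.full_like(t, 2.0)               # residual retinal slip (deg/s)
        eye_v = np.full_like(t, 10.0)                  # eye-velocity signal (deg/s)
        v = perceived_velocity(retinal_v, eye_v, t, saccade_times=[0.2])
        print(round(v.mean(), 2))                      # higher than without the saccade window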

  10. Evidence-Based Design Features Improve Sleep Quality Among Psychiatric Inpatients.

    PubMed

    Pyrke, Ryan J L; McKinnon, Margaret C; McNeely, Heather E; Ahern, Catherine; Langstaff, Karen L; Bieling, Peter J

    2017-10-01

    The primary aim of the present study was to compare sleep characteristics pre- and post-move into a state-of-the-art mental health facility, which offered private sleeping quarters. Significant evidence points toward sleep disruption among psychiatric inpatients. It is unclear, however, how environmental factors (e.g., dorm-style rooms) impact sleep quality in this population. To assess sleep quality, a novel objective technology, actigraphy, was used before and after a facility move. Subjective daily interviews were also administered, along with the Horne-Ostberg Morningness-Eveningness Questionnaire and the Pittsburgh Sleep Quality Index. Actigraphy revealed significant improvements in objective sleep quality following the facility move. Interestingly, subjective report of sleep quality did not correlate with the objective measures. Circadian sleep type appeared to play a role in influencing subjective attitudes toward sleep quality. Built environment has a significant effect on the sleep quality of psychiatric inpatients. Given well-documented disruptions in sleep quality present among psychiatric patients undergoing hospitalization, design elements like single patient bedrooms are highly desirable.

  11. Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos

    NASA Astrophysics Data System (ADS)

    Juneja, Medha; Grover, Priyanka

    2013-12-01

    Occlusion in image processing refers to concealment of any part of the object or the whole object from the view of an observer. Real-time videos captured by static cameras on roads often encounter overlapping and, hence, occlusion of vehicles. Occlusion in traffic surveillance videos usually occurs when an object which is being tracked is hidden by another object. This makes it difficult for the object detection algorithms to distinguish all the vehicles efficiently. Also, morphological operations tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting of vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, this paper implements the watershed algorithm to segment the overlapped and adjoining vehicles. The segmentation results have been improved by the use of noise and morphological operations.
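
    A condensed OpenCV sketch of the pipeline this abstract outlines: successive-frame subtraction to detect moving regions, morphology to clean the mask, and marker-based watershed to split vehicles that touch or overlap. The thresholds and kernel sizes are illustrative assumptions, not the paper's tuned values.

        import cv2
        import numpy as np

        def segment_vehicles(prev_frame, frame):
            """Detect moving blobs by frame differencing, then split touching blobs with watershed."""
            gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
            gray_curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Successive frame subtraction + threshold -> foreground mask
            diff = cv2.absdiff(gray_curr, gray_prev)
            _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
            kernel = np.ones((5, 5), np.uint8)
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise

            # Markers: sure foreground from the distance transform, sure background by dilation
            dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
            _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
            sure_fg = sure_fg.astype(np.uint8)
            sure_bg = cv2.dilate(mask, kernel, iterations=3)
            unknown = cv2.subtract(sure_bg, sure_fg)

            _, markers = cv2.connectedComponents(sure_fg)
            markers = markers + 1          # background becomes 1, objects 2, 3, ...
            markers[unknown == 255] = 0    # unknown region is left for watershed to decide

            # Watershed separates adjoining/overlapping vehicles into distinct labels
            markers = cv2.watershed(frame, markers)
            return markers                 # label image; boundaries are marked with -1

        # Usage (assuming two consecutive frames from a traffic video):
        # labels = segment_vehicles(cv2.imread("frame1.png"), cv2.imread("frame2.png"))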

  12. Tracking Object Existence From an Autonomous Patrol Vehicle

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Scharenbroich, Lucas

    2011-01-01

    An autonomous vehicle patrols a large region, during which an algorithm receives measurements of detected potential objects within its sensor range. The goal of the algorithm is to track all objects in the region over time. This problem differs from traditional multi-target tracking scenarios because the region of interest is much larger than the sensor range and relies on the movement of the sensor through this region for coverage. The goal is to know whether anything has changed between visits to the same location. In particular, two kinds of alert conditions must be detected: (1) a previously detected object has disappeared and (2) a new object has appeared in a location already checked. For the time an object is within sensor range, the object can be assumed to remain stationary, changing position only between visits. The problem is difficult because the upstream object detection processing is likely to make many errors, resulting in heavy clutter (false positives) and missed detections (false negatives), and because only noisy, bearings-only measurements are available. This work has three main goals: (1) Associate incoming measurements with known objects or mark them as new objects or false positives, as appropriate. For this, a multiple hypothesis tracker was adapted to this scenario. (2) Localize the objects using multiple bearings-only measurements to provide estimates of global position (e.g., latitude and longitude). A nonlinear Kalman filter extension provides these 2D position estimates using the 1D measurements. (3) Calculate the probability that a suspected object truly exists (in the estimated position), and determine whether alert conditions have been triggered (for new objects or disappeared objects). The concept of a probability of existence was created, and a new Bayesian method for updating this probability at each time step was developed. A probabilistic multiple hypothesis approach is chosen because of its superiority in handling the uncertainty arising from errors in sensors and upstream processes. However, traditional target tracking methods typically assume a stationary detection volume of interest, whereas in this case, one must make adjustments for being able to see only a small portion of the region of interest and understand when an alert situation has occurred. To track object existence inside and outside the vehicle's sensor range, a probability of existence was defined for each hypothesized object, and this value was updated at every time step in a Bayesian manner based on expected characteristics of the sensor and object and whether that object has been detected in the most recent time step. Then, this value feeds into a sequential probability ratio test (SPRT) to determine the status of the object (suspected, confirmed, or deleted). Alerts are sent upon selected status transitions. Additionally, in order to track objects that move in and out of sensor range and update the probability of existence appropriately, a variable probability of detection has been defined and the hypothesis probability equations have been re-derived to accommodate this change. Unsupervised object tracking is a pervasive issue in automated perception systems. This work could apply to any mobile platform (ground vehicle, sea vessel, air vehicle, or orbiter) that intermittently revisits regions of interest and needs to determine whether anything interesting has changed.
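
    A small sketch of the existence-probability bookkeeping described above: a per-object probability of existence is updated in Bayes fashion from the detection outcome at each time step (with a detection probability that can vary as the object enters or leaves sensor range), and an SPRT-style threshold test maps the running evidence onto suspected/confirmed/deleted statuses. The probabilities and thresholds below are illustrative assumptions, not the article's values.

        def update_existence(p_exist, detected, p_d, p_fa):
            """Bayesian update of the probability that a hypothesized object really exists.

            p_d  : probability of detecting the object if it exists (varies with sensor range)
            p_fa : probability of a false-alarm detection if it does not exist
            """
            if detected:
                num = p_d * p_exist
                den = p_d * p_exist + p_fa * (1.0 - p_exist)
            else:
                num = (1.0 - p_d) * p_exist
                den = (1.0 - p_d) * p_exist + (1.0 - p_fa) * (1.0 - p_exist)
            return num / den

        def status(p_exist, confirm=0.95, delete=0.05):
            """SPRT-like decision: keep as 'suspected' until evidence crosses a threshold."""
            if p_exist >= confirm:
                return "confirmed"
            if p_exist <= delete:
                return "deleted"
            return "suspected"

        p = 0.5                                    # initial existence probability for a new hypothesis
        for in_range, det in [(True, True), (True, True), (False, False), (True, True)]:
            p_d = 0.9 if in_range else 0.0         # variable detection probability outside sensor range
            p = update_existence(p, det, p_d, p_fa=0.1)
            print(round(p, 3), status(p))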

  13. Schlieren System and method for moving objects

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M. (Inventor)

    1995-01-01

    A system and method are provided for recording density changes in a flow field surrounding a moving object. A mask having an aperture for regulating the passage of images is placed in front of an image recording medium. An optical system is placed in front of the mask. A transition having a light field-of-view and a dark field-of-view is located beyond the test object. The optical system focuses an image of the transition at the mask such that the aperture causes a band of light to be defined on the image recording medium. The optical system further focuses an image of the object through the aperture of the mask so that the image of the object appears on the image recording medium. Relative motion is minimized between the mask and the transition. Relative motion is also minimized between the image recording medium and the image of the object. In this way, the image of the object and density changes in a flow field surrounding the object are recorded on the image recording medium when the object crosses the transition in front of the optical system.

  14. Vection: the contributions of absolute and relative visual motion.

    PubMed

    Howard, I P; Howard, A

    1994-01-01

    Inspection of a visual scene rotating about the vertical body axis induces a compelling sense of self rotation, or circular vection. Circular vection is suppressed by stationary objects seen beyond the moving display but not by stationary objects in the foreground. We hypothesised that stationary objects in the foreground facilitate vection because they introduce a relative-motion signal into what would otherwise be an absolute-motion signal. Vection latency and magnitude were measured with a full-field moving display and with stationary objects of various sizes and at various positions in the visual field. The results confirmed the hypothesis. Vection latency was longer when there were no stationary objects in view than when stationary objects were in view. The effect of stationary objects was particularly evident at low stimulus velocities. At low velocities a small stationary point significantly increased vection magnitude in spite of the fact that, at higher stimulus velocities and with other stationary objects in view, fixation on a stationary point, if anything, reduced vection. Changing the position of the stationary objects in the field of view did not affect vection latencies or magnitudes.

  15. Virtual expansion of the technical vision system for smart vehicles based on multi-agent cooperation model

    NASA Astrophysics Data System (ADS)

    Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay

    2017-12-01

    Road safety and driving in dense traffic flows poses some challenges in receiving information about surrounding moving object, some of which can be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in a current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intellectual agent model, and provides methods and algorithms of identifying and evaluating various characteristics of moving objects in video flow. Authors also suggest ways for integrating the information from the technical vision system into the model with further expansion of virtual monitoring for the system's objects. Implementation of this approach can help to expand the virtual field of view for a technical vision system.

  16. Gauge Conditions for Moving Black Holes Without Excision

    NASA Technical Reports Server (NTRS)

    van Meter, James; Baker, John G.; Koppitz, Michael; Choi, Dae-Il

    2006-01-01

    Recent demonstrations of unexcised, puncture black holes traversing freely across computational grids represent a significant advance in numerical relativity. Stable and accurate simulations of multiple orbits, and their radiated waves, result. This capability is critically undergirded by a careful choice of gauge. Here we present analytic considerations which suggest certain gauge choices, and numerically demonstrate their efficacy in evolving a single moving puncture.

  17. Let Me Go: The Influences of Crawling Experience and Temperament on the Development of Anger Expression

    ERIC Educational Resources Information Center

    Pemberton Roben, Caroline K.; Bass, Anneliese J.; Moore, Ginger A.; Murray-Kolb, Laura; Tan, Patricia Z.; Gilmore, Rick O.; Buss, Kristin A.; Cole, Pamela M.; Teti, Laureen O.

    2012-01-01

    Infants' emerging ability to move independently by crawling is associated with changes in multiple domains, including an increase in expressions of anger in situations that block infants' goals, but it is unknown whether increased anger is specifically because of experience with being able to move autonomously or simply related to age. To examine…

  18. Creating Reconfigurable Materials Using ``Colonies'' of Oscillating Polymer Gels

    NASA Astrophysics Data System (ADS)

    Deb, Debabrata; Dayal, Pratyush; Kuksenok, Olga; Balazs, Anna

    2013-03-01

    Species ranging from single-cell organisms to social insects can undergo auto-chemotaxis, where the entities move towards a chemo-attractant that they themselves emit. This mode of signaling allows the organisms to form large-scale structures. Using computational modeling, we show that millimeter-sized polymer gels can display similar auto-chemotaxis. In particular, we demonstrate that gels undergoing the self-oscillating Belousov-Zhabotinsky (BZ) reaction not only respond to a chemical signal from the surrounding solution, but also emit this signal and thus, multiple gel pieces can spontaneously self-aggregate. We focus on the collective behavior of ``colonies'' of BZ gels and show that communication between the individual pieces critically depends on all the neighboring gels. We isolate the conditions at which the BZ gels can undergo a type of self-recombining: if a larger gel is cut into distinct pieces that are moved relatively far apart, then their auto-chemotactic behavior drives them to move and autonomously recombine into a structure resembling the original, uncut sample. These findings reveal that the BZ gels can be used as autonomously moving building blocks to construct multiple structures and thus, provide a new route for creating dynamically reconfigurable materials.

  19. Feature-aided multiple target tracking in the image plane

    NASA Astrophysics Data System (ADS)

    Brown, Andrew P.; Sullivan, Kevin J.; Miller, David J.

    2006-05-01

    Vast quantities of EO and IR data are collected on airborne platforms (manned and unmanned) and terrestrial platforms (including fixed installations, e.g., at street intersections), and can be exploited to aid in the global war on terrorism. However, intelligent preprocessing is required to enable operator efficiency and to provide commanders with actionable target information. To this end, we have developed an image plane tracker which automatically detects and tracks multiple targets in image sequences using both motion and feature information. The effects of platform and camera motion are compensated via image registration, and a novel change detection algorithm is applied for accurate moving target detection. The contiguous pixel blob on each moving target is segmented for use in target feature extraction and model learning. Feature-based target location measurements are used for tracking through move-stop-move maneuvers, close target spacing, and occlusion. Effective clutter suppression is achieved using joint probabilistic data association (JPDA), and confirmed target tracks are indicated for further processing or operator review. In this paper we describe the algorithms implemented in the image plane tracker and present performance results obtained with video clips from the DARPA VIVID program data collection and from a miniature unmanned aerial vehicle (UAV) flight.
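
    A compact sketch of the first two stages described above, under the assumption that the inter-frame camera motion is well approximated by a homography: ORB features register the previous frame to the current one, and differencing of the aligned frames yields a moving-target mask. The JPDA association stage is not reproduced here, and all thresholds are illustrative assumptions.

        import cv2
        import numpy as np

        def register_and_detect(prev_frame, frame, diff_thresh=30):
            """Compensate platform/camera motion via feature-based registration, then difference."""
            g0 = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
            g1 = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

            # Match ORB features between the two frames
            orb = cv2.ORB_create(1000)
            k0, d0 = orb.detectAndCompute(g0, None)
            k1, d1 = orb.detectAndCompute(g1, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d0, d1)

            src = np.float32([k0[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # robust to moving-object matches

            # Warp the previous frame into the current frame's coordinates and difference
            warped = cv2.warpPerspective(g0, H, (g1.shape[1], g1.shape[0]))
            diff = cv2.absdiff(g1, warped)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
            return mask   # non-zero pixels indicate candidate moving targets

        # Usage: mask = register_and_detect(cv2.imread("t0.png"), cv2.imread("t1.png"))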

  20. Looking for rings and things

    NASA Astrophysics Data System (ADS)

    Kenworthy, Matthew

    2017-04-01

    It's not often that an astronomical object gets its own dedicated observatory, but as the planet Beta Pictoris b moves in front of its host star, its every move will be watched by bRing, eager to discover more about the planet's Hill sphere, explains Matthew Kenworthy.

  1. Discovery of two new satellites of Pluto.

    PubMed

    Weaver, H A; Stern, S A; Mutchler, M J; Steffl, A J; Buie, M W; Merline, W J; Spencer, J R; Young, E F; Young, L A

    2006-02-23

    Pluto's first known satellite, Charon, was discovered in 1978. It has a diameter (approximately 1,200 km) about half that of Pluto, which makes it larger, relative to its primary, than any other moon in the Solar System. Previous searches for other satellites around Pluto have been unsuccessful, but they were not sensitive to objects ≲150 km in diameter and there are no fundamental reasons why Pluto should not have more satellites. Here we report the discovery of two additional moons around Pluto, provisionally designated S/2005 P 1 (hereafter P1) and S/2005 P 2 (hereafter P2), which makes Pluto the first Kuiper belt object known to have multiple satellites. These new satellites are much smaller than Charon, with estimates of P1's diameter ranging from 60 km to 165 km, depending on the surface reflectivity; P2 is about 20 per cent smaller than P1. Although definitive orbits cannot be derived, both new satellites appear to be moving in circular orbits in the same orbital plane as Charon, with orbital periods of approximately 38 days (P1) and approximately 25 days (P2).

  2. Point Cloud Analysis for Uav-Borne Laser Scanning with Horizontally and Vertically Oriented Line Scanners - Concept and First Results

    NASA Astrophysics Data System (ADS)

    Weinmann, M.; Müller, M. S.; Hillemann, M.; Reydel, N.; Hinz, S.; Jutzi, B.

    2017-08-01

    In this paper, we focus on UAV-borne laser scanning with the objective of densely sampling object surfaces in the local surrounding of the UAV. In this regard, using a line scanner which scans along the vertical direction and perpendicular to the flight direction results in a point cloud with low point density if the UAV moves fast. Using a line scanner which scans along the horizontal direction only delivers data corresponding to the altitude of the UAV and thus a low scene coverage. For these reasons, we present a concept and a system for UAV-borne laser scanning using multiple line scanners. Our system consists of a quadcopter equipped with horizontally and vertically oriented line scanners. We demonstrate the capabilities of our system by presenting first results obtained for a flight within an outdoor scene. Thereby, we use a downsampling of the original point cloud and different neighborhood types to extract fundamental geometric features which in turn can be used for scene interpretation with respect to linear, planar or volumetric structures.
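
    A short NumPy/SciPy sketch of the kind of processing this abstract mentions: voxel-grid downsampling of the point cloud and eigenvalue-based features from a local (fixed-radius) neighborhood that distinguish linear, planar, and volumetric structures. The voxel size, radius, and feature definitions follow common practice and are assumptions, not the authors' exact choices.

        import numpy as np
        from scipy.spatial import cKDTree

        def voxel_downsample(points, voxel=0.25):
            """Keep one (averaged) point per occupied voxel."""
            keys = np.floor(points / voxel).astype(np.int64)
            _, inv = np.unique(keys, axis=0, return_inverse=True)
            out = np.zeros((inv.max() + 1, 3))
            counts = np.bincount(inv)
            for d in range(3):
                out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
            return out

        def eigen_features(points, radius=1.0):
            """Linearity, planarity, and sphericity from the local covariance eigenvalues."""
            tree = cKDTree(points)
            feats = np.zeros((len(points), 3))
            for i, p in enumerate(points):
                nbrs = points[tree.query_ball_point(p, radius)]
                if len(nbrs) < 3:
                    continue
                l3, l2, l1 = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))  # l1 >= l2 >= l3
                feats[i] = [(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1]
            return feats   # columns: linearity, planarity, sphericity

        pts = np.random.rand(5000, 3) * 10.0          # stand-in for a UAV-borne laser scan
        down = voxel_downsample(pts)
        print(down.shape, eigen_features(down)[:3])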

  3. NASA Tech Briefs, May 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Topics covered include: Fastener Starter; Multifunctional Deployment Hinges Rigidified by Ultraviolet; Temperature-Controlled Clamping and Releasing Mechanism; Long-Range Emergency Preemption of Traffic Lights; High-Efficiency Microwave Power Amplifier; Improvements of ModalMax High-Fidelity Piezoelectric Audio Device; Alumina or Semiconductor Ribbon Waveguides at 30 to 1,000 GHz; HEMT Frequency Doubler with Output at 300 GHz; Single-Chip FPGA Azimuth Pre-Filter for SAR; Autonomous Navigation by a Mobile Robot; Software Would Largely Automate Design of Kalman Filter; Predicting Flows of Rarefied Gases; Centralized Planning for Multiple Exploratory Robots; Electronic Router; Piezo-Operated Shutter Mechanism Moves 1.5 cm; Two SMA-Actuated Miniature Mechanisms; Vortobots; Ultrasonic/Sonic Jackhammer; Removing Pathogens Using Nano-Ceramic-Fiber Filters; Satellite-Derived Management Zones; Digital Equivalent Data System for XRF Labeling of Objects; Identifying Objects via Encased X-Ray-Fluorescent Materials - the Bar Code Inside; Vacuum Attachment for XRF Scanner; Simultaneous Conoscopic Holography and Raman Spectroscopy; Adding GaAs Monolayers to InAs Quantum-Dot Lasers on (001) InP; Vibrating Optical Fibers to Make Laser Speckle Disappear; Adaptive Filtering Using Recurrent Neural Networks; and Applying Standard Interfaces to a Process-Control Language.

  4. Distributed Traffic Complexity Management by Preserving Trajectory Flexibility

    NASA Technical Reports Server (NTRS)

    Idris, Husni; Vivona, Robert A.; Garcia-Chico, Jose-Luis; Wing, David J.

    2007-01-01

    In order to handle the expected increase in air traffic volume, the next generation air transportation system is moving towards a distributed control architecture, in which ground-based service providers such as controllers and traffic managers and air-based users such as pilots share responsibility for aircraft trajectory generation and management. This paper presents preliminary research investigating a distributed trajectory-oriented approach to manage traffic complexity, based on preserving trajectory flexibility. The underlying hypotheses are that preserving trajectory flexibility autonomously by aircraft naturally achieves the aggregate objective of avoiding excessive traffic complexity, and that trajectory flexibility is increased by collaboratively minimizing trajectory constraints without jeopardizing the intended air traffic management objectives. This paper presents an analytical framework in which flexibility is defined in terms of robustness and adaptability to disturbances, and preliminary metrics are proposed that can be used to preserve trajectory flexibility. The hypothesized impacts are illustrated through analyzing a trajectory solution space in a simple scenario with only speed as a degree of freedom, and in constraint situations involving meeting multiple times of arrival and resolving conflicts.

  5. Dynamical evolution of motion perception.

    PubMed

    Kanai, Ryota; Sheth, Bhavin R; Shimojo, Shinsuke

    2007-03-01

    Motion is defined as a sequence of positional changes over time. However, in perception, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the perception of a moving object itself may dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (≤200 ms), but significantly less so if it occurred later (≥300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete potentially isolatable frames; later failures to perceive change suggest that over time, the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept.

  6. Residential mobility and the association between physical environment disadvantage and general and mental health.

    PubMed

    Tunstall, H; Pearce, J R; Shortt, N K; Mitchell, R J

    2015-12-01

    Selective migration may influence the association between physical environments and health. This analysis assessed whether residential mobility concentrates people with poor health in neighbourhoods of the UK with disadvantaged physical environments. Data were from the British Household Panel Survey. Moves were over 1 year between adjacent survey waves, pooled over 10 pairs of waves, 1996-2006. Health outcomes were self-reported poor general health and mental health problems. Neighbourhood physical environment was defined using the Multiple Environmental Deprivation Index (MEDIx) for wards. Logistic regression analysis compared risk of poor health in MEDIx categories before and after moves. Analyses were stratified by age groups 18-29, 30-44, 45-59 and 60+ years and adjusted for age, sex, marital status, household type, housing tenure, education and social class. The pooled data contained 122 570 observations. 8.5% moved between survey waves but just 3.0% changed their MEDIx category. In all age groups odds ratios for poor general and mental health were not significantly increased in the most environmentally deprived neighbourhoods following moves. Over a 1-year time period residential moves between environments with different levels of multiple physical deprivation were rare and did not significantly raise rates of poor health in the most deprived areas. © The Author 2014. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. The Role of Visual Working Memory in Attentive Tracking of Unique Objects

    ERIC Educational Resources Information Center

    Makovski, Tal; Jiang, Yuhong V.

    2009-01-01

    When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…

  8. Windmill-task as a New Quantitative and Objective Assessment for Mirror Movements in Unilateral Cerebral Palsy: A Pilot Study.

    PubMed

    Zielinski, Ingar Marie; Steenbergen, Bert; Schmidt, Anna; Klingels, Katrijn; Simon Martinez, Cristina; de Water, Pascal; Hoare, Brian

    2018-03-23

    To introduce the Windmill-task, a new objective assessment tool to quantify the presence of mirror movements (MMs) in children with unilateral cerebral palsy (UCP), which are typically assessed with the observation-based Woods and Teuber scale (W&T). Prospective, observational, cohort pilot study. Children's hospital. Prospective cohort of children (N=23) with UCP (age range, 6-15y, mean age, 10.5±2.7y). Not applicable. The concurrent validity of the Windmill-task is assessed, and the sensitivity and specificity for MM detection are compared between both assessments. To assess the concurrent validity, Windmill-task data are compared with W&T data using Spearman rank correlations (ρ) for 2 conditions: affected hand moving vs less affected hand moving. Sensitivity and specificity are compared by measuring the mean percentage of children being assessed inconsistently across both assessments. Outcomes of both assessments correlated significantly (affected hand moving: ρ=.520; P=.005; less affected hand moving: ρ=.488; P=.009). However, many children displayed MMs on the Windmill-task, but not on the W&T (sensitivity: affected hand moving: 27.5%; less affected hand moving: 40.6%). Only 2 children displayed MMs on the W&T, but not on the Windmill-task (specificity: affected hand moving: 2.9%; less affected hand moving: 1.4%). The Windmill-task seems to be a valid tool to assess MMs in children with UCP and has an additional advantage of sensitivity to detect MMs. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
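
    The statistics reported above can be illustrated with a tiny sketch: a Spearman rank correlation between the two assessments' scores, plus the percentage of children classified inconsistently (mirror movements flagged on one assessment but not the other). The numbers below are arbitrary placeholder values for demonstration only, not study data.

        import numpy as np
        from scipy.stats import spearmanr

        # Arbitrary placeholder scores for illustration only (not study data):
        windmill = np.array([3, 0, 2, 4, 1, 0, 5, 2])   # Windmill-task MM scores
        wt_scale = np.array([2, 0, 1, 3, 0, 0, 4, 1])   # Woods and Teuber scores

        rho, p = spearmanr(windmill, wt_scale)           # concurrent validity
        print(f"rho = {rho:.3f}, p = {p:.3f}")

        # Children flagged on one assessment but not the other (inconsistent classification)
        on_windmill_only = np.mean((windmill > 0) & (wt_scale == 0)) * 100
        on_wt_only = np.mean((wt_scale > 0) & (windmill == 0)) * 100
        print(f"{on_windmill_only:.1f}% MMs on Windmill-task only, {on_wt_only:.1f}% on W&T only")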

  9. A hetero-core fiber optic smart mat sensor for discrimination between a moving human and object on temporal loss peaks

    NASA Astrophysics Data System (ADS)

    Hosoki, Ai; Nishiyama, Michiko; Choi, Yongwoon; Watanabe, Kazuhiro

    2011-05-01

    In this paper, we propose a method for discriminating between a moving human and a moving object by means of a hetero-core fiber smart mat sensor that produces a time-varying optical loss. In addition to the general advantages of fiber optic sensors, such as flexibility, thin size, and resistance to electromagnetic interference, a hetero-core fiber optic sensor is sensitive to bending of the sensor portion and independent of temperature fluctuations. The hetero-core fiber thin mat sensor can therefore use fewer sensing portions than conventional floor pressure sensors and can cover a wide area spanning the length of a stride. Experimental results for human walking tests showed that the mat sensors worked reproducibly in real time when the locations where the foot crossed the mat were constrained. Focusing on the number of temporal peaks in the optical loss, human walking and the motion of a wheeled platform induced peak counts in the ranges of 1-3 and 5-7, respectively, for 10 persons (9 male and 1 female). We conclude that the hetero-core fiber mat sensor is capable of discriminating between a moving human and a moving object such as a wheeled platform on the basis of the number of peaks in the temporal optical loss.
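
    A minimal sketch of the peak-counting discrimination rule reported above, using SciPy's peak finder on a temporal optical-loss trace: roughly 1-3 peaks is classified as a human step and roughly 5-7 as a wheeled platform. The synthetic traces and the peak-finder parameters are assumptions for illustration.

        import numpy as np
        from scipy.signal import find_peaks

        def classify_crossing(loss_trace, prominence=0.1):
            """Count temporal peaks in the optical loss and classify the moving entity."""
            peaks, _ = find_peaks(loss_trace, prominence=prominence)
            n = len(peaks)
            if 1 <= n <= 3:
                return n, "human"
            if 5 <= n <= 7:
                return n, "wheeled platform / object"
            return n, "unclassified"

        # Synthetic traces: a footstep-like double bump vs. a multi-wheel rumble
        t = np.linspace(0, 1, 500)
        footstep = np.exp(-((t - 0.4) / 0.05) ** 2) + 0.8 * np.exp(-((t - 0.6) / 0.05) ** 2)
        wheels = sum(np.exp(-((t - c) / 0.02) ** 2) for c in np.linspace(0.2, 0.8, 6))
        print(classify_crossing(footstep))   # e.g. (2, 'human')
        print(classify_crossing(wheels))     # e.g. (6, 'wheeled platform / object')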

  10. Human observers are biased in judging the angular approach of a projectile.

    PubMed

    Welchman, Andrew E; Tuck, Val L; Harris, Julie M

    2004-01-01

    How do we decide whether an object approaching us will hit us? The optic array provides information sufficient for us to determine the approaching trajectory of a projectile. However, when using binocular information, observers report that trajectories near the mid-sagittal plane are wider than they actually are. Here we extend this work to consider stimuli containing additional depth cues. We measure observers' estimates of trajectory direction first for computer rendered, stereoscopically presented, rich-cue objects, and then for real objects moving in the world. We find that, under both rich cue conditions and with real moving objects, observers show positive bias, overestimating the angle of approach when movement is near the mid-sagittal plane. The findings question whether the visual system, using both binocular and monocular cues to depth, can make explicit estimates of the 3-D location and movement of objects in depth.

  11. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  12. Humanoid Mobile Manipulation Using Controller Refinement

    NASA Technical Reports Server (NTRS)

    Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric

    2006-01-01

    An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. In this paper, it is proposed that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.

  13. The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection

    NASA Astrophysics Data System (ADS)

    Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration

    2001-12-01

    We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.

  14. Geometric Theory of Moving Grid Wavefront Sensor

    DTIC Science & Technology

    1977-06-30

    Keywords: Adaptive Optics; Wavefront Sensor; Geometric Optics Analysis; Moving Ronchi Grid. A geometric optics analysis is made for a wavefront sensor that uses a moving Ronchi grid. It is shown that by simple data... optical systems being considered or being developed for imaging an object through a turbulent atmosphere. Some of these use a wavefront sensor to...

  15. Orbital Evolution of Jupiter-family Comets

    NASA Astrophysics Data System (ADS)

    Ipatov, S. I.; Mather, J. C.

    2004-05-01

    The orbital evolution of more than 25,000 Jupiter-family comets (JFCs) under the gravitational influence of planets was studied. After 40 Myr one considered object (with initial orbit close to that of Comet 88P) got aphelion distance Q<3.5 AU, and it moved in orbits with semi-major axis a=2.60-2.61 AU, perihelion distance 1.71.4 AU, Q<2.6 AU, e=0.2-0.3, and i=9-33 deg for 8 Myr (and it had Q<3 AU for 100 Myr). So JFCs can rarely get typical asteroid orbits and move in them for Myrs. In our opinion, it is possible that Comet 133P (Elst-Pizarro), moving in a typical asteroidal orbit, was earlier a JFC and that it circularized its orbit also due to non-gravitational forces. JFCs got near-Earth object (NEO) orbits more often than typical asteroidal orbits. A few JFCs got Earth-crossing orbits with a<2 AU and Q<4.2 AU and moved in such orbits for more than 1 Myr (up to tens or even hundreds of Myrs). Three considered former JFCs even got inner-Earth orbits (with Q<0.983 AU) or Aten orbits for Myrs. The probability of a collision of one of such objects, which move for millions of years inside Jupiter's orbit, with a terrestrial planet can be greater than the analogous total probability for thousands of other objects. Results obtained by the Bulirsch-Stoer method and by a symplectic method were mainly similar (except for probabilities of close encounters with the Sun when they were high). Our results show that the trans-Neptunian belt can provide a significant portion of NEOs, or the number of trans-Neptunian objects migrating inside the solar system could be smaller than it was earlier considered, or most of 1-km former trans-Neptunian objects that had got NEO orbits disintegrated into mini-comets and dust during a smaller part of their dynamical lifetimes if these lifetimes are not small. The obtained results show that during the accumulation of the giant planets the total mass of icy bodies delivered to the Earth could be about the mass of water in Earth's oceans. Several of our papers on this problem were put in http://arXiv.org/format/astro-ph/ (e.g., 0305519, 0308448). This work was supported by NASA (NAG5-10776) and INTAS (00-240).

  16. Hyperspectral Imager-Tracker

    NASA Technical Reports Server (NTRS)

    Agurok, Ilya

    2013-01-01

    The Hyperspectral Imager-Tracker (HIT) is a technique for visualization and tracking of low-contrast, fast-moving objects. The HIT architecture is based on an innovative and only recently developed concept in imaging optics. This innovative architecture will give the Light Prescriptions Innovators (LPI) HIT the ability to simultaneously collect spectral band images (a hyperspectral cube) and IR images, and to operate with high light-gathering power and high magnification for multiple fast-moving objects. Adaptive Spectral Filtering algorithms will efficiently increase the contrast of low-contrast scenes. The most hazardous parts of a space mission are the first stage of a launch and the last 10 kilometers of the landing trajectory. In general, a close watch on spacecraft operation is required at distances up to 70 km. Tracking at such distances is usually associated with the use of radar, but its milliradian angular resolution translates to 100-m spatial resolution at 70-km distance. With sufficient power, radar can track a spacecraft as a whole object, but will not provide detail in the case of an accident, particularly for small debris in the one-meter range, which can only be achieved optically. It will be important to track the debris, which could disintegrate further into more debris, all the way to the ground. Such fragmentation could cause ballistic predictions, based on observations using high-resolution but narrow-field optics for only the first few seconds of the event, to be inaccurate. No optical imager architecture exists to satisfy NASA requirements. The HIT was developed for space vehicle tracking, in-flight inspection, and in the case of an accident, a detailed recording of the event. The system is a combination of five subsystems: (1) a roving fovea telescope with a wide 30° field of regard; (2) narrow, high-resolution fovea field optics; (3) a Coude optics system for telescope output beam stabilization; (4) a hyperspectral-multispectral imaging assembly; and (5) image analysis software with an effective adaptive spectral filtering algorithm for real-time contrast enhancement.

  17. Moving spray-plate center-pivot sprinkler rating index for assessing runoff potential

    USDA-ARS's Scientific Manuscript database

    Numerous moving spray-plate center-pivot sprinklers are commercially available providing a range of drop size distributions and wetted diameters. A means to quantitatively compare sprinkler choices in regards to maximizing infiltration and minimizing runoff is currently lacking. The objective of thi...

  18. Effects of Functional Mobility Skills Training for Adults with Severe Multiple Disabilities

    ERIC Educational Resources Information Center

    Whinnery, Stacie B.; Whinnery, Keith W.

    2011-01-01

    This study investigated the effects of a functional mobility program on the functional standing and walking skills of five adults with developmental disabilities. The Mobility Opportunities Via Education (MOVE) Curriculum was implemented using a multiple-baseline across subjects design. Repeated measures were taken during baseline, intervention…

  19. Using Multiple Grids To Compute Flows

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    1991-01-01

    Paper discusses decomposition of global grids into multiple patched and/or overlaid local grids in computations of fluid flow. Such "domain decomposition" particularly useful in computation of flows about complicated bodies moving relative to each other; for example, flows associated with rotors and stators in turbomachinery and rotors and fuselages in helicopters.

  20. Laying the Foundation for Multiplicative Thinking in Year 2

    ERIC Educational Resources Information Center

    Watson, Kelly

    2016-01-01

    In order for students to move from using concrete materials to using mental strategies and from additive to multiplicative thinking, the use of arrays and visualisation is pivotal. This article describes a lesson in which students are taken through a Concrete-Representational-Abstract (CRA) approach that involves noticing structure, using…

  1. Asynchronous Visualization of Spatiotemporal Information for Multiple Moving Targets

    ERIC Educational Resources Information Center

    Wang, Huadong

    2013-01-01

    In the modern information age, the quantity and complexity of spatiotemporal data is increasing both rapidly and continuously. Sensor systems with multiple feeds that gather multidimensional spatiotemporal data will result in information clusters and overload, as well as a high cognitive load for users of these systems. To meet future…

  2. Multistakeholder Perspectives on the Transition to a Graduate-Level Athletic Training Educational Model

    PubMed Central

    Mazerolle, Stephanie M.; Bowman, Thomas G.; Pitney, William A.

    2015-01-01

    Context  The decision has been made to move away from the traditional bachelor's degree professional program to a master's degree professional program. Little is known about the perceptions about this transition from those involved with education. Objective  To examine multiple stakeholders' perspectives within athletic training education on the effect that a change to graduate-level education could have on the profession and the educational and professional development of the athletic trainer. Design  Qualitative study. Setting  Web-based survey. Patients or Other Participants  A total of 18 athletic training students (6 men, 12 women; age = 24 ± 5 years), 17 athletic training faculty (6 men, 9 women, 2 unspecified; 7 program directors, 5 faculty members, 3 clinical coordinators, 2 unidentified; age = 45 ± 8 years), and 15 preceptors (7 men, 7 women, 1 unspecified; age = 34 ± 7 years) completed the study. Data Collection and Analysis  Participants completed a structured Web-based questionnaire. Each cohort responded to questions matching their roles within an athletic training program. Data were analyzed following a general inductive process. Member checks, multiple-analyst triangulation, and peer review established credibility. Results  Thirty-one (62%) participants supported the transition, 14 (28%) were opposed, and 5 (10%) were neutral or undecided. Advantages of and support for transitioning and disadvantages of and against transitioning emerged. The first higher-order theme, advantages, revealed 4 benefits: (1) alignment of athletic training with other health care professions, (2) advanced coursework and curriculum delivery, (3) improved student and professional retention, and (4) student maturity. The second higher-order theme, disadvantages, was defined by 3 factors: (1) limited time for autonomous practice, (2) financial concerns, and (3) lack of evidence for the transition. Conclusions  Athletic training students, faculty, and preceptors demonstrated moderate support for a transition to the graduate-level model. Factors supporting the move were comparable with those detailed in a recent document on professional education in athletic training presented to the National Athletic Trainers' Association Board of Directors. The concerns about and reasons against a move have been discussed by those in the profession. PMID:26287491

  3. Dynamical friction for supersonic motion in a homogeneous gaseous medium

    NASA Astrophysics Data System (ADS)

    Thun, Daniel; Kuiper, Rolf; Schmidt, Franziska; Kley, Wilhelm

    2016-05-01

    Context. The supersonic motion of gravitating objects through a gaseous ambient medium constitutes a classical problem in theoretical astrophysics. Its application covers a broad range of objects and scales from planetesimals, planets, and all kind of stars up to galaxies and black holes. In particular, the dynamical friction caused by the wake that forms behind the object plays an important role for the dynamics of the system. To calculate the dynamical friction for a particular system, standard formulae based on linear theory are often used. Aims: It is our goal to check the general validity of these formulae and provide suitable expressions for the dynamical friction acting on the moving object, based on the basic physical parameters of the problem: first, the mass, radius, and velocity of the perturber; second, the gas mass density, soundspeed, and adiabatic index of the gaseous medium; and finally, the size of the forming wake. Methods: We perform dedicated sequences of high-resolution numerical studies of rigid bodies moving supersonically through a homogeneous ambient medium and calculate the total drag acting on the object, which is the sum of gravitational and hydrodynamical drag. We study cases without gravity with purely hydrodynamical drag, as well as gravitating objects. In various numerical experiments, we determine the drag force acting on the moving body and its dependence on the basic physical parameters of the problem, as given above. From the final equilibrium state of the simulations, for gravitating objects we compute the dynamical friction by direct numerical integration of the gravitational pull acting on the embedded object. Results: The numerical experiments confirm the known scaling laws for the dependence of the dynamical friction on the basic physical parameters as derived in earlier semi-analytical studies. As a new important result we find that the shock's stand-off distance is revealed as the minimum spatial interaction scale of dynamical friction. Below this radius, the gas settles into a hydrostatic state, which - owing to its spherical symmetry - causes no net gravitational pull onto the moving body. Finally, we derive an analytic estimate for the stand-off distance that can easily be used when calculating the dynamical friction force.
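
    The final step mentioned above, direct numerical integration of the gravitational pull on the embedded body, can be sketched schematically in NumPy: once the gas density field is quasi-steady, the drag is obtained by summing the gravitational attraction of every grid cell on the body along the direction of motion (taken here as the x axis). The grid, the crude wake-like density field, and the softening length are placeholders; the actual simulations in the paper are far more elaborate.

        import numpy as np

        G = 6.674e-8                     # gravitational constant in cgs units

        def dynamical_friction_x(body_mass, body_pos, cell_centers, cell_density, cell_volume, soft=1e10):
            """Sum G*M*rho*dV*(dx/r^3) over all grid cells: net gravitational pull along x."""
            d = cell_centers - body_pos                        # vectors from the body to each cell
            r2 = np.sum(d ** 2, axis=1) + soft ** 2            # softened squared distances
            fx = G * body_mass * cell_density * cell_volume * d[:, 0] / r2 ** 1.5
            return fx.sum()

        # Placeholder density field: a crude overdense wake trailing the body along -x
        n = 40
        axis = np.linspace(-1e13, 1e13, n)
        xx, yy, zz = np.meshgrid(axis, axis, axis, indexing="ij")
        centers = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])
        rho = np.full(centers.shape[0], 1e-18)                # ambient density (g/cm^3)
        rho[(centers[:, 0] < 0) & (np.hypot(centers[:, 1], centers[:, 2]) < 2e12)] *= 3.0  # wake
        dV = (axis[1] - axis[0]) ** 3

        F = dynamical_friction_x(2e33, np.zeros(3), centers, rho, dV)
        print(f"{F:.3e} dyn (negative = drag opposing motion along +x)")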

  4. Progressive Elaboration and Cross-Validation of a Latent Class Typology of Adolescent Alcohol Involvement in a National Sample

    PubMed Central

    Donovan, John E.; Chung, Tammy

    2015-01-01

    Objective: Most studies of adolescent drinking focus on single alcohol use behaviors (e.g., high-volume drinking, drunkenness) and ignore the patterning of adolescents’ involvement across multiple alcohol behaviors. The present latent class analyses (LCAs) examined a procedure for empirically determining multiple cut points on the alcohol use behaviors in order to establish a typology of adolescent alcohol involvement. Method: LCA was carried out on six alcohol use behavior indicators collected from 6,504 7th through 12th graders who participated in Wave I of the National Longitudinal Study of Adolescent Health (AddHealth). To move beyond dichotomous indicators, a “progressive elaboration” strategy was used, starting with six dichotomous indicators and then evaluating a series of models testing additional cut points on the ordinal indicators at progressively higher points for one indicator at a time. Analyses were performed on one random half-sample, and confirmatory LCAs were performed on the second random half-sample and in the Wave II data. Results: The final model consisted of four latent classes (never or non–current drinkers, low-intake drinkers, non–problem drinkers, and problem drinkers). Confirmatory LCAs in the second random half-sample from Wave I and in Wave II support this four-class solution. The means on the four latent classes were also generally ordered on an array of measures reflecting psychosocial risk for problem behavior. Conclusions: These analyses suggest that there may be four different classes or types of alcohol involvement among adolescents, and, more importantly, they illustrate the utility of the progressive elaboration strategy for moving beyond dichotomous indicators in latent class models. PMID:25978828

  5. A serendipitous all sky survey for bright objects in the outer solar system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, M. E.; Drake, A. J.; Djorgovski, S. G.

    2015-02-01

    We use seven years' worth of observations from the Catalina Sky Survey and the Siding Spring Survey, covering most of the northern and southern hemispheres at galactic latitudes higher than 20°, to search for serendipitously imaged moving objects in the outer solar system. These slowly moving objects would appear as stationary transients in these fast-cadence asteroid surveys, so we develop methods to discover objects in the outer solar system using individual observations spaced by months, rather than by hours, as is typically done. While we independently discover eight known bright objects in the outer solar system, the faintest having V = 19.8 ± 0.1, no new objects are discovered. We find that the survey is nearly 100% efficient at detecting objects beyond 25 AU for V ≲ 19.1 (V ≲ 18.6 in the southern hemisphere) and that the probability that one or more outer solar system objects of this brightness remain to be discovered in the unsurveyed regions of the galactic plane is approximately 32%.

  6. Sensorimotor Synchronization with Different Metrical Levels of Point-Light Dance Movements

    PubMed Central

    Su, Yi-Huang

    2016-01-01

    Rhythm perception and synchronization have been extensively investigated in the auditory domain, as they underlie means of human communication such as music and speech. Although recent studies suggest comparable mechanisms for synchronizing with periodically moving visual objects, the extent to which it applies to ecologically relevant information, such as the rhythm of complex biological motion, remains unknown. The present study addressed this issue by linking rhythm of music and dance in the framework of action-perception coupling. As a previous study showed that observers perceived multiple metrical periodicities in dance movements that embodied this structure, the present study examined whether sensorimotor synchronization (SMS) to dance movements resembles what is known of auditory SMS. Participants watched a point-light figure performing two basic steps of Swing dance cyclically, in which the trunk bounced at every beat and the limbs moved at every second beat, forming two metrical periodicities. Participants tapped synchronously to the bounce of the trunk with or without the limbs moving in the stimuli (Experiment 1), or tapped synchronously to the leg movements with or without the trunk bouncing simultaneously (Experiment 2). Results showed that, while synchronization with the bounce (lower-level pulse) was not influenced by the presence or absence of limb movements (metrical accent), synchronization with the legs (beat) was improved by the presence of the bounce (metrical subdivision) across different movement types. The latter finding parallels the “subdivision benefit” often demonstrated in auditory tasks, suggesting common sensorimotor mechanisms for visual rhythms in dance and auditory rhythms in music. PMID:27199709

  7. A fast numerical method for ideal fluid flow in domains with multiple stirrers

    NASA Astrophysics Data System (ADS)

    Nasser, Mohamed M. S.; Green, Christopher C.

    2018-03-01

    A collection of arbitrarily-shaped solid objects, each moving at a constant speed, can be used to mix or stir ideal fluid, and can give rise to interesting flow patterns. Assuming these systems of fluid stirrers are two-dimensional, the mathematical problem of resolving the flow field—given a particular distribution of any finite number of stirrers of specified shape and speed—can be formulated as a Riemann-Hilbert (R-H) problem. We show that this R-H problem can be solved numerically using a fast and accurate algorithm for any finite number of stirrers based around a boundary integral equation with the generalized Neumann kernel. Various systems of fluid stirrers are considered, and our numerical scheme is shown to handle highly multiply connected domains (i.e. systems of many fluid stirrers) with minimal computational expense.

  8. Portable hand hold device

    NASA Technical Reports Server (NTRS)

    Redmon, Jr., John W. (Inventor); McQueen, Donald H. (Inventor); Sanders, Fred G. (Inventor)

    1990-01-01

    A hand hold device (A) includes a housing (10) having a hand hold (14) and clamping brackets (32,34) for grasping and handling an object. A drive includes drive lever (23), spur gear (22), and rack gears (24,26) carried on rods (24a, 26a) for moving the clamping brackets. A lock includes ratchet gear (40) and pawl (42) biased between lock and unlock positions by a cantilever spring (46,48) and moved by handle (54). Compliant grip pads (32b, 34b) provide compliance to lock, unlock, and hold an object between the clamp brackets.

  9. Development of a Standard to Objectively Define and Measure the End-to-End Quality of Teleconferencing/Videophone Systems

    DTIC Science & Technology

    1991-02-01

    ...lines; and edge busyness, wherein the position of the edge appears to be moving when there is a rapid signal change... Some of the most important new and changed factors are as follows: motion must be introduced as a most important feature; motion artifacts must be... nominal audio level (measured to ground). Edge busyness: the deterioration of motion video such that the outlines of moving objects are displayed with...

  10. Parallel Flux Tensor Analysis for Efficient Moving Object Detection

    DTIC Science & Technology

    2011-07-01

    ...incremental computing as well as parallelization to enable real-time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision... We use the trace of the flux tensor matrix, referred to as Tr J_F, defined as

        Tr J_F = \int_{\Omega} W(x - y)\,\bigl(I_{xt}^{2}(y) + I_{yt}^{2}(y) + I_{tt}^{2}(y)\bigr)\, dy
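
    A minimal sketch of how this trace could be computed on a stack of grayscale frames is given below (a NumPy/SciPy illustration with an arbitrary averaging window; the exact derivative filters and weighting used in the report are not reproduced):

        # Hedged sketch: trace of the flux tensor for moving-object detection.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def flux_tensor_trace(frames, window=5):
            """frames: array of shape (T, H, W); returns per-pixel Tr(J_F) for the middle frame."""
            f = frames.astype(np.float64)
            It = np.gradient(f, axis=0)            # temporal derivative
            Ix = np.gradient(f, axis=2)            # spatial derivatives
            Iy = np.gradient(f, axis=1)
            Ixt = np.gradient(Ix, axis=0)          # mixed second derivatives
            Iyt = np.gradient(Iy, axis=0)
            Itt = np.gradient(It, axis=0)
            t = f.shape[0] // 2
            # Local averaging with a uniform window stands in for the weighted integral over Omega.
            return (uniform_filter(Ixt[t] ** 2, window)
                    + uniform_filter(Iyt[t] ** 2, window)
                    + uniform_filter(Itt[t] ** 2, window))

        # Usage: motion_mask = flux_tensor_trace(frames) > threshold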

  11. Visual control of prey-capture flight in dragonflies.

    PubMed

    Olberg, Robert M

    2012-04-01

    Interacting with a moving object poses a computational problem for an animal's nervous system. This problem has been elegantly solved by the dragonfly, a formidable visual predator on flying insects. The dragonfly computes an interception flight trajectory and steers to maintain it during its prey-pursuit flight. This review summarizes current knowledge about pursuit behavior and neurons thought to control interception in the dragonfly. When understood, this system has the potential for explaining how a small group of neurons can control complex interactions with moving objects. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Visual Search for Motion-Form Conjunctions: Selective Attention to Movement Direction.

    PubMed

    Von Mühlenen, Adrian; Müller, Hermann J

    1999-07-01

    In 2 experiments requiring visual search for conjunctions of motion and form, the authors reinvestigated whether motion-based filtering (e.g., P. McLeod, J. Driver, Z. Dienes, & J. Crisp, 1991) is direction selective and whether cuing of the target direction promotes efficient search performance. In both experiments, the authors varied the number of movement directions in the display and the predictability of the target direction. Search was less efficient when items moved in multiple (2, 3, and 4) directions as compared with just 1 direction. Furthermore, precuing of the target direction facilitated the search, even with "wrap-around" displays, relatively more when items moved in multiple directions. The authors proposed 2 principles to explain that pattern of effects: (a) interference on direction computation between items moving in different directions (e.g., N. Qian & R. A. Andersen, 1994) and (b) selective direction tuning of motion detectors involving a receptive-field contraction (cf. J. Moran & R. Desimone, 1985; S. Treue & J. H. R. Maunsell, 1996).

  13. New Evidence for the Dynamical Decay of a Multiple System in the Orion Kleinmann–Low Nebula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luhman, K. L.; Robberto, M.; Gabellini, M. Giulia Ubeira

    We have measured astrometry for members of the Orion Nebula Cluster with images obtained in 2015 with the Wide Field Camera 3 on board the Hubble Space Telescope. By comparing those data to previous measurements with the Near-Infrared Camera and Multi-Object Spectrometer on Hubble in 1998, we have discovered that a star in the Kleinmann–Low Nebula, source x from Lonsdale et al., is moving with an unusually high proper motion of 29 mas yr⁻¹, which corresponds to 55 km s⁻¹ at the distance of Orion. Previous radio observations have found that three other stars in the Kleinmann–Low Nebula (the Becklin–Neugebauer object and sources I and n) have high proper motions (5–14 mas yr⁻¹) and were near a single location ∼540 years ago, and thus may have been members of a multiple system that dynamically decayed. The proper motion of source x is consistent with ejection from that same location 540 years ago, which provides strong evidence that the dynamical decay did occur and that the runaway star BN originated in the Kleinmann–Low Nebula rather than the nearby Trapezium cluster. However, our constraint on the motion of source n is significantly smaller than the most recent radio measurement, which indicates that it did not participate in the event that ejected the other three stars.

  14. New Evidence for the Dynamical Decay of a Multiple System in the Orion Kleinmann-Low Nebula

    NASA Astrophysics Data System (ADS)

    Luhman, K. L.; Robberto, M.; Tan, J. C.; Andersen, M.; Giulia Ubeira Gabellini, M.; Manara, C. F.; Platais, I.; Ubeda, L.

    2017-03-01

    We have measured astrometry for members of the Orion Nebula Cluster with images obtained in 2015 with the Wide Field Camera 3 on board the Hubble Space Telescope. By comparing those data to previous measurements with the Near-Infrared Camera and Multi-Object Spectrometer on Hubble in 1998, we have discovered that a star in the Kleinmann-Low Nebula, source x from Lonsdale et al., is moving with an unusually high proper motion of 29 mas yr-1, which corresponds to 55 km s-1 at the distance of Orion. Previous radio observations have found that three other stars in the Kleinmann-Low Nebula (the Becklin-Neugebauer object and sources I and n) have high proper motions (5-14 mas yr-1) and were near a single location ˜540 years ago, and thus may have been members of a multiple system that dynamically decayed. The proper motion of source x is consistent with ejection from that same location 540 years ago, which provides strong evidence that the dynamical decay did occur and that the runaway star BN originated in the Kleinmann-Low Nebula rather than the nearby Trapezium cluster. However, our constraint on the motion of source n is significantly smaller than the most recent radio measurement, which indicates that it did not participate in the event that ejected the other three stars. Based on observations made with the NASA/ESA Hubble Space Telescope and the NASA Infrared Telescope Facility.

  15. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    ...Sound Source Localization, Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  16. Small Body Populations According to NEOWISE

    NASA Astrophysics Data System (ADS)

    Mainzer, A.

    The Wide-field Infrared Survey Explorer (WISE) surveyed the entire sky in four infrared wavelengths (3.4, 4.6, 12 and 22 microns) over the course of one year. From its sun-synchronous orbit, WISE imaged the entire sky multiple times with significant improvements in spatial resolution and sensitivity over its predecessor, the Infrared Astronomical Satellite. Enhancements to the WISE science data processing pipeline to support solar system science, collectively known as NEOWISE, enabled the individual exposures to be archived and new moving objects to be discovered. When the solid hydrogen used to cool the 12 and 22 micron detectors and telescope was depleted, NASA supported the continuation of the survey in the 3.4 and 4.6 micron bands for an additional four months to search for near-Earth objects and to complete a survey of the inner solar system. In total, NEOWISE detected more than 158,000 minor planets, including >34,000 new discoveries. This mid-infrared synoptic survey has resulted in a range of scientific investigations throughout our solar system and beyond. Following one year of survey operations, the WISE spacecraft was put into hibernation in early 2011. NASA has recently opted to resurrect the mission as NEOWISE for the purpose of discovering and characterizing near-Earth objects.

  17. SU-E-T-64: A Programmable Moving Insert for the ArcCHECK Phantom for Dose Verification of Respiratory-Gated VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaede, S; Jordan, K; Western University, London, ON

    Purpose: To present a customized programmable moving insert for the ArcCHECK™ phantom that can, in a single delivery, check entrance dosimetry while simultaneously verifying the delivery of respiratory-gated VMAT. Methods: The cylindrical motion phantom uses a computer-controlled stepping motor to move an insert inside a stationary sleeve. Insert motion is programmable and can include rotational motion in addition to linear motion along the axis of the cylinder. The sleeve fits securely in the bore of the ArcCHECK™. Interchangeable inserts, including an A1SL chamber, optically stimulated luminescence dosimeters, radiochromic film, or 3D gels, allow this combination to be used for commissioning, routine quality assurance, and patient-specific dosimetric verification of respiratory-gated VMAT. Before clinical implementation, the effect of a moving insert on the ArcCHECK™ measurements was considered. First, the measured dose to the ArcCHECK™ containing multiple inserts in the static position was compared to the calculated dose during multiple VMAT treatment deliveries. Then, dose was measured under both sinusoidal and real-patient motion conditions to determine any effect of the moving inserts on the ArcCHECK™ measurements. Finally, dose was measured during gated VMAT delivery to the same inserts under the same motion conditions to examine any effect of beam on/off switching and dose-rate ramp-up and ramp-down. Multiple comparisons between measured and calculated dose to different inserts were also considered. Results: The pass rate for the static delivery exceeded 98% for all measurements (3%/3mm), suggesting a valid setup for entrance dosimetry. The pass rate was not altered for any measurement delivered under motion conditions. A similar result was observed under gated VMAT conditions, including agreement of measured and calculated dose to the various inserts. Conclusion: Incorporating a programmable moving insert within the ArcCHECK™ phantom provides an efficient verification of respiratory-gated VMAT delivery that is useful during commissioning, routine quality assurance, and patient-specific dose verification. Prototype phantom development and testing was performed in collaboration with Modus Medical Devices Inc. (London, ON). No financial support was granted.

  18. Predictability and Robustness in the Manipulation of Dynamically Complex Objects

    PubMed Central

    Hasson, Christopher J.

    2017-01-01

    Manipulation of complex objects and tools is a hallmark of many activities of daily living, but how the human neuromotor control system interacts with such objects is not well understood. Even the seemingly simple task of transporting a cup of coffee without spilling creates complex interaction forces that humans need to compensate for. Predicting the behavior of an underactuated object with nonlinear fluid dynamics based on an internal model appears daunting. Hence, this research tests the hypothesis that humans learn strategies that make interactions predictable and robust to inaccuracies in neural representations of object dynamics. The task of moving a cup of coffee is modeled with a cart-and-pendulum system that is rendered in a virtual environment, where subjects interact with a virtual cup with a rolling ball inside using a robotic manipulandum. To gain insight into human control strategies, we operationalize predictability and robustness to permit quantitative theory-based assessment. Predictability is quantified by the mutual information between the applied force and the object dynamics; robustness is quantified by the energy margin away from failure. Three studies are reviewed that show how with practice subjects develop movement strategies that are predictable and robust. Alternative criteria, common for free movement, such as maximization of smoothness and minimization of force, do not account for the observed data. As manual dexterity is compromised in many individuals with neurological disorders, the experimental paradigm and its analyses are a promising platform to gain insights into neurological diseases, such as dystonia and multiple sclerosis, as well as healthy aging. PMID:28035560
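
    The predictability metric described above can be illustrated with a small sketch that estimates mutual information from binned time series (the histogram-based estimator, bin count, and variable names are assumptions chosen for illustration, not the authors' implementation):

        # Hedged sketch: histogram-based mutual information between two 1-D series,
        # e.g. applied force versus the ball's state in the cart-and-pendulum model.
        import numpy as np

        def mutual_information(x, y, bins=16):
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0                     # avoid log(0)
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

        # Usage: mi = mutual_information(force_trace, ball_angle_trace)  # in nats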

  19. Reduced depth inversion illusions in schizophrenia are state-specific and occur for multiple object types and viewing conditions.

    PubMed

    Keane, Brian P; Silverstein, Steven M; Wang, Yushi; Papathomas, Thomas V

    2013-05-01

    Schizophrenia patients are less susceptible to depth inversion illusions (DIIs) in which concave faces appear as convex, but what stimulus attributes generate this effect and how does it vary with clinical state? To address these issues, we had 30 schizophrenia patients and 25 well-matched healthy controls make convexity judgments on physically concave faces and scenes. Patients were selectively sampled from three levels of care to ensure symptom heterogeneity. Half of the concave objects were painted with realistic texture to enhance the convexity illusion; the remaining objects were painted uniform beige to reduce the illusion. Subjects viewed the objects with one eye while laterally moving in front of the stimulus (to see depth via motion parallax) or with two eyes while remaining motionless (to see depth stereoscopically). For each group, DIIs were stronger with texture than without, and weaker with stereoscopic information than without, indicating that patients responded normally to stimulus alterations. More importantly, patients experienced fewer illusions than controls irrespective of the face/scene category, texture, or viewing condition (parallax/stereo). Illusions became less frequent as patients experienced more positive symptoms and required more structured treatment. Taken together, these results indicate that people with schizophrenia experience fewer DIIs with a variety of object types and viewing conditions, perhaps because of a lessened tendency to construe any type of object as convex. Moreover, positive symptoms and the need for structured treatment are associated with more accurate 3-D perception, suggesting that DII may serve as a state marker for the illness. © 2013 American Psychological Association

  20. Mervyn's Moving Mission.

    ERIC Educational Resources Information Center

    2001

    This teacher's resource packet includes a number of items designed to support teachers in the classroom before and after visiting Mervyn's Moving Mission. The packet includes eight sections: (1) welcome letter in English and Spanish; (2) summary timeline of California mission events in English and Spanish; (3) objectives and curriculum links; (4)…

  1. Laplacean Ideology for Preliminary Orbit Determination and Moving Celestial Body Identification in Virtual Epoch

    NASA Astrophysics Data System (ADS)

    Bykov, O. P.

    CCD frames containing stars, galaxies, clusters, and other images should be searched for moving celestial objects, namely asteroids, comets, and artificial Earth satellites. At Pulkovo Astronomical Observatory, new methods and software were developed to solve this problem.

  2. Optimization for Guitar Fingering on Single Notes

    NASA Astrophysics Data System (ADS)

    Itoh, Masaru; Hayashida, Takumi

    This paper presents an optimization method for guitar fingering. The fingering is to determine a unique combination of string, fret, and finger corresponding to each note. The method aims to generate the best fingering pattern for guitar robots rather than for beginners, and it can be applied to any musical score consisting of single notes. A fingering action can be decomposed into three motions: pressing a string, releasing a string, and moving the fretting hand. The cost of moving the hand is estimated on the basis of the Manhattan distance, which is the sum of the distances along the fret and string directions. The objective is to minimize the total fingering cost, subject to fret, string, and finger constraints. As the sequence of notes on the score forms a line in time, the optimization of guitar fingering can be cast as a multistage decision problem, for which dynamic programming is exceedingly effective. A level concept is introduced into the rendering states so that multiple DP solutions lead to a unique one during the DP backward process. For example, if two fingerings have the same cost at different states on a stage, then the low position takes precedence over the high position, and the index finger over the middle finger.
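
    A minimal dynamic-programming sketch of this kind of multistage fingering optimization is shown below (the state representation, Manhattan-distance cost, and candidate lists are simplified placeholders, not the authors' exact formulation):

        # Hedged sketch: pick one (string, fret, finger) state per note so that the
        # accumulated hand-movement cost (Manhattan distance over fret and string axes)
        # is minimized. candidates_per_note[i] is a list of feasible states for note i.
        def optimal_fingering(candidates_per_note):
            def move_cost(a, b):
                return abs(a["fret"] - b["fret"]) + abs(a["string"] - b["string"])

            # cost[i][j]: best cost of reaching state j at note i; back[i][j]: best predecessor.
            cost = [[0.0] * len(candidates_per_note[0])]
            back = [[None] * len(candidates_per_note[0])]
            for i in range(1, len(candidates_per_note)):
                row_cost, row_back = [], []
                for state in candidates_per_note[i]:
                    best = min(range(len(candidates_per_note[i - 1])),
                               key=lambda k: cost[i - 1][k]
                               + move_cost(candidates_per_note[i - 1][k], state))
                    row_cost.append(cost[i - 1][best]
                                    + move_cost(candidates_per_note[i - 1][best], state))
                    row_back.append(best)
                cost.append(row_cost)
                back.append(row_back)

            # Backward pass: recover one optimal state sequence.
            j = min(range(len(cost[-1])), key=lambda k: cost[-1][k])
            path = [j]
            for i in range(len(candidates_per_note) - 1, 0, -1):
                j = back[i][j]
                path.append(j)
            path.reverse()
            return [candidates_per_note[i][j] for i, j in enumerate(path)]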

  3. Health sciences libraries building survey, 1999–2009

    PubMed Central

    Ludwig, Logan

    2010-01-01

    Objective: A survey was conducted of health sciences libraries to obtain information about newer buildings, additions, remodeling, and renovations. Method: An online survey was developed, and announcements of survey availability posted to three major email discussion lists: Medical Library Association (MLA), Association of Academic Health Sciences Libraries (AAHSL), and MEDLIB-L. Previous discussions of library building projects on email discussion lists, a literature review, personal communications, and the author's consulting experiences identified additional projects. Results: Seventy-eight health sciences library building projects at seventy-three institutions are reported. Twenty-two are newer facilities built within the last ten years; two are space expansions; forty-five are renovation projects; and nine are combinations of new and renovated space. Six institutions report multiple or ongoing renovation projects during the last ten years. Conclusions: The survey results confirm a continuing migration from print-based to digitally based collections and reveal trends in library space design. Some health sciences libraries report loss of space as they move toward creating space for “community” building. Libraries are becoming more proactive in using or retooling space for concentration, collaboration, contemplation, communication, and socialization. All are moving toward a clearer operational vision of the library as the institution's information nexus and not merely as a physical location with print collections. PMID:20428277

  4. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features that make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the missing possibility to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  5. Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera

    DTIC Science & Technology

    2006-01-01

    ...map the Euclidean position of static landmarks or visual features in the environment. Recent applications of this technique include aerial... "...From Motion in a Piecewise Planar Environment," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508, 1988. J. M. Ferryman, S. J. Maybank, and A. D. Worrall, "Visual Surveillance for Moving Vehicles," International Journal of Computer Vision, Vol. 37.

  6. Method for separating video camera motion from scene motion for constrained 3D displacement measurements

    NASA Astrophysics Data System (ADS)

    Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.

    2014-09-01

    Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases it is practically impossible to prevent camera motion, for instance when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means of measuring the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object, for instance a point on the ground that is known to be stationary. The reference camera is calibrated by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These calibration data map camera movement to apparent scene movement in pixel space, and this mapping is subsequently used to remove the camera movement from the scene measurements.
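
    The subtraction step described above can be illustrated with a small sketch (it assumes a linear least-squares mapping from reference-camera displacement to apparent scene displacement, fitted on frames where the scene object is known to be stationary; this is an illustration, not the authors' implementation):

        # Hedged sketch: remove camera-induced apparent motion from scene displacement
        # measurements using a reference camera rigidly attached to the scene camera.
        import numpy as np

        def fit_camera_to_scene_map(ref_disp_calib, scene_disp_calib):
            """Fit a 2x2 linear map A with scene ~ ref @ A.T on calibration frames
            where the scene object is stationary, so all apparent scene motion is
            caused by camera motion. Inputs have shape (N, 2), in pixels."""
            x, *_ = np.linalg.lstsq(ref_disp_calib, scene_disp_calib, rcond=None)
            return x.T

        def corrected_scene_motion(ref_disp, scene_disp, A):
            """Subtract the camera-induced component predicted from the reference camera."""
            return scene_disp - ref_disp @ A.T

        # Usage:
        # A = fit_camera_to_scene_map(ref_calib, scene_calib)
        # true_motion = corrected_scene_motion(ref_track, scene_track, A)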

  7. Dynamic coupling of regional atmosphere to biosphere in the new generation regional climate system model REMO-iMOVE

    NASA Astrophysics Data System (ADS)

    Wilhelm, C.; Rechid, D.; Jacob, D.

    2013-05-01

    The main objective of this study is the coupling of the regional climate model REMO to a third-generation land surface scheme and the evaluation of the new model version of REMO, called REMO with interactive MOsaic-based VEgetation: REMO-iMOVE. Attention is paid to documenting the technical aspects of the new model constituents and the coupling mechanism. We compare simulation results of REMO-iMOVE and of the reference version REMO2009 to investigate the sensitivity of the regional model to the new land surface scheme. An 11-year climate model run (1995-2005) over Europe at 0.44° resolution, forced with ECMWF ERA-Interim lateral boundary conditions, was carried out with both model versions to represent present-day European climate. The results of these experiments are compared to multiple temperature, precipitation, heat flux, and leaf area index observation data sets to determine the differences between the model versions. The new model version can also model net primary productivity for the given plant functional types; this new feature is thoroughly evaluated against literature values of net primary productivity of different plant species in European climatic regions. REMO-iMOVE models the European climate with the same quality as the parent model version REMO2009. The differences in the results of the two model versions stem from differences in the dynamics of vegetation cover and density and can be distinct in some regions, owing to the influence of these parameters on the surface heat and moisture fluxes. The modeled inter-annual variability in phenology as well as net primary productivity lies in the range of observations and literature values for most European regions. This study also reveals the need for a more sophisticated soil moisture representation in the newly developed REMO-iMOVE to be able to treat the differences between plant functional types; this becomes especially important if the model is to be used in dynamic vegetation studies.

  8. Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.

    PubMed

    Lee, Donghwa; Myung, Hyun

    2014-07-11

    In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. The low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in the low dynamic environments, robots have difficulty recognizing the repositioning of objects unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environments then cause groups of false loop closing when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, that represent robot poses, are grouped according to the grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
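
    The pruning idea can be illustrated with a much-simplified sketch (the temporal grouping rule and mean-residual test below are generic stand-ins for illustration; the paper's covariance-based grouping and pose-graph back-end are not reproduced):

        # Hedged sketch: prune suspect loop-closure constraints from a pose graph by
        # grouping nodes and rejecting constraint groups whose residual error is large.
        from collections import defaultdict

        def group_nodes(node_times, gap=5.0):
            """Group temporally adjacent nodes; a real system would also use pose covariances."""
            groups, current, last_t = [], [0], node_times[0]
            for i, t in enumerate(node_times[1:], start=1):
                if t - last_t > gap:
                    groups.append(current)
                    current = []
                current.append(i)
                last_t = t
            groups.append(current)
            return groups

        def prune_constraints(constraints, residual_fn, groups, max_error=1.0):
            """Keep loop-closure constraints only if the mean residual of all constraints
            linking the same pair of node groups stays below max_error."""
            group_of = {n: g for g, members in enumerate(groups) for n in members}
            by_pair = defaultdict(list)
            for c in constraints:
                by_pair[(group_of[c["from"]], group_of[c["to"]])].append(c)
            kept = []
            for cs in by_pair.values():
                if sum(residual_fn(c) for c in cs) / len(cs) <= max_error:
                    kept.extend(cs)
            return kept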

  9. Reference Directions and Reference Objects in Spatial Memory of a Briefly Viewed Layout

    ERIC Educational Resources Information Center

    Mou, Weimin; Xiao, Chengli; McNamara, Timothy P.

    2008-01-01

    Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary…

  10. Magnetically Operated Holding Plate And Ball-Lock Pin

    NASA Technical Reports Server (NTRS)

    Monford, Leo G., Jr.

    1992-01-01

    Magnetically operated holding plate and ball-locking-pin mechanism part of object attached to, or detached from second object. Mechanism includes tubular housing inserted in hole in second object. Plunger moves inside tube forcing balls to protrude from sides. Balls prevent tube from sliding out of second object. Simpler, less expensive than motorized latches; suitable for robotics applications.

  11. [Model of multiple seasonal autoregressive integrated moving average model and its application in prediction of the hand-foot-mouth disease incidence in Changsha].

    PubMed

    Tan, Ting; Chen, Lizhang; Liu, Fuqiang

    2014-11-01

    To establish a multiple seasonal autoregressive integrated moving average (ARIMA) model for the hand-foot-mouth disease incidence in Changsha, and to explore the feasibility of the multiple seasonal ARIMA model in predicting the hand-foot-mouth disease incidence. EVIEWS 6.0 was used to establish the multiple seasonal ARIMA model from the hand-foot-mouth disease incidence from May 2008 to August 2013 in Changsha; the incidence data from September 2013 to February 2014 served as the validation samples for the model, and the errors between the forecast incidence and the observed values were compared. Finally, the incidence of hand-foot-mouth disease from March 2014 to August 2014 was predicted by the model. After the data sequence was made stationary and handled by model identification and model diagnosis, the multiple seasonal ARIMA (1, 0, 1)×(0, 1, 1)12 model was established. The R2 value of the model fit was 0.81, the root mean square prediction error was 8.29, and the mean absolute error was 5.83. The multiple seasonal ARIMA model is a good prediction model with a good fit, and it can provide a reference for the prevention and control of hand-foot-mouth disease.
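
    For readers who want to reproduce this type of model on their own monthly counts, a minimal sketch with statsmodels is given below (the variable names and the 6-month forecast horizon are illustrative; the original study used EVIEWS 6.0):

        # Hedged sketch: fit a multiplicative seasonal ARIMA (1,0,1)x(0,1,1)_12, the
        # structure reported in the abstract, to a monthly incidence series and forecast.
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        def fit_and_forecast(monthly_incidence: pd.Series, steps: int = 6):
            model = SARIMAX(monthly_incidence,
                            order=(1, 0, 1),               # non-seasonal (p, d, q)
                            seasonal_order=(0, 1, 1, 12))  # seasonal (P, D, Q, s)
            fitted = model.fit(disp=False)
            return fitted.forecast(steps=steps)

        # Usage: forecast = fit_and_forecast(incidence_series)  # next 6 months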

  12. Simultaneously Discovering and Localizing Common Objects in Wild Images.

    PubMed

    Wang, Zhenzhen; Yuan, Junsong

    2018-09-01

    Motivated by the recent success of supervised and weakly supervised common object discovery, in this paper we move one step further to tackle common object discovery in a fully unsupervised way. Generally, object co-localization aims at simultaneously localizing objects of the same class across a group of images. Traditional object localization/detection usually trains specific object detectors which require bounding box annotations of object instances, or at least image-level labels to indicate the presence/absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method is to simultaneously discover images that contain common objects and also localize common objects in corresponding images. Without requiring knowledge of the total number of common objects, we formulate this unsupervised object discovery as a sub-graph mining problem from a weighted graph of object proposals, where nodes correspond to object proposals, and edges represent the similarities between neighbouring proposals. The positive images and common objects are jointly discovered by finding sub-graphs of strongly connected nodes, with each sub-graph capturing one object pattern. The optimization problem can be efficiently solved by our proposed maximal-flow-based algorithm. Instead of assuming that each image contains only one common object, our proposed solution can better address wild images where each image may contain multiple common objects or even no common object. Moreover, our proposed method can be easily tailored to the task of image retrieval in which the nodes correspond to the similarity between query and reference images. Extensive experiments on PASCAL VOC 2007 and Object Discovery data sets demonstrate that even without any supervision, our approach can discover/localize common objects of various classes in the presence of scale and viewpoint changes, appearance variation, and partial occlusions. We also conduct broad experiments on image retrieval benchmarks, the Holidays and Oxford5k data sets, to show that our proposed method, which considers both the similarity between query and reference images and also similarities among reference images, can help to improve the retrieval results significantly.
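
    The graph formulation can be pictured with a deliberately simplified sketch (connected components of a thresholded proposal-similarity graph stand in here for the sub-graphs found by the paper's maximal-flow-based algorithm; thresholds and sizes are illustrative):

        # Hedged sketch: simplified graph-based common-object discovery.
        # Nodes are object proposals; edges connect proposals with similar appearance.
        import networkx as nx

        def discover_common_objects(similarities, threshold=0.7, min_size=3):
            """similarities: dict mapping (proposal_i, proposal_j) -> similarity in [0, 1].
            Returns lists of proposal ids, one list per discovered object pattern."""
            g = nx.Graph()
            for (i, j), s in similarities.items():
                if s >= threshold:
                    g.add_edge(i, j, weight=s)
            # Each sufficiently large connected component is treated as one object pattern.
            return [sorted(c) for c in nx.connected_components(g) if len(c) >= min_size]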

  13. A Rapidly Moving Shell in the Orion Nebula

    NASA Technical Reports Server (NTRS)

    Walter, Donald K.; O'Dell, C. R.; Hu, Xihai; Dufour, Reginald J.

    1995-01-01

    A well-resolved elliptical shell in the inner Orion Nebula has been investigated by monochromatic imaging plus high- and low-resolution spectroscopy. We find that it is of low ionization and the two bright ends are moving at -39 and -49 km/s with respect to OMC-1. There is no central object, even in the infrared J bandpass although H2 emission indicates a possible association with the nearby very young pre-main-sequence star J&W 352, which is one of the youngest pre-main-sequence stars in the inner Orion Nebula. Many of the characteristics of this object (low ionization, blue shift) are like those of the Herbig-Haro objects, although the symmetric form would make it an unusual member of that class.

  14. Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus

    NASA Astrophysics Data System (ADS)

    Baylou, P.; Amor, B. El Hadj; Bousseau, G.

    1983-10-01

    After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the images of mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon was performed in order to determine the modifications concerning object shapes, thresholding levels, and decision parameters as a function of robot speed.

  15. Management of thinned Emory oak coppice for multiple resource benefits

    Treesearch

    D. Catlow Shipek; Peter F. Ffolliott

    2005-01-01

    Managers are increasingly moving toward an ecosystem-based, multiple-use approach in managing Emory oak woodlands in the Southwestern United States. Often of particular interest is managing the coppice that evolves from earlier fuelwood harvesting activities. Emory oak (Quercus emoryi) is a prolific sprouting species and, as a consequence, post-...

  16. Diode probes for spatiotemporal optical control of multiple neurons in freely moving animals

    PubMed Central

    Koos, Tibor; Buzsáki, György

    2012-01-01

    Neuronal control with high temporal precision is possible with optogenetics, yet currently available methods do not enable independent control of multiple locations in the brains of freely moving animals. Here, we describe a diode-probe system that allows real-time, location-specific control of neuronal activity at multiple sites. Manipulation of neuronal activity in arbitrary spatiotemporal patterns is achieved by means of an optoelectronic array, manufactured by attaching multiple diode-fiber assemblies to high-density silicon probes or wire tetrodes and implanted into the brains of animals expressing light-responsive opsins. Each diode can be controlled separately, allowing localized light stimulation of neuronal activators and silencers in any temporal configuration and concurrent recording of the stimulated neurons. Because the only connections to the animal are via a highly flexible wire cable, unimpeded behavior is possible during circuit monitoring and multisite perturbations in the intact brain. The capacity of the system to generate unique neural activity patterns facilitates multisite manipulation of neural circuits in a closed-loop manner and opens the door to addressing novel questions. PMID:22496529

  17. Responses of horses offered a choice between stables containing single or multiple forages.

    PubMed

    Goodwin, D; Davidson, H P B; Harris, P

    2007-04-21

    To investigate the choices of foraging location of horses, 10 to 12 horses were introduced for five minutes into each of two similar stables containing a single forage or six forages, in four replicated trials. The horses were then removed and released into the gangway between the stables, and allowed five minutes to choose between the stables. Their initial and final choices, mean duration in each stable and proportional frequency of change of location were compared. Most of the horses initially entered the closest stable on release (P<0.05); if the closest stable contained a single hay, most horses transferred to the stable containing multiple forages (P<0.001). The length of time spent by the horses in the two stables suggested that they preferred multiple forages in multiple locations (P<0.001). Eleven horses moved from one stable to the other on one or more occasions during trials when hay or a preferred forage was available in both stables, possibly indicating a motivation to move between foraging locations regardless of the palatability of the forages offered or the horses' preference for a forage.

  18. CCD high-speed videography system with new concepts and techniques

    NASA Astrophysics Data System (ADS)

    Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang

    1997-05-01

    A novel CCD high-speed videography system based on new concepts and techniques was recently developed at Zhejiang University. The system sends a series of short flash pulses to the moving object. All of the parameters, such as flash number, flash durations, flash intervals, flash intensities, and flash colors, can be controlled as needed by the computer. A series of moving-object images frozen by the flash pulses, carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized, and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals, or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied to a wide range of fields including energy, chemistry, medicine, biological engineering, aerodynamics, explosion, multi-phase flow, mechanics, vibration, athletic training, weapon development, and national defense engineering. It can also be used on production lines for online, real-time monitoring and control.

  19. Swing-free transport of suspended loads. Summer research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basher, A.M.H.

    1996-02-01

    Transportation of large objects using a traditional bridge crane can induce pendulum motion (swing) of the object. In environments such as a factory, the energy contained in the swinging mass can be large, and attempts to move the mass onto the target while it is still swinging can cause considerable damage. Oscillations must be damped or allowed to decay before the next process can take place. Stopping the swing can be accomplished by moving the bridge in a manner that counteracts the swing, which can sometimes be done by a skilled operator, or by waiting for the swing to damp sufficiently that the object can be moved to the target without risk of damage. One of the methods that can be utilized for oscillation suppression is input preshaping. The validity of this method depends on exact knowledge of the system dynamics; it can be modified to provide some degree of robustness with respect to unknown dynamics, but at the cost of transient-response speed. This report describes investigations on the development of a controller to dampen the oscillations.
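
    Input preshaping of the kind mentioned above is often illustrated with a zero-vibration (ZV) shaper; the sketch below shows the standard two-impulse form (the frequency and damping values are placeholders, and this is a generic textbook shaper rather than the report's specific controller):

        # Hedged sketch: a standard two-impulse zero-vibration (ZV) input shaper.
        # Convolving the crane's velocity command with these impulses suppresses the
        # payload swing at the modeled natural frequency, assuming the model is accurate.
        import numpy as np

        def zv_shaper(natural_freq_hz, damping_ratio):
            wn = 2 * np.pi * natural_freq_hz
            wd = wn * np.sqrt(1 - damping_ratio ** 2)           # damped frequency
            K = np.exp(-damping_ratio * np.pi / np.sqrt(1 - damping_ratio ** 2))
            amplitudes = np.array([1.0, K]) / (1.0 + K)         # impulse amplitudes
            times = np.array([0.0, np.pi / wd])                 # impulse times (s)
            return amplitudes, times

        def shape_command(command, dt, amplitudes, times):
            """Convolve a sampled command (1-D array) with the shaper impulses."""
            shaped = np.zeros(len(command) + int(round(times[-1] / dt)))
            for a, t in zip(amplitudes, times):
                k = int(round(t / dt))
                shaped[k:k + len(command)] += a * command
            return shaped

        # Usage: amps, ts = zv_shaper(0.5, 0.01); v_shaped = shape_command(v_cmd, 0.01, amps, ts)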

  20. Study of moving object detecting and tracking algorithm for video surveillance system

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Zhang, Rongfu

    2010-10-01

    This paper describes a specific process of moving-target detection and tracking in video surveillance. Obtaining a high-quality background is the key to difference-based target detection in video surveillance. The paper uses a block-segmentation method to build a clean background and background differencing to detect the moving target; after a series of processing steps, a more complete object can be extracted from the original image and located with its minimum bounding rectangle. In video surveillance systems, camera delay and other factors lead to tracking lag, so a Kalman-filter model based on template matching is proposed. Using the predictive and estimation capability of the Kalman filter, the center of the minimum bounding rectangle is taken as the predicted position where the target may appear at the next moment; template matching is then performed in a region centered on this position, and the best matching center is determined by computing the cross-correlation similarity between the current image and the reference image. Because the search scope is narrowed, the search time is reduced and fast tracking is achieved.
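
    A compact sketch of this detect-predict-match loop using common OpenCV building blocks is shown below (MOG2 background subtraction stands in for the paper's block-segmentation background model, and the Kalman state layout and search-window size are assumptions):

        # Hedged sketch: background subtraction to detect a moving target, a Kalman
        # filter to predict its next position, and template matching near the
        # prediction to refine the track. Parameter values are illustrative.
        import cv2
        import numpy as np

        bg = cv2.createBackgroundSubtractorMOG2()
        kf = cv2.KalmanFilter(4, 2)  # state: (x, y, vx, vy), measurement: (x, y)
        kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                        [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
        kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)

        def track_step(frame_gray, template, search_half=40):
            """One detect/predict/match step; returns the refined target center."""
            fg = bg.apply(frame_gray)            # foreground mask; in practice used to (re)initialize the template
            px, py = kf.predict()[:2].flatten()  # predicted target center
            h, w = template.shape
            x0 = int(max(px - search_half, 0)); y0 = int(max(py - search_half, 0))
            roi = frame_gray[y0:y0 + 2 * search_half, x0:x0 + 2 * search_half]
            res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
            _, _, _, best = cv2.minMaxLoc(res)   # location of best cross-correlation
            cx, cy = x0 + best[0] + w // 2, y0 + best[1] + h // 2
            kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
            return cx, cy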
