Sample records for moving object environments

  1. Robot environment expert system

    NASA Technical Reports Server (NTRS)

    Potter, J. L.

    1985-01-01

    The Robot Environment Expert System uses a hexadecimal tree ("hextree") data structure to model a complex robot environment in which not only the robot arm but also the robot itself and other objects may move. The hextree model supports dynamic updating, collision avoidance, and path planning over time to avoid moving objects.
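    The abstract does not specify the hextree's layout. One plausible reading, sketched below purely as an illustration (not NASA's implementation), is a 16-way tree that halves 4D space-time (x, y, z, t) at each level, since 2^4 = 16 children per node; all names and bounds here are hypothetical.

```python
# Illustrative sketch only: the abstract does not define the hextree layout.
# One plausible reading is a 16-way tree that halves 4D space-time (x, y, z, t)
# at each level, since 2**4 = 16 children per node.

class HextreeNode:
    def __init__(self, lo, hi, depth=0, max_depth=4):
        self.lo, self.hi = lo, hi          # 4D bounds: (x, y, z, t)
        self.depth, self.max_depth = depth, max_depth
        self.children = {}                 # child index (0..15) -> HextreeNode
        self.occupied = False

    def _child_index(self, p):
        mid = [(a + b) / 2 for a, b in zip(self.lo, self.hi)]
        return sum((1 << d) for d in range(4) if p[d] >= mid[d])

    def insert(self, p):
        """Mark the leaf cell containing 4D point p as occupied."""
        if self.depth == self.max_depth:
            self.occupied = True
            return
        i = self._child_index(p)
        if i not in self.children:
            mid = [(a + b) / 2 for a, b in zip(self.lo, self.hi)]
            lo = [mid[d] if i >> d & 1 else self.lo[d] for d in range(4)]
            hi = [self.hi[d] if i >> d & 1 else mid[d] for d in range(4)]
            self.children[i] = HextreeNode(lo, hi, self.depth + 1, self.max_depth)
        self.children[i].insert(p)

    def query(self, p):
        """True if the leaf cell containing p is occupied (collision check)."""
        if self.depth == self.max_depth:
            return self.occupied
        i = self._child_index(p)
        return i in self.children and self.children[i].query(p)

tree = HextreeNode(lo=(0, 0, 0, 0), hi=(16, 16, 16, 16))
tree.insert((3.0, 2.0, 1.0, 5.0))   # an object occupies this space-time cell
```

    Path planning over time then reduces to querying candidate (x, y, z, t) cells for occupancy before committing to a motion.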

  2. Remote sensing using MIMO systems

    DOEpatents

    Bikhazi, Nicolas; Young, William F; Nguyen, Hung D

    2015-04-28

    A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
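    The patent abstract leaves the spatial information variable unspecified. As a rough stand-in for the accumulate-a-signature idea (not the patented method), the sketch below uses the Frobenius norm of successive channel-matrix differences as the spatial variable and the resulting time series as the signature:

```python
import math

def frob_diff(H_prev, H_curr):
    """Frobenius norm of the change between two channel matrices."""
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(H_prev, H_curr)
                         for a, b in zip(ra, rb)))

def signature(channel_matrices):
    """Accumulate the stand-in spatial variable over time into a signature.

    A static environment yields values near zero; a moving object perturbs
    the multipath channel and produces a characteristic nonzero profile."""
    return [frob_diff(h0, h1)
            for h0, h1 in zip(channel_matrices, channel_matrices[1:])]

# Toy 2x2 MIMO link: static channel, then a perturbation as an object moves.
static = [[1.0, 0.2], [0.3, 0.9]]
moved  = [[1.0, 0.6], [0.3, 0.9]]   # one path gain changed by the object
sig = signature([static, static, moved, static])
```

    Identifying the object would then amount to comparing such signatures against stored profiles, a step the patent describes but the sketch omits.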

  3. A mobile agent-based moving objects indexing algorithm in location based service

    NASA Astrophysics Data System (ADS)

    Fang, Zhixiang; Li, Qingquan; Xu, Hong

    2006-10-01

    This paper extends the advantages of location-based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving-object indexing algorithm is proposed to process indexing requests efficiently and adapt to the limitations of the location-based service environment. The prominent feature of this structure is that it views a moving object's behavior as a mobile agent's span; a unique mapping between the geographical position of a moving object and the span point of its mobile agent is built to maintain the close relationship between them, and this mapping serves as the key clue for the mobile agent-based index to track moving objects.

  4. Mining moving object trajectories in location-based services for spatio-temporal database update

    NASA Astrophysics Data System (ADS)

    Guo, Danhuai; Cui, Weihong

    2008-10-01

    Advances in wireless transmission and mobile technology applied to LBS (location-based services) flood us with moving-object data. The vast amounts of data gathered from the position sensors of mobile phones, PDAs, and vehicles hide interesting and valuable knowledge describing the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geographic features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited. Urban expansion and frequent changes to town plans produce large amounts of outdated or imprecise data in the spatial databases of LBS, and these cannot be updated timely and efficiently by manual processing. In this paper we introduce a data mining approach to extracting the movement patterns of moving objects, build a model describing the relationship between the movement patterns of LBS mobile objects and their environment, and put forward a spatio-temporal database update strategy for LBS databases based on spatio-temporal trajectory mining. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving-object data mining, and an experimental application of spatio-temporal database updating by mining moving-object trajectories.

  5. Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M.

    Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting and counting such objects and understanding their behavior. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that identifies multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from the moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, producing partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach of the algorithm is to identify all possible coherent motion regions and then extract a subset of them, based on an innovative measure, to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams. The software can directly process video streamed over the internet or from a hardware device (camera).
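    The ORNL abstract does not disclose the coherence measure itself. As a greatly simplified stand-in, the sketch below groups feature-point trajectories whose mean motion directions agree within a tolerance, yielding one group per coherently moving region:

```python
import math

def direction(traj):
    """Mean motion direction (radians) of a trajectory of (x, y) points."""
    dx = traj[-1][0] - traj[0][0]
    dy = traj[-1][1] - traj[0][1]
    return math.atan2(dy, dx)

def coherent_groups(trajectories, tol=math.radians(20)):
    """Greedily group trajectories with similar motion directions.

    This is an illustrative proxy for coherent-motion-region grouping,
    not the measure used by the ORNL algorithm."""
    groups = []
    for t in trajectories:
        d = direction(t)
        for g in groups:
            # angular difference wrapped into [-pi, pi]
            if abs(math.remainder(d - g["dir"], 2 * math.pi)) < tol:
                g["members"].append(t)
                break
        else:
            groups.append({"dir": d, "members": [t]})
    return groups

# Two feature points moving right, one moving up: expect two coherent regions.
trajs = [[(0, 0), (5, 0.3)], [(2, 1), (7, 0.9)], [(4, 4), (4.2, 9)]]
groups = coherent_groups(trajs)
```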

  6. Real-time object detection, tracking and occlusion reasoning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Divakaran, Ajay; Yu, Qian; Tamrakar, Amir

    A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.

  7. Virtual hand: a 3D tactile interface to virtual environments

    NASA Astrophysics Data System (ADS)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
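    As a minimal sketch of the pin-array control idea described above (the actual control system is not specified in the abstract), one can sample a virtual object's surface under the hand's current position to obtain target pin heights; the surface function and grid parameters below are hypothetical:

```python
# Illustrative sketch only: compute target heights for a tactile pin array by
# sampling the virtual object's surface under the hand's current position.
# The surface function, grid size, and pin travel here are hypothetical.

def pin_heights(surface, hand_x, hand_y, grid=4, spacing=0.01, max_h=0.02):
    """Sample the virtual surface under a grid x grid pin array.

    surface(x, y) -> height of the virtual object at (x, y), or 0.0 over
    empty space. Heights are clipped to the pins' travel range [0, max_h]."""
    heights = []
    for i in range(grid):
        row = []
        for j in range(grid):
            x = hand_x + (i - (grid - 1) / 2) * spacing
            y = hand_y + (j - (grid - 1) / 2) * spacing
            row.append(min(max(surface(x, y), 0.0), max_h))
        heights.append(row)
    return heights

def ramp(x, y):
    """Hypothetical virtual object: a shallow ramp rising along x from x = 0."""
    return 0.5 * x if x > 0 else 0.0

heights = pin_heights(ramp, hand_x=0.02, hand_y=0.0)
```

    In the real system these per-pin targets would be combined with the robot arm's 6-DOF pose so the array both follows the hand and renders the local surface shape.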

  8. Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.

    PubMed

    Lee, Donghwa; Myung, Hyun

    2014-07-11

    In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. In such environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environment then cause groups of false loop closings when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, the nodes of the graph, which represent robot poses, are grouped according to grouping rules based on noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph is reoptimized after eliminating the false information, and corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
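    The paper's grouping rules and error metric are not given in the abstract. As a much-simplified stand-in for the pruning step, the sketch below keeps only the loop-closure constraints whose residual against an odometry-only pose estimate falls under a threshold (here in 1D for clarity):

```python
# Simplified sketch of the pruning idea, not the paper's exact grouping rules:
# keep odometry constraints, and drop loop closures whose residual against the
# odometry-only pose estimate exceeds a threshold (a stand-in error metric).

def poses_from_odometry(odometry):
    """Integrate 1D odometry steps into absolute poses (pose 0 is origin)."""
    poses = [0.0]
    for step in odometry:
        poses.append(poses[-1] + step)
    return poses

def prune_loop_closures(odometry, closures, threshold=0.5):
    """closures: list of (i, j, measured_offset). Keep consistent ones."""
    poses = poses_from_odometry(odometry)
    kept = []
    for i, j, measured in closures:
        residual = abs((poses[j] - poses[i]) - measured)
        if residual <= threshold:        # consistent with odometry
            kept.append((i, j, measured))
    return kept

odometry = [1.0, 1.0, 1.0]              # robot advances 1.0 per step
closures = [(0, 3, 3.1),                # good loop closure
            (0, 2, 5.0)]                # false closure from a moved object
kept = prune_loop_closures(odometry, closures)
```

    After pruning, the remaining constraints would be handed to the pose graph optimizer, which is where the paper's actual covariance-based grouping does its work.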

  9. Motor effects from visually induced disorientation in man.

    DOT National Transportation Integrated Search

    1969-11-01

    The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue of the objective position of the airplane i...

  10. Synchronizing Self and Object Movement: How Child and Adult Cyclists Intercept Moving Gaps in a Virtual Environment

    ERIC Educational Resources Information Center

    Chihak, Benjamin J.; Plumert, Jodie M.; Ziemer, Christine J.; Babu, Sabarish; Grechkin, Timofey; Cremer, James F.; Kearney, Joseph K.

    2010-01-01

    Two experiments examined how 10- and 12-year-old children and adults intercept moving gaps while bicycling in an immersive virtual environment. Participants rode an actual bicycle along a virtual roadway. At 12 test intersections, participants attempted to pass through a gap between 2 moving, car-sized blocks without stopping. The blocks were…

  11. Captive Bottlenose Dolphins (Tursiops truncatus) Spontaneously Using Water Flow to Manipulate Objects

    PubMed Central

    Yamamoto, Chisato; Furuta, Keisuke; Taki, Michihiro; Morisaka, Tadamichi

    2014-01-01

    Several terrestrial animals and delphinids manipulate objects in a tactile manner, using parts of their bodies such as their mouths or hands. In this paper, we report that bottlenose dolphins (Tursiops truncatus) manipulate objects not by direct bodily contact but by spontaneously generated water flow. Three of four dolphins at Suma Aqualife Park performed object manipulation with food. The typical sequence of object manipulation consisted of a three-step procedure. First, the dolphins released the object from the sides of their mouths while assuming a head-down posture near the floor. They then manipulated the object around their mouths and caught it. Finally, they ceased the head-down posture and started to swim. When the dolphins moved the object, they used the water current in the pool or moved their heads. These results show that dolphins manipulate objects using movements that do not involve direct contact between a body part and the object. When the dolphins dropped the object on the floor, they lifted it by generating water flow in one of three ways: opening and closing their mouths repeatedly, moving their heads lengthwise, or making circular head motions. This result suggests that bottlenose dolphins spontaneously change their environment to manipulate objects. One reason why aquatic animals such as dolphins manipulate objects by changing their environment, while terrestrial animals do not, may be that water is a far more viscous medium than air. This is the first report of a non-human mammal engaging in object manipulation using several methods to change its environment. PMID:25250625

  12. Motor Effects from Visually Induced Disorientation in Man.

    ERIC Educational Resources Information Center

    Brecher, M. Herbert; Brecher, Gerhard A.

    The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane in respect to the ground. A simple method of measuring disorientation was devised. In this method…

  13. Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera

    DTIC Science & Technology

    2006-01-01

    The abstract in this record is fragmentary. The recoverable content concerns mapping the Euclidean positions of static landmarks or visual features in the environment using a moving monocular camera, with recent aerial applications. The fragment also cites "Structure From Motion in a Piecewise Planar Environment," International Journal of Pattern Recognition and Artificial Intelligence, Vol. 2, No. 3, pp. 485-508, 1988, and J. M. Ferryman, S. J. Maybank, and A. D. Worrall, "Visual Surveillance for Moving Vehicles," International Journal of Computer Vision, Vol. 37.

  14. Self-motion impairs multiple-object tracking.

    PubMed

    Thomas, Laura E; Seiffert, Adriane E

    2010-10-01

    Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were moved passively and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism both to track changes in the locations of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.

  15. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments

    PubMed Central

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-01-01

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
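    The GQR-tree structure itself is not described in the abstract; the sketch below only illustrates the query semantics that conditions (i) and (ii) define, with hypothetical attribute names, evaluated by brute force rather than through an index:

```python
# Sketch of the query semantics only (conditions (i) and (ii) from the
# abstract); the GQR-tree index is not reproduced here. Field names
# ("type", "color") are hypothetical.

def cm_range_query(objects, query_values, rect):
    """Return ids of objects matching the non-spatial values and inside rect.

    objects: dict id -> {"pos": (x, y), **non_spatial_attributes}
    query_values: required non-spatial attribute values
    rect: ((xmin, ymin), (xmax, ymax)) spatial query range
    """
    (xmin, ymin), (xmax, ymax) = rect
    result = []
    for oid, obj in objects.items():
        x, y = obj["pos"]
        matches = all(obj.get(k) == v for k, v in query_values.items())
        inside = xmin <= x <= xmax and ymin <= y <= ymax
        if matches and inside:
            result.append(oid)
    return result

moving_objects = {
    "taxi-1": {"pos": (2, 3), "type": "taxi", "color": "yellow"},
    "taxi-2": {"pos": (9, 9), "type": "taxi", "color": "yellow"},
    "bus-1":  {"pos": (2, 4), "type": "bus",  "color": "red"},
}
hits = cm_range_query(moving_objects,
                      {"type": "taxi", "color": "yellow"},
                      ((0, 0), (5, 5)))
```

    A monitoring system would re-evaluate such queries as positions update; the GQR-tree's contribution is to push part of that work onto the moving objects themselves to cut communication cost.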

  16. Acoustic facilitation of object movement detection during self-motion

    PubMed Central

    Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.

    2011-01-01

    In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050

  17. Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking

    PubMed Central

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can relocate the correct object after an appearance change or occlusion, and it outperforms traditional particle filter-based tracking methods. PMID:23843739
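    The three-stage memory model is only summarized in the abstract. A greatly simplified sketch of the remember/retrieve/forget idea (not the paper's model) might promote frequently re-observed appearance templates to long-term memory and forget rarely used short-term ones:

```python
# Greatly simplified sketch of a staged memory (short-term -> long-term),
# not the paper's three-stage brain memory model: templates observed often
# are promoted to long-term memory; surplus short-term templates are forgotten.

class AgentMemory:
    def __init__(self, stm_size=3, promote_after=2):
        self.stm = {}              # template -> times observed
        self.ltm = set()           # permanently remembered templates
        self.stm_size = stm_size
        self.promote_after = promote_after

    def remember(self, template):
        if template in self.ltm:
            return
        self.stm[template] = self.stm.get(template, 0) + 1
        if self.stm[template] >= self.promote_after:
            self.ltm.add(template)             # promote to long-term memory
            del self.stm[template]
        elif len(self.stm) > self.stm_size:
            # forget the least-observed short-term template
            victim = min(self.stm, key=self.stm.get)
            del self.stm[victim]

    def retrieve(self, template):
        return template in self.ltm or template in self.stm

mem = AgentMemory()
for appearance in ["A", "B", "A", "C"]:   # appearance "A" recurs
    mem.remember(appearance)
```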

  18. Memory-based multiagent coevolution modeling for robust moving object tracking.

    PubMed

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object through their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can relocate the correct object after an appearance change or occlusion, and it outperforms traditional particle filter-based tracking methods.

  19. Evaluation of Content-Matched Range Monitoring Queries over Moving Objects in Mobile Computing Environments.

    PubMed

    Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo

    2015-09-18

    A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods.

  20. Detection of dominant flow and abnormal events in surveillance video

    NASA Astrophysics Data System (ADS)

    Kwak, Sooyeong; Byun, Hyeran

    2011-02-01

    We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a feature-based approach that does not detect moving objects individually. It identifies the dominant flow in crowded environments without individual object tracking, using a latent Dirichlet allocation model, and it can automatically detect and localize an abnormally moving object in real-life video. Performance tests on several real-life databases show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The algorithm can be applied to any situation in which abnormal directions or abnormal speeds must be detected.
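    The paper's latent Dirichlet allocation model is not detailed in the abstract. As a far simpler stand-in for the idea of flagging motion that deviates from the dominant flow, one can quantize motion-vector directions and mark vectors outside the most common sector:

```python
import math
from collections import Counter

def direction_bin(dx, dy, bins=8):
    """Quantize a motion vector's direction into one of `bins` sectors."""
    step = 2 * math.pi / bins
    return int(round(math.atan2(dy, dx) / step)) % bins

def abnormal_indices(flows, bins=8):
    """Flag motion vectors whose direction deviates from the dominant flow.

    A crude stand-in for the paper's LDA-based dominant-flow model."""
    counts = Counter(direction_bin(dx, dy, bins) for dx, dy in flows)
    dominant = counts.most_common(1)[0][0]
    return [i for i, (dx, dy) in enumerate(flows)
            if direction_bin(dx, dy, bins) != dominant]

# Crowd moving rightward; the last vector moves against the dominant flow.
flows = [(1, 0), (1, 0.1), (0.9, -0.1), (-1, 0)]
odd = abnormal_indices(flows)
```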

  1. Multisensory Integration of Visual and Vestibular Signals Improves Heading Discrimination in the Presence of a Moving Object

    PubMed Central

    Dokka, Kalpana; DeAngelis, Gregory C.

    2015-01-01

    Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work suggesting that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
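    The "optimal cue integration strategy" referenced above is conventionally the maximum-likelihood rule, under which the combined discrimination threshold satisfies sigma_comb^2 = sigma_vis^2 * sigma_vest^2 / (sigma_vis^2 + sigma_vest^2) and is therefore below either single-cue threshold. A numeric sketch with hypothetical thresholds:

```python
import math

def combined_threshold(sigma_vis, sigma_vest):
    """Maximum-likelihood (optimal) cue-integration prediction: the combined
    threshold is below either single-cue threshold."""
    return math.sqrt((sigma_vis ** 2 * sigma_vest ** 2) /
                     (sigma_vis ** 2 + sigma_vest ** 2))

# Hypothetical single-cue heading thresholds (degrees), for illustration only.
sigma_vis, sigma_vest = 2.0, 3.0
sigma_comb = combined_threshold(sigma_vis, sigma_vest)
```

    Comparing measured multisensory thresholds against this prediction is the standard test of whether the cues are combined optimally.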

  2. A graphical, rule based robotic interface system

    NASA Technical Reports Server (NTRS)

    Mckee, James W.; Wolfsberger, John

    1988-01-01

    The ability of a human to take control of a robotic system is essential in any use of robots in space in order to handle unforeseen changes in the robot's work environment or scheduled tasks. But in cases in which the work environment is known, a human controlling a robot's every move by remote control is both time-consuming and frustrating. A system is needed in which the user can give the robotic system commands to perform tasks but need not tell the system how. To be useful, this system should be able to plan and perform the tasks faster than a telerobotic system. The interface between the user and the robot system must be natural and meaningful to the user. A high-level user interface program under development at the University of Alabama, Huntsville, is described. A graphical interface is proposed in which the user selects objects to be manipulated by selecting representations of the objects on projections of a 3-D model of the work environment. The user may move about the work environment by changing the viewpoint of the projections. The interface uses a rule-based program to transform the user's selection of items on a graphics display of the robot's work environment into commands for the robot. The program first determines whether the desired task is possible given the abilities of the robot and any constraints on the object. If the task is possible, the program determines what movements the robot needs to make to perform the task, and the movements are transformed into commands for the robot. The information defining the robot, the work environment, and how objects may be moved is stored in a set of databases accessible to the program and displayable to the user.

  3. Using articulated scene models for dynamic 3d scene analysis in vista spaces

    NASA Astrophysics Data System (ADS)

    Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven

    2010-09-01

    In this paper we describe an efficient but detailed new approach to analyzing complex dynamic scenes directly in 3D. The resulting information is important for mobile robots solving tasks in household robotics. In our work, a mobile robot builds an articulated scene model by observing the environment in its visual field, the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities such as humans or robots, and finally, in contrast to existing approaches, about articulated parts. These parts describe movable objects such as chairs, doors, or other tangible entities that could be moved by an agent. Combining the static scene, the self-moving entities, and the movable objects in one articulated scene model enhances the calculation of each single part: the reconstruction of the static scene benefits from the removal of the dynamic parts, and in turn the moving parts can be extracted more easily given knowledge of the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons, and movable objects. This information enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment, and finally to strengthen interaction with the user through knowledge of the 3D articulated objects and 3D scene analysis.

  4. The temporal dynamics of heading perception in the presence of moving objects

    PubMed Central

    Fajen, Brett R.

    2015-01-01

    Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models. PMID:26510765

  5. Flash-lag effect: complicating motion extrapolation of the moving reference-stimulus paradoxically augments the effect.

    PubMed

    Bachmann, Talis; Murd, Carolina; Põder, Endel

    2012-09-01

    One fundamental property of the perceptual and cognitive systems is their capacity for prediction in a dynamic environment; the flash-lag effect has been considered a particularly suggestive example of this capacity (Nijhawan, Nature 370:256-257, 1994; Behav Brain Sci 31:179-239, 2008). Because of the involvement of extrapolation and visual-prediction mechanisms, a moving object is perceived ahead of a simultaneously flashed static object that is objectively aligned with it. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus, approaching it before the flash, does not diminish the flash-lag effect but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.

  6. Vision robot with rotational camera for searching ID tags

    NASA Astrophysics Data System (ADS)

    Kimura, Nobutaka; Moriya, Toshio

    2008-02-01

    We propose a new concept, called "real world crawling", in which intelligent mobile sensors completely recognize environments by actively gathering information in those environments and integrating that information on the basis of location. First we locate objects by widely and roughly scanning the entire environment with these mobile sensors, and we check the objects in detail by moving the sensors to find out exactly what and where they are. We focused on the automation of inventory counting with barcodes as an application of our concept. We developed "a barcode reading robot" which autonomously moved in a warehouse. It located and read barcode ID tags using a camera and a barcode reader while moving. However, motion blurs caused by the robot's translational motion made it difficult to recognize the barcodes. Because of the high computational cost of image deblurring software, we used the pan rotation of the camera to reduce these blurs. We derived the appropriate pan rotation velocity from the robot's translational velocity and from the distance to the surfaces of barcoded boxes. We verified the effectiveness of our method in an experimental test.
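    The abstract states that the pan rotation velocity is derived from the robot's translational velocity and the distance to the barcoded surfaces. A small-angle sketch of that relation (not necessarily the paper's exact derivation): a camera translating at v past a point at distance d must pan at omega = v / d to keep the point centered and cancel the motion blur.

```python
def pan_rate(translational_velocity, distance):
    """Angular pan rate (rad/s) that keeps a surface point centered while the
    robot translates past it: omega = v / d (small-angle sketch, with v the
    translational velocity in m/s and d the distance to the surface in m)."""
    if distance <= 0:
        raise ValueError("distance must be positive")
    return translational_velocity / distance

# Robot moving at 0.5 m/s past boxes 2.0 m away: pan at 0.25 rad/s.
omega = pan_rate(0.5, 2.0)
```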

  7. Taking the Plunge: Districts Leap into Virtualization

    ERIC Educational Resources Information Center

    Demski, Jennifer

    2010-01-01

    Moving from a traditional desktop computing environment to a virtualized solution is a daunting task. In this article, the author presents case histories of three districts that have made the conversion to virtual computing to learn about their experiences: What prompted them to make the move, and what were their objectives? Which obstacles prove…

  8. Event memory and moving in a well-known environment.

    PubMed

    Tamplin, Andrea K; Krawietz, Sabine A; Radvansky, Gabriel A; Copeland, David E

    2013-11-01

Research in narrative comprehension has repeatedly shown that when people read about characters moving in well-known environments, the accessibility of object information follows a spatial gradient. That is, the accessibility of objects is best when they are in the same room as the protagonist, and it becomes worse the farther away they are (see, e.g., Morrow, Greenspan, & Bower, Journal of Memory and Language, 26, 165-187, 1987). In the present study, we assessed this finding using an interactive environment in which we had people memorize a map and navigate a virtual simulation of the area. During navigation, people were probed with pairs of object names and indicated whether both objects were in the same room. In contrast to the narrative studies described above, several experiments showed no evidence of a clear spatial gradient. Instead, memory for objects in currently occupied locations (e.g., the location room) was more accessible, especially after a small delay, but no clear decline was evident in the accessibility of information in memory with increased distance. Also, memory for objects along the pathway of movement (i.e., rooms that a person only passed through) showed a transitory suppression effect that was present immediately after movement, but attenuated over time. These results were interpreted in light of the event horizon model of event cognition.

  9. Neurally and ocularly informed graph-based models for searching 3D environments

    NASA Astrophysics Data System (ADS)

    Jangraw, David C.; Wang, Jun; Lance, Brent J.; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    Objective. As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions—our implicit ‘labeling’ of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. Approach. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the ‘similar’ objects it identifies. Main results. We show that by exploiting the subjects’ implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers’ inference of subjects’ implicit labeling. Significance. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user’s interests.
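    The label-propagation step described above can be sketched in a few lines. The paper's exact algorithm is not reproduced here, so the following is a generic semi-supervised diffusion over an object-similarity graph; the function name, the clamped-label scheme, and the parameters alpha and iters are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    def propagate_labels(W, y, labeled_mask, alpha=0.9, iters=100):
        """Semi-supervised label propagation on an object-similarity graph.

        W: symmetric similarity matrix between objects; y: initial interest
        scores (0 for unlabeled nodes); labeled_mask: True where a node was
        labeled (e.g. by an hBCI classifier). Scores diffuse along graph
        edges while the labeled nodes stay clamped to their given values.
        """
        D_inv = 1.0 / np.maximum(W.sum(axis=1), 1e-12)
        S = W * D_inv[:, None]                  # row-normalised transitions
        f = y.astype(float).copy()
        for _ in range(iters):
            f = alpha * (S @ f) + (1 - alpha) * y
            f[labeled_mask] = y[labeled_mask]   # clamp known labels
        return f
    ```

    On a chain graph with one labeled node, scores decay with graph distance, which is the behaviour that lets visually similar but unseen objects inherit interest from labeled ones.
    
    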

  10. How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking

    PubMed Central

    Thomas, Laura E.; Seiffert, Adriane E.

    2011-01-01

Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar to, but perhaps slightly easier than, updating locations of objects. PMID:21991259

  11. Coordinated control of micro-grid based on distributed moving horizon control.

    PubMed

    Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie

    2018-05-01

This paper proposes a distributed moving horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design a power coordination controller for each subsystem via moving horizon control by minimizing a suitable objective function. The objective function of the distributed moving horizon coordinated controller is chosen on the principle that the wind power subsystem has priority to generate electricity, the photovoltaic subsystem coordinates with the wind power subsystem, and the battery is activated only when necessary to meet the load demand. The simulation results illustrate that the proposed distributed moving horizon coordinated controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  12. Using "Second Life" in School Librarianship

    ERIC Educational Resources Information Center

    Perez, Lisa

    2009-01-01

    In this article, the author discusses using Second Life (SL) in school librarianship. SL is a multi-user virtual environment in which persons create avatars to allow them to move and interact with other avatars. They can build and manipulate objects. To move, they can walk, run, fly, or teleport. There are many areas within SL to allow people to…

  13. Virtual Museum Learning

    ERIC Educational Resources Information Center

    Prosser, Dominic; Eddisford, Susan

    2004-01-01

This paper examines children's and adults' attitudes to virtual representations of museum objects, drawing on empirical research data gained from two web-based digital learning environments. The paper explores the characteristics of on-line learning activities that move children from a sense of wonder into meaningful engagement with objects and…

  14. Investigation of an EMI sensor for detection of large metallic objects in the presence of metallic clutter

    NASA Astrophysics Data System (ADS)

    Black, Christopher; McMichael, Ian; Riggs, Lloyd

    2005-06-01

    Electromagnetic induction (EMI) sensors and magnetometers have successfully detected surface laid, buried, and visually obscured metallic objects. Potential military activities could require detection of these objects at some distance from a moving vehicle in the presence of metallic clutter. Results show that existing EMI sensors have limited range capabilities and suffer from false alarms due to clutter. This paper presents results of an investigation of an EMI sensor designed for detecting large metallic objects on a moving platform in a high clutter environment. The sensor was developed by the U.S. Army RDECOM CERDEC NVESD in conjunction with the Johns Hopkins University Applied Physics Laboratory.

  15. Perceiving environmental structure from optical motion

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.

    1991-01-01

Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects are examined.

  16. Neurally and ocularly informed graph-based models for searching 3D environments.

    PubMed

    Jangraw, David C; Wang, Jun; Lance, Brent J; Chang, Shih-Fu; Sajda, Paul

    2014-08-01

    As we move through an environment, we are constantly making assessments, judgments and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions-our implicit 'labeling' of the world. In this paper, we use physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3D environment. First, we record electroencephalographic (EEG), saccadic and pupillary data from subjects as they move through a small part of a 3D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest to them. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to the labeled ones. Finally, the system plots an efficient route to help the subjects visit the 'similar' objects it identifies. We show that by exploiting the subjects' implicit labeling to find objects of interest instead of exploring naively, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.

  17. Influence of local objects on hippocampal representations: landmark vectors and memory

    PubMed Central

    Deshmukh, Sachin S.; Knierim, James J.

    2013-01-01

The hippocampus is thought to represent nonspatial information in the context of spatial information. An animal can derive both spatial information as well as nonspatial information from the objects (landmarks) it encounters as it moves around in an environment. Here, we demonstrate correlates of both object-derived spatial as well as nonspatial information in the hippocampus of rats foraging in the presence of objects. We describe a new form of CA1 place cells, called landmark-vector cells, that encode spatial locations as a vector relationship to local landmarks. Such landmark vector relationships can be dynamically encoded. Of the 26 CA1 neurons that developed new fields in the course of a day’s recording sessions, in 8 cases the new fields were located at a similar distance and direction from a landmark as the initial field was located relative to a different landmark. We also demonstrate object-location memory in the hippocampus. When objects were removed from an environment or moved to new locations, a small number of neurons in CA1 and CA3 increased firing at the locations where the objects used to be. In some neurons, this increase occurred only in one location, indicating object+place conjunctive memory; in other neurons the increase in firing was seen at multiple locations where an object used to be. Taken together, these results demonstrate that the spatially restricted firing of hippocampal neurons encodes multiple types of information regarding the relationship between an animal’s location and the location of objects in its environment. PMID:23447419

  18. Detecting multiple moving objects in crowded environments with coherent motion regions

    DOEpatents

    Cheriyadat, Anil M.; Radke, Richard J.

    2013-06-11

Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. Robustness also results from enforcing the constraint that selected coherent motion regions contain disjoint sets of tracks, defined in a three-dimensional space that includes a time dimension. The algorithm operates directly on raw, unconditioned low-level feature point tracks and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on a trajectory similarity factor, a measure of the maximum distance between a pair of feature point tracks.
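    The trajectory similarity factor named in this record, a maximum distance between a pair of feature point tracks, can be sketched as follows. The dict-of-frames track representation and the function name are assumptions for illustration, not the patented implementation.

    ```python
    import math

    def trajectory_similarity(track_a, track_b):
        """Maximum Euclidean distance between two feature point tracks over
        the frames where both are defined (smaller = more similar motion).

        Tracks are dicts mapping frame index -> (x, y) image coordinates.
        Tracks with no common frames are maximally dissimilar.
        """
        common = set(track_a) & set(track_b)
        if not common:
            return float("inf")
        return max(
            math.hypot(track_a[t][0] - track_b[t][0],
                       track_a[t][1] - track_b[t][1])
            for t in common
        )
    ```

    Points that ride on the same rigid object stay a near-constant distance apart, so thresholding this measure groups tracks into candidate coherent motion regions.
    
    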

  19. Free-floating dual-arm robots for space assembly

    NASA Technical Reports Server (NTRS)

    Agrawal, Sunil Kumar; Chen, M. Y.

    1994-01-01

    Freely moving systems in space conserve linear and angular momentum. As moving systems collide, the velocities get altered due to transfer of momentum. The development of strategies for assembly in a free-floating work environment requires a good understanding of primitives such as self motion of the robot, propulsion of the robot due to onboard thrusters, docking of the robot, retrieval of an object from a collection of objects, and release of an object in an object pool. The analytics of such assemblies involve not only kinematics and rigid body dynamics but also collision and impact dynamics of multibody systems. In an effort to understand such assemblies in zero gravity space environment, we are currently developing at Ohio University a free-floating assembly facility with a dual-arm planar robot equipped with thrusters, a free-floating material table, and a free-floating assembly table. The objective is to pick up workpieces from the material table and combine them into prespecified assemblies. This paper presents analytical models of assembly primitives and strategies for overall assembly. A computer simulation of an assembly is developed using the analytical models. The experiment facility will be used to verify the theoretical predictions.

  20. Kalman filter-based tracking of moving objects using linear ultrasonic sensor array for road vehicles

    NASA Astrophysics Data System (ADS)

    Li, Shengbo Eben; Li, Guofa; Yu, Jiaying; Liu, Chang; Cheng, Bo; Wang, Jianqiang; Li, Keqiang

    2018-01-01

Detection and tracking of objects in the side-near-field has attracted much attention for the development of advanced driver assistance systems. This paper presents a cost-effective approach to track moving objects around vehicles using linearly arrayed ultrasonic sensors. To understand the detection characteristics of a single sensor, an empirical detection model was developed considering the shapes and surface materials of various detected objects. Eight sensors were arrayed linearly to expand the detection range for further application in traffic environment recognition. Two tracking algorithms for the sensor array, an Extended Kalman filter (EKF) and an Unscented Kalman filter (UKF), were designed for dynamic object tracking. The ultrasonic sensor array was designed to have two types of firing sequences: mutual firing or serial firing. The effectiveness of the designed algorithms was verified in two typical driving scenarios: passing intersections with traffic sign poles or street lights, and overtaking another vehicle. Experimental results showed that both EKF and UKF had more precise tracking position and smaller RMSE (root mean square error) than a traditional triangular positioning method. The effectiveness also encourages the application of cost-effective ultrasonic sensors in the near-field environment perception in autonomous driving systems.
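    A minimal linear Kalman filter conveys the predict/update structure that the EKF and UKF in this record extend to nonlinear range measurements. The one-dimensional constant-velocity state layout, noise parameters and function name below are illustrative assumptions, not the authors' design.

    ```python
    import numpy as np

    def kalman_step(x, P, z, dt, q=0.1, r=0.3):
        """One predict/update cycle of a constant-velocity Kalman filter.

        x = [position, velocity] state estimate, P its covariance,
        z a noisy position measurement, dt the time step.
        """
        F = np.array([[1.0, dt], [0.0, 1.0]])                  # state transition
        H = np.array([[1.0, 0.0]])                             # observe position only
        Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                          [dt**2 / 2, dt]])                    # process noise
        R = np.array([[r**2]])                                 # measurement noise
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x                                          # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                         # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        return x, P
    ```

    Fed repeated measurements of a stationary object, the position estimate converges to the measured value while the velocity estimate settles near zero; the EKF/UKF variants replace F and H with linearizations or sigma-point transforms of the nonlinear range geometry.
    
    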

  1. Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion

    PubMed Central

    Fajen, Brett R.; Matthis, Jonathan S.

    2013-01-01

    Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983

  2. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.

    PubMed

    Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-08-23

Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and so on. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. After preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object-model construction and matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision, as tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.

  3. Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor

    PubMed Central

    Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing

    2017-01-01

Object tracking is a crucial research subfield in computer vision, with wide applications in navigation, robotics, military systems and so on. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved. After preprocessing, fast ground segmentation, Euclidean clustering of outliers, View Feature Histogram (VFH) feature extraction, object-model construction and matching of a moving spherical target, a Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter offers high efficiency while the adaptive particle filter offers high robustness and high precision, as tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields. PMID:28832520

  4. Modeling and Simulation Architecture for Studying Doppler-Based Radar with Complex Environments

    DTIC Science & Technology

    2009-03-26

structures can interfere with a radar’s ability to detect moving aircraft because radar returns from turbines are comparable to those from slow flying… …ensure the turbine won’t interact with the radar. However, (2.3) doesn’t account for terrain masking or shadowing. If there is a tall object or terrain…

  5. Characterizing and Supporting Change in Algebra Students' Representational Fluency in a CAS/Paper-and-Pencil Environment

    ERIC Educational Resources Information Center

    Fonger, Nicole L.

    2012-01-01

    Representational fluency (RF) includes an ability to interpret, create, move within and among, and connect tool-based representations of mathematical objects. Taken as an indicator of conceptual understanding, there is a need to better support school algebra students' RF in learning environments that utilize both computer algebra systems…

  6. Connecting Payments for Ecosystem Services and Agri-Environment Regulation: An Analysis of the Welsh Glastir Scheme

    ERIC Educational Resources Information Center

    Wynne-Jones, Sophie

    2013-01-01

Policy debates in the European Union have increasingly emphasised "Payments for Ecosystem Services" (PES) as a model for delivering agri-environmental objectives. This paper examines the Glastir scheme, introduced in Wales in 2009, as a notable attempt to move between long-standing models of European agri-environment regulation and…

  7. Object Detection Applied to Indoor Environments for Mobile Robot Navigation.

    PubMed

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-07-28

To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system that detects objects in typical human environments and is able to work on a real mobile robot is developed. In the proposed system, the classification method used is the Support Vector Machine (SVM), and RGB and depth images are used as input. Different segmentation techniques have been applied to each kind of object. In addition, two alternatives for extracting features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of the two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and the fact that the environment was not altered to perform the tests.

  8. Object Detection Applied to Indoor Environments for Mobile Robot Navigation

    PubMed Central

    Hernández, Alejandra Carolina; Gómez, Clara; Crespo, Jonathan; Barber, Ramón

    2016-01-01

To move around the environment, human beings depend on sight more than their other senses, because it provides information about the size, shape, color and position of an object. The increasing interest in building autonomous mobile systems makes the detection and recognition of objects in indoor environments a very important and challenging task. In this work, a vision system that detects objects in typical human environments and is able to work on a real mobile robot is developed. In the proposed system, the classification method used is the Support Vector Machine (SVM), and RGB and depth images are used as input. Different segmentation techniques have been applied to each kind of object. In addition, two alternatives for extracting features of the objects are explored, based on geometric shape descriptors and bag of words. The experimental results have demonstrated the usefulness of the system for the detection and location of the objects in indoor environments. Furthermore, through the comparison of the two proposed methods for extracting features, it has been determined which alternative offers better performance. The final results have been obtained taking into account the proposed problem and the fact that the environment was not altered to perform the tests. PMID:27483264

  9. Seeing ahead: experience and language in spatial perspective.

    PubMed

    Alloway, Tracy Packiam; Corley, Martin; Ramscar, Michael

    2006-03-01

Spatial perspective can be directed by various reference frames, as well as by the direction of motion. In the present study, we explored how ambiguity in spatial tasks can be resolved. Participants were presented with virtual reality environments in order to stimulate a spatial reference frame based on motion. They interacted with an ego-moving spatial system in Experiment 1 and an object-moving spatial system in Experiment 2. While interacting with the virtual environment, the participants were presented with either a question representing a motion system different from that of the virtual environment or a nonspatial question relating to physical features of the virtual environment. They then performed the target task: assigning the label "front" in an ambiguous spatial task. The findings indicate that the disambiguation of spatial terms can be influenced by embodied experiences, as represented by the virtual environment, as well as by linguistic context.

  10. Chosen Striking Location and the User-Tool-Environment System

    ERIC Educational Resources Information Center

    Wagman, Jeffrey B.; Taylor, Kona R.

    2004-01-01

    Controlling a hand-held tool requires that the tool user bring the tool into contact with an environmental surface in a task-appropriate manner. This, in turn, requires applying muscular forces so as to overcome how the object resists being moved about its various axes. Perceived properties of hand-held objects tend to be constrained by inertial…

  11. Your eyes give you away: pupillary responses, EEG dynamics, and applications for BCI (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Sajda, Paul

    2017-05-01

    As we move through an environment, we are constantly making assessments, judgments, and decisions about the things we encounter. Some are acted upon immediately, but many more become mental notes or fleeting impressions - our implicit "labeling" of the world. In this talk I will describe our work using physiological correlates of this labeling to construct a hybrid brain-computer interface (hBCI) system for efficient navigation of a 3-D environment. Specifically, we record electroencephalographic (EEG), saccadic, and pupillary data from subjects as they move through a small part of a 3-D virtual city under free-viewing conditions. Using machine learning, we integrate the neural and ocular signals evoked by the objects they encounter to infer which ones are of subjective interest. These inferred labels are propagated through a large computer vision graph of objects in the city, using semi-supervised learning to identify other, unseen objects that are visually similar to those that are labelled. Finally, the system plots an efficient route so that subjects visit similar objects of interest. We show that by exploiting the subjects' implicit labeling, the median search precision is increased from 25% to 97%, and the median subject need only travel 40% of the distance to see 84% of the objects of interest. We also find that the neural and ocular signals contribute in a complementary fashion to the classifiers' inference of subjects' implicit labeling. In summary, we show that neural and ocular signals reflecting subjective assessment of objects in a 3-D environment can be used to inform a graph-based learning model of that environment, resulting in an hBCI system that improves navigation and information delivery specific to the user's interests.

  12. Accuracy assessment of the Precise Point Positioning method applied for surveys and tracking moving objects in GIS environment

    NASA Astrophysics Data System (ADS)

    Ilieva, Tamara; Gekov, Svetoslav

    2017-04-01

The Precise Point Positioning (PPP) method gives users the opportunity to determine point locations using a single GNSS receiver. The accuracy of point locations determined by PPP is better than that of standard point positioning, due to the precise satellite orbit and clock corrections developed and maintained by the International GNSS Service (IGS). The aim of our current research is the accuracy assessment of the PPP method applied for surveys and tracking moving objects in a GIS environment. The PPP data are collected using a software application we developed previously, which allows different sets of attribute data for the measurements and their accuracy to be used. The results from the PPP measurements are compared directly within the geospatial database to other sets of terrestrial data: measurements obtained by total stations and by real-time kinematic and static GNSS.
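    The accuracy assessment described, comparing PPP solutions against total-station or RTK reference coordinates for the same points, reduces to computing coordinate residuals. A minimal horizontal-RMSE sketch follows; the function name and the (easting, northing) tuple format are assumptions for illustration.

    ```python
    import math

    def horizontal_rmse(ppp_points, reference_points):
        """RMSE of horizontal differences between PPP solutions and
        reference coordinates for the same points.

        Each point is an (easting, northing) tuple in metres, and the two
        lists pair up point-for-point.
        """
        assert len(ppp_points) == len(reference_points) > 0
        sq = [
            (e1 - e2) ** 2 + (n1 - n2) ** 2
            for (e1, n1), (e2, n2) in zip(ppp_points, reference_points)
        ]
        return math.sqrt(sum(sq) / len(sq))
    ```

    The same residual computation extends naturally to the vertical component or to per-epoch comparison along a tracked trajectory.
    
    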

  13. A new urban planning code's impact on walking: the residential environments project.

    PubMed

    Christian, Hayley; Knuiman, Matthew; Bull, Fiona; Timperio, Anna; Foster, Sarah; Divitini, Mark; Middleton, Nicholas; Giles-Corti, Billie

    2013-07-01

    We examined whether people moving into a housing development designed according to a state government livable neighborhoods subdivision code engage in more walking than do people who move to other types of developments. In a natural experiment of 1813 people building homes in 73 new housing developments in Perth, Western Australia, we surveyed participants before and then 12 and 36 months after moving. We measured self-reported walking using the Neighborhood Physical Activity Questionnaire and collected perceptions of the environment and self-selection factors. We calculated objective measures of the built environment using a Geographic Information System. After relocation, participants in livable versus conventional developments had greater street connectivity, residential density, land use mix, and access to destinations and more positive perceptions of their neighborhood (all P < .05). However, there were no significant differences in walking over time by type of development (P > .05). Implementation of the Livable Neighborhoods Guidelines produced more supportive environments; however, the level of intervention was insufficient to encourage more walking. Evaluations of new urban planning policies need to incorporate longer term follow-up to allow time for new neighborhoods to develop.

  14. Embodied affectivity: on moving and being moved

    PubMed Central

    Fuchs, Thomas; Koch, Sabine C.

    2014-01-01

    There is a growing body of research indicating that bodily sensation and behavior strongly influence one's emotional reaction toward certain situations or objects. On this background, a framework model of embodied affectivity is suggested: we regard emotions as resulting from the circular interaction between affective qualities or affordances in the environment and the subject's bodily resonance, be it in the form of sensations, postures, expressive movements or movement tendencies. Motion and emotion are thus intrinsically connected: one is moved by movement (perception; impression; affection) and moved to move (action; expression; e-motion). Through its resonance, the body functions as a medium of emotional perception: it colors or charges self-experience and the environment with affective valences while it remains itself in the background of one's own awareness. This model is then applied to emotional social understanding, or interaffectivity, which is regarded as an intertwinement of two cycles of embodied affectivity, continuously modifying each partner's affective affordances and bodily resonance. We conclude with considerations of how embodied affectivity is altered in psychopathology and can be addressed in psychotherapy of the embodied self. PMID:24936191

  15. Simultaneous 3D-vibration measurement using a single laser beam device

    NASA Astrophysics Data System (ADS)

    Brecher, Christian; Guralnik, Alexander; Baümler, Stephan

    2012-06-01

    Today's commercial solutions for vibration measurement and modal analysis are 3D-scanning laser Doppler vibrometers, used mainly for open surfaces in the automotive and aerospace industries, and classic triaxial accelerometers, used in civil engineering, for most industrial applications in manufacturing environments, and particularly for partially closed structures. This paper presents a novel measurement approach that uses a single laser beam device and optical reflectors to perform simultaneous 3D dynamic measurement and geometry measurement of the investigated object. We show the application of this so-called laser tracker for modal testing of structures on a mechanical manufacturing shop floor. A holistic measurement method is developed comprising manual reflector placement, semi-automated geometric modeling of the investigated objects, and fully automated vibration measurement up to 1000 Hz and down to amplitudes of a few microns. Additionally, a quickly set-up dynamic measurement of moving objects using a tracking technique is presented that uses only the device's own functionality and requires neither a predefined moving path of the target nor electronic synchronization to the moving object.

  16. Gaze control for an active camera system by modeling human pursuit eye movements

    NASA Astrophysics Data System (ADS)

    Toelg, Sebastian

    1992-11-01

    The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that computes the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave stably under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.

  17. Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features

    NASA Astrophysics Data System (ADS)

    Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique

    2011-12-01

    We propose a new multi-target tracking approach that is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. To obtain the 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model, and visual reliability measures of its attributes. These reliability measures allow the contribution of noisy, erroneous, or false data to be properly weighted in order to better maintain the integrity of the object dynamics model. A new multi-target tracking algorithm then uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The approach has been validated on publicly accessible video surveillance benchmarks. It runs in real time and its results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
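    The reliability-weighted combination of attribute estimates described in this abstract can be sketched as a simple weighted average. The attribute (object width), its values, and the weighting scheme below are illustrative assumptions, not the authors' exact formulation:

```python
def fuse(estimates):
    """Combine (value, reliability) pairs into one weighted estimate.

    Reliability is assumed to lie in (0, 1]; unreliable measurements
    contribute proportionally less to the fused value.
    """
    total_w = sum(r for _, r in estimates)
    if total_w == 0:
        return None
    return sum(v * r for v, r in estimates) / total_w

# e.g. fusing a width from a noisy 2D segmentation (low reliability)
# with a width derived from the 3D parallelepiped model (high reliability)
width = fuse([(1.9, 0.3), (1.7, 0.9)])  # pulled toward the reliable value
```

    The same scheme extends to any per-attribute reliability: erroneous or false data simply receive weights near zero instead of being discarded outright.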

  18. Lidar-based door and stair detection from a mobile robot

    NASA Astrophysics Data System (ADS)

    Bansal, Mayank; Southall, Ben; Matei, Bogdan; Eledath, Jayan; Sawhney, Harpreet

    2010-04-01

    We present an on-the-move LIDAR-based object detection system for autonomous and semi-autonomous unmanned vehicle systems. In this paper we make several contributions: (i) we describe an algorithm for real-time detection of objects such as doors and stairs in indoor environments; (ii) we describe efficient data structures and algorithms for processing 3D point clouds acquired by laser scanners in a streaming manner, which minimize memory copying and access. We show qualitative results demonstrating the effectiveness of our approach on runs in an indoor office environment.

  19. Adaptive Oceanographic Sampling in a Coastal Environment Using Autonomous Gliding Vehicles

    DTIC Science & Technology

    2003-08-01

    cost autonomous vehicles with near-global range and modular sensor payload. Particular emphasis is placed on the development of adaptive sampling...environment. Secondary objectives include continued development of adaptive sampling strategies suitable for large fleets of slow-moving autonomous ... vehicles , and development and implementation of new oceanographic sensors and sampling methodologies. The main task completed was a complete redesign of

  20. Ethics and Environment: Topics for Enquiry and Discussion by Older Children.

    ERIC Educational Resources Information Center

    Schools Council, London (England).

    The objective of this set of investigations is to start with the problems that beset us in our use of the environment and, through these, to move towards a better understanding of the principles which regulate life on this planet. The packs, or units, are concerned with a number of topics of environmental importance, each of which deals with some…

  1. Evidence-Based Design Features Improve Sleep Quality Among Psychiatric Inpatients.

    PubMed

    Pyrke, Ryan J L; McKinnon, Margaret C; McNeely, Heather E; Ahern, Catherine; Langstaff, Karen L; Bieling, Peter J

    2017-10-01

    The primary aim of the present study was to compare sleep characteristics pre- and post-move into a state-of-the-art mental health facility, which offered private sleeping quarters. Significant evidence points toward sleep disruption among psychiatric inpatients. It is unclear, however, how environmental factors (e.g., dorm-style rooms) impact sleep quality in this population. To assess sleep quality, a novel objective technology, actigraphy, was used before and after a facility move. Subjective daily interviews were also administered, along with the Horne-Ostberg Morningness-Eveningness Questionnaire and the Pittsburgh Sleep Quality Index. Actigraphy revealed significant improvements in objective sleep quality following the facility move. Interestingly, subjective report of sleep quality did not correlate with the objective measures. Circadian sleep type appeared to play a role in influencing subjective attitudes toward sleep quality. Built environment has a significant effect on the sleep quality of psychiatric inpatients. Given well-documented disruptions in sleep quality present among psychiatric patients undergoing hospitalization, design elements like single patient bedrooms are highly desirable.

  2. Object tracking via background subtraction for monitoring illegal activity in crossroad

    NASA Astrophysics Data System (ADS)

    Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan

    2016-07-01

    In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity at zebra crossings. At a zebra crossing, depending on the traffic light status, a driver or pedestrian should be warned early of any illegal move in order to fully avoid a collision. In this research, we first detect the status of the pedestrian traffic light and monitor the crossing for vehicle and pedestrian movement. Background-subtraction-based object detection and tracking is performed to detect pedestrians and vehicles at the crossing. Shadow removal, blob segmentation, and trajectory analysis are used to improve object detection and classification performance. We demonstrate the approach on several video sequences recorded at different times and in different environments, such as daytime and nighttime and sunny and rainy conditions. Our experimental results show that this simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.
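    The background-subtraction step at the core of such systems can be sketched with a running-average background model and per-pixel thresholding. The learning rate and threshold below are common illustrative choices, not values from the paper:

```python
import numpy as np

def detect_foreground(frames, alpha=0.05, thresh=25):
    """Yield a boolean foreground mask for each grayscale frame.

    The background is a running average that is updated only where the
    scene appears static, so moving objects are not absorbed into it.
    """
    background = frames[0].astype(float)
    for frame in frames[1:]:
        diff = np.abs(frame.astype(float) - background)
        mask = diff > thresh                      # foreground decision
        # blend the new frame into the background at static pixels only
        background[~mask] = ((1 - alpha) * background
                             + alpha * frame)[~mask]
        yield mask
```

    In a full system, the resulting masks would then feed the shadow removal, blob segmentation, and trajectory analysis stages mentioned above.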

  3. Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement

    PubMed Central

    Rogers, Cassandra; Warren, Paul A.

    2017-01-01

    Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement. PMID:29201335
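    The flow-parsing idea in this abstract can be reduced to a toy vector computation: retinal motion is the sum of world-relative object motion and the optic flow produced by self-movement, so subtracting the estimated self-motion component recovers the object's trajectory. The 2D velocities below are invented for illustration:

```python
def parse_flow(retinal, self_flow):
    """Recover world-relative object motion (vx, vy) from retinal
    motion by removing the self-motion flow component at that location."""
    return (retinal[0] - self_flow[0], retinal[1] - self_flow[1])

# Forward self-motion adds radially outward flow at the probe location;
# removing it reveals a purely vertical object trajectory.
world = parse_flow(retinal=(1.0, 2.0), self_flow=(1.0, 0.0))
```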

  4. Grasping rigid objects in zero-g

    NASA Astrophysics Data System (ADS)

    Anderson, Greg D.

    1993-12-01

    The extravehicular activity helper/retriever (EVAHR) is a prototype for an autonomous free-flying robotic astronaut helper. The ability to grasp a moving object is a fundamental skill required of any autonomous free-flyer. This paper discusses an algorithm that couples resolved acceleration control with potential-field-based obstacle avoidance to enable a manipulator to track and capture a rigid object in (imperfect) zero-g while avoiding joint limits, singular configurations, and unintentional impacts between the manipulator and the environment.

  5. Weighted feature selection criteria for visual servoing of a telerobot

    NASA Technical Reports Server (NTRS)

    Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.

    1989-01-01

    Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.

  6. The perception of geometrical structure from congruence

    NASA Technical Reports Server (NTRS)

    Lappin, Joseph S.; Wason, Thomas D.

    1989-01-01

    The principal function of vision is to measure the environment. As demonstrated by the coordination of motor actions with the positions and trajectories of moving objects in cluttered environments and by rapid recognition of solid objects in varying contexts from changing perspectives, vision provides real-time information about the geometrical structure and location of environmental objects and events. The geometric information provided by 2-D spatial displays is examined. It is proposed that the geometry of this information is best understood not within the traditional framework of perspective trigonometry, but in terms of the structure of qualitative relations defined by congruences among intrinsic geometric relations in images of surfaces. The basic concepts of this geometrical theory are outlined.

  7. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis

    PubMed Central

    Schwegmann, Alexander; Lindemann, Jens P.; Egelhaaf, Martin

    2014-01-01

    Knowing the depth structure of the environment is crucial for moving animals in many behavioral contexts, such as collision avoidance, targeting objects, or spatial navigation. An important source of depth information is motion parallax. This powerful cue is generated on the eyes during translatory self-motion with the retinal images of nearby objects moving faster than those of distant ones. To investigate how the visual motion pathway represents motion-based depth information we analyzed its responses to image sequences recorded in natural cluttered environments with a wide range of depth structures. The analysis was done on the basis of an experimentally validated model of the visual motion pathway of insects, with its core elements being correlation-type elementary motion detectors (EMDs). It is the key result of our analysis that the absolute EMD responses, i.e., the motion energy profile, represent the contrast-weighted nearness of environmental structures during translatory self-motion at a roughly constant velocity. In other words, the output of the EMD array highlights contours of nearby objects. This conclusion is largely independent of the scale over which EMDs are spatially pooled and was corroborated by scrutinizing the motion energy profile after eliminating the depth structure from the natural image sequences. Hence, the well-established dependence of correlation-type EMDs on both velocity and textural properties of motion stimuli appears to be advantageous for representing behaviorally relevant information about the environment in a computationally parsimonious way. PMID:25136314
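    The correlation-type EMD at the core of this model can be sketched in a few lines: each half-detector multiplies the signal at one photoreceptor with a delayed (low-pass filtered) signal from its neighbour, and the two mirror-symmetric halves are subtracted. The time constant and signals below are illustrative, not the model's fitted parameters:

```python
import numpy as np

def emd_response(signal_a, signal_b, tau=2.0, dt=1.0):
    """Opponent Reichardt-detector output for two neighbouring
    photoreceptor signals; positive for motion from A toward B."""
    a = np.asarray(signal_a, float)
    b = np.asarray(signal_b, float)

    def lowpass(x):
        # first-order low-pass filter acting as the delay line
        y = np.zeros_like(x)
        k = dt / (tau + dt)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + k * (x[i] - y[i - 1])
        return y

    return lowpass(a) * b - lowpass(b) * a  # preferred minus null direction
```

    An edge moving in the preferred direction reaches A before B, so the delayed A signal coincides with the direct B signal and the net response is positive; motion in the opposite direction gives a negative response.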

  8. Execution of saccadic eye movements affects speed perception

    PubMed Central

    Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.

    2018-01-01

    Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
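    The integration scheme in this model can be caricatured as a weighted sum of a retinal velocity signal and an extraretinal (eye-velocity) signal, with the retinal weight reduced around corrective saccades. The function, weights, and velocities below are a toy illustration, not the authors' fitted model:

```python
def perceived_speed(retinal, eye, w_retinal=0.7):
    """Toy estimate of target speed (deg/s) as a weighted combination
    of retinal slip and an eye-velocity (efference copy) signal."""
    return w_retinal * retinal + (1 - w_retinal) * eye

# During a forward catch-up saccade the eye briefly moves much faster
# than the target; down-weighting the retinal signal (here w = 0.4)
# shifts the estimate toward the high eye-velocity signal, so the
# target would be judged faster than during pure pursuit.
smooth = perceived_speed(retinal=2.0, eye=10.0)
saccadic = perceived_speed(retinal=2.0, eye=30.0, w_retinal=0.4)
```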

  9. Crawling and walking infants encounter objects differently in a multi-target environment.

    PubMed

    Dosso, Jill A; Boudreau, J Paul

    2014-10-01

    From birth, infants move their bodies in order to obtain information and stimulation from their environment. Exploratory movements are important for the development of an infant's understanding of the world and are well established as being key to cognitive advances. Newly acquired motor skills increase the potential actions available to the infant. However, the way that infants employ potential actions in environments with multiple potential targets is undescribed. The current work investigated the target object selections of infants across a range of self-produced locomotor experience (11- to 14-month-old crawlers and walkers). Infants repeatedly accessed objects among pairs of objects differing in both distance and preference status, some requiring locomotion. Overall, their object actions were found to be sensitive to object preference status; however, the role of object distance in shaping object encounters was moderated by movement status. Crawlers' actions appeared opportunistic and were biased towards nearby objects while walkers' actions appeared intentional and were independent of object position. Moreover, walkers' movements favoured preferred objects more strongly for children with higher levels of self-produced locomotion experience. The multi-target experimental situation used in this work parallels conditions faced by foraging organisms, and infants' behaviours were discussed with respect to optimal foraging theory. There is a complex interplay between infants' agency, locomotor experience, and environment in shaping their motor actions. Infants' movements, in turn, determine the information and experiences offered to infants by their micro-environment.

  10. Swing-free transport of suspended loads. Summer research report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basher, A.M.H.

    1996-02-01

    Transportation of large objects using a traditional bridge crane can induce pendulum motion (swing) of the object. In environments such as a factory, the energy contained in the swinging mass can be large, and attempts to move the mass onto a target while it is still swinging can therefore cause considerable damage. Oscillations must be damped, or allowed to decay, before the next process can take place. Stopping the swing can be accomplished by moving the bridge in a manner that counteracts the swing, which can sometimes be done by a skilled operator, or by waiting for the swing to damp sufficiently that the object can be moved to the target without risk of damage. One method that can be utilized for oscillation suppression is input preshaping. The validity of this method depends on exact knowledge of the system dynamics. The method can be modified to provide some degree of robustness with respect to unknown dynamics, but at the cost of transient response speed. This report describes investigations on the development of a controller to damp the oscillations.
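    A standard instance of the input preshaping mentioned above is the two-impulse zero-vibration (ZV) shaper: the move command is convolved with two impulses spaced half a damped period apart so that the residual swings cancel. This is a textbook sketch, not the report's specific controller, and the pendulum parameters are illustrative:

```python
import math

def zv_shaper(omega_n, zeta):
    """Return (amplitudes, times) of the two-impulse ZV input shaper
    for natural frequency omega_n (rad/s) and damping ratio zeta."""
    omega_d = omega_n * math.sqrt(1 - zeta ** 2)          # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    a1 = 1 / (1 + K)                                      # impulse amplitudes
    a2 = K / (1 + K)                                      # (sum to 1)
    return [a1, a2], [0.0, math.pi / omega_d]             # impulse times

# A 1 m payload cable: omega_n = sqrt(g / L) ~ 3.13 rad/s, light damping
amps, times = zv_shaper(omega_n=3.13, zeta=0.0)
```

    Exact cancellation requires the true frequency and damping, which is precisely the model-knowledge limitation the report notes; more robust shapers (e.g. ZVD) trade additional move time for insensitivity to modeling error.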

  11. Presentation of a large amount of moving objects in a virtual environment

    NASA Astrophysics Data System (ADS)

    Ye, Huanzhuo; Gong, Jianya; Ye, Jing

    2004-05-01

    Managing the presentation of a large number of moving objects in a virtual environment requires careful consideration. A motion state model (MSM) is used to represent the motion of objects, and a 2n tree is used to index the motion data stored in databases or files. To minimize memory occupation for static models, a cache with LRU or FIFO refreshing is introduced. DCT and wavelet transforms work well with different playback speeds of motion presentation because they can filter low frequencies from the motion data and adjust the filter according to playback speed. Since large amounts of data are continuously retrieved, processed, displayed, and then discarded, multithreading is naturally employed, although a single thread with carefully arranged data retrieval also works well when the number of objects is not very large. With multithreading, concurrency should be placed at data retrieval, where waiting may occur, rather than at calculation or display, and synchronization should be carefully arranged so that the threads collaborate well. Collision detection is not needed when playing back history data or sampled current data; however, it is necessary for spatial state prediction. When the current state is presented, either a predicting-adjusting method or a late-updating method can be used, according to the user's preference.
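    The LRU cache for static model data mentioned above can be sketched with an ordered dictionary. The capacity, keys, and loader below are illustrative, not the paper's implementation:

```python
from collections import OrderedDict

class ModelCache:
    """Fixed-capacity cache that evicts the least recently used model."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key, loader):
        """Return the cached model, loading it (and evicting the LRU
        entry if over capacity) on a miss."""
        if key in self._store:
            self._store.move_to_end(key)        # mark as recently used
            return self._store[key]
        model = loader(key)
        self._store[key] = model
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict least recently used
        return model

cache = ModelCache(capacity=2)
cache.get("car", lambda k: f"mesh:{k}")
cache.get("tree", lambda k: f"mesh:{k}")
cache.get("car", lambda k: f"mesh:{k}")         # refreshes "car"
cache.get("boat", lambda k: f"mesh:{k}")        # evicts "tree"
```

    Swapping `move_to_end` for a no-op on hits would turn this into the FIFO refreshing policy the abstract also mentions.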

  12. Transient inactivation of the anterior cingulate cortex in rats disrupts avoidance of a dynamic object.

    PubMed

    Svoboda, Jan; Lobellová, Veronika; Popelíková, Anna; Ahuja, Nikhil; Kelemen, Eduard; Stuchlík, Aleš

    2017-03-01

    Although animals often learn and monitor the spatial properties of relevant moving objects such as conspecifics and predators to properly organize their own spatial behavior, the underlying brain substrate has received little attention and hence remains elusive. Because the anterior cingulate cortex (ACC) participates in conflict monitoring and effort-based decision making, and ACC neurons respond to objects in the environment, it may also play a role in the monitoring of moving cues and exerting the appropriate spatial response. We used a robot avoidance task in which a rat had to maintain at least a 25 cm distance from a small programmable robot to avoid a foot shock. In successive sessions, we trained ten Long Evans male rats to avoid a fast-moving robot (4 cm/s), a stationary robot, and a slow-moving robot (1 cm/s). In each condition, the ACC was transiently inactivated by bilateral injections of muscimol in the penultimate session and a control saline injection was given in the last session. Compared to the corresponding saline session, ACC-inactivated rats received more shocks when tested in the fast-moving condition, but not in the stationary or slow robot conditions. Furthermore, ACC-inactivated rats less frequently responded to an approaching robot with appropriate escape responses although their response to shock stimuli remained preserved. Since we observed no effect on slow or stationary robot avoidance, we conclude that the ACC may exert cognitive efforts for monitoring dynamic updating of the position of an object, a role complementary to the dorsal hippocampus. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Using advanced computer vision algorithms on small mobile robots

    NASA Astrophysics Data System (ADS)

    Kogut, G.; Birchmore, F.; Biagtan Pacis, E.; Everett, H. R.

    2006-05-01

    The Technology Transfer project employs a spiral development process to enhance the functionality and autonomy of mobile robot systems in the Joint Robotics Program (JRP) Robotic Systems Pool by converging existing component technologies onto a transition platform for optimization. An example of this approach is the implementation of advanced computer vision algorithms on small mobile robots. We demonstrate the implementation and testing of the following two algorithms useful on mobile robots: 1) object classification using a boosted cascade of classifiers trained with the Adaboost training algorithm, and 2) human presence detection from a moving platform. Object classification is performed with an Adaboost training system developed at the University of California, San Diego (UCSD) Computer Vision Lab. This classification algorithm has been used to successfully detect the license plates of automobiles in motion in real time. While working toward a solution that increases the robustness of this system for generic object recognition, this paper demonstrates an extension of the application by detecting soda cans in a cluttered indoor environment. The human presence detection system uses a data fusion algorithm that combines results from a scanning laser and a thermal imager, and is able to detect humans while both the humans and the robot are moving simultaneously. In both systems, the two aforementioned algorithms were implemented on embedded hardware and optimized for real-time use. Test results are shown for a variety of environments.

  14. Integrating Omic Technologies into Aquatic Ecological Risk Assessment and Environmental Monitoring: Hurdles, Achievements and Future Outlook

    EPA Science Inventory

    In this commentary we present the findings from an international consortium on fish toxicogenomics sponsored by the UK Natural Environment Research Council (NERC) with an objective of moving omic technologies into chemical risk assessment and environmental monitoring. Objectiv...

  15. Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions

    NASA Astrophysics Data System (ADS)

    Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.

    2005-03-01

    The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system can also focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system employs two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects moving objects using change detection techniques. The detected objects are tracked over time and their positions are indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor so that it points its zooming optics toward the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of an object detected in the image plane reference system is translated into coordinates on the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimate of the objects' tracks and to perform face detection and tracking. The novelty and strength of the work reside in the cooperative multi-sensor approach, in high-resolution long-distance tracking, and in the automatic collection of biometric data such as a clip of a person's face for recognition purposes.

  16. Design, characterization, and control of the NASA three degree of freedom reaction compensation platform

    NASA Technical Reports Server (NTRS)

    Birkhimer, Craig; Newman, Wyatt; Choi, Benjamin; Lawrence, Charles

    1994-01-01

    Increasing research is being done into industrial uses for the microgravity environment aboard orbiting space vehicles. However, there is some concern over the effects of reaction forces produced by moving objects, especially motors, robotic actuators, and astronauts. Reaction forces produced by the movement of these objects may manifest themselves as undesirable accelerations in the space vehicle making the vehicle unusable for microgravity applications. It is desirable to provide compensation for such forces using active means. This paper presents the design and experimental evaluation of the NASA three degree of freedom reaction compensation platform, a system designed to be a testbed for the feasibility of active attenuation of reaction forces caused by moving objects in a microgravity environment. Unique 'linear motors,' which convert electrical current directly into rectilinear force, are used in the platform design. The linear motors induce accelerations of the displacer inertias. These accelerations create reaction forces that may be controlled to counteract disturbance forces introduced to the platform. The stated project goal is to reduce reaction forces by 90 percent, or -20 dB. Description of the system hardware, characterization of the actuators and the composite system, and design of the software safety system and control software are included.

  17. Replacement-ready? Succession planning tops health care administrators' priorities.

    PubMed

    Husting, P M; Alderman, M

    2001-09-01

    Nurses' increasing age, coupled with health care's rapidly changing environment, moves succession planning, originally only a business-sector tool, to a top administrative priority. Through active support from your facility's executive leadership and a clear linkage to long-range organizational objectives, you can implement this progressive procedure.

  18. Sensory Agreement Guides Kinetic Energy Optimization of Arm Movements during Object Manipulation.

    PubMed

    Farshchiansadegh, Ali; Melendez-Calderon, Alejandro; Ranganathan, Rajiv; Murphey, Todd D; Mussa-Ivaldi, Ferdinando A

    2016-04-01

    The laws of physics establish the energetic efficiency of our movements. In some cases, like locomotion, the mechanics of the body dominate in determining the energetically optimal course of action. In other tasks, such as manipulation, energetic costs depend critically upon the variable properties of objects in the environment. Can the brain identify and follow energy-optimal motions when these motions require moving along unfamiliar trajectories? What feedback information is required for such optimal behavior to occur? To answer these questions, we asked participants to move their dominant hand between different positions while holding a virtual mechanical system with complex dynamics (a planar double pendulum). In this task, trajectories of minimum kinetic energy were along curvilinear paths. Our findings demonstrate that participants were capable of finding the energy-optimal paths, but only when provided with veridical visual and haptic information pertaining to the object, lacking which the trajectories were executed along rectilinear paths.
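    For a planar double pendulum with point masses, the kinetic energy that such energy-optimal trajectories minimize has a standard closed form. The link lengths, masses, and joint states below are illustrative, not the study's experimental parameters:

```python
import math

def kinetic_energy(th1, th2, w1, w2, m1=1.0, m2=1.0, l1=0.5, l2=0.5):
    """Kinetic energy T (J) of a planar double pendulum with point
    masses at the link ends; th in rad, w (angular velocity) in rad/s."""
    t1 = 0.5 * m1 * (l1 * w1) ** 2
    # second mass carries the first link's velocity plus its own,
    # with a coupling term depending on the relative joint angle
    t2 = 0.5 * m2 * ((l1 * w1) ** 2 + (l2 * w2) ** 2
                     + 2 * l1 * l2 * w1 * w2 * math.cos(th1 - th2))
    return t1 + t2
```

    Because T depends on the relative angle th1 - th2, the minimum-energy path between two hand positions is generally curvilinear, consistent with the curved optimal trajectories described above.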

  19. A comparison of moving object detection methods for real-time moving object detection

    NASA Astrophysics Data System (ADS)

    Roshan, Aditya; Zhang, Yun

    2014-06-01

    Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification, and face detection to military surveillance. Many methods have been developed for moving object detection, but it is very difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time detection, and most of those are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work is based on evaluating these four methods using two different cameras and two different scenes. The methods were implemented in MATLAB, and the results are compared in terms of completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which implies that it can be implemented for real-time moving object detection.
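As a concrete illustration of the simplest of the families compared here, a minimal frame-differencing detector can be sketched in a few lines of Python (the toy 3x3 frames, the threshold value, and the function name are illustrative assumptions, not code from the paper):

```python
def frame_difference(prev, curr, thresh):
    """Binary motion mask: a pixel is 'moving' if its intensity
    changed by more than `thresh` between consecutive frames."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

# Toy 3x3 grayscale frames: the centre pixel jumps from 10 to 200.
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
mask = frame_difference(prev, curr, thresh=30)  # only the centre is flagged
```

In practice, the Gaussian mixture and optical-flow methods replace the single threshold with a per-pixel statistical model or a dense motion field, which is where the differences in processing time measured in the paper come from.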

  20. Change detection in urban and rural driving scenes: Effects of target type and safety relevance on change blindness.

    PubMed

    Beanland, Vanessa; Filtness, Ashleigh J; Jeans, Rhiannon

    2017-03-01

    The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully-licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to "looked-but-failed-to-see" errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes safety relevance had marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall, the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians). Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. A biological hierarchical model based underwater moving object detection.

    PubMed

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the remarkable visual sensing abilities of underwater animals, their visual mechanisms are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior-knowledge learning limit the adaptation of such models in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model that roughly separates the object and background regions. The texture feature of each pixel in the rough object region is then analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method performs better: compared to the traditional Gaussian background model, the completeness of object detection is 97.92%, with only 0.94% of the background region included in the detection results.

  2. A Biological Hierarchical Model Based Underwater Moving Object Detection

    PubMed Central

    Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen

    2014-01-01

    Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the remarkable visual sensing abilities of underwater animals, their visual mechanisms are generally regarded as cues for establishing bionic models that are better adapted to underwater environments. However, low accuracy rates and the absence of prior-knowledge learning limit the adaptation of such models in underwater applications. To address the problems caused by inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. First, the image is segmented into several sub-blocks, and intensity information is extracted to establish a background model that roughly separates the object and background regions. The texture feature of each pixel in the rough object region is then analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method performs better: compared to the traditional Gaussian background model, the completeness of object detection is 97.92%, with only 0.94% of the background region included in the detection results. PMID:25140194
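The first, coarse stage of such a hierarchical model can be sketched as a block-wise intensity comparison against a background model (the block size, threshold, and toy frames are illustrative assumptions; the texture-based refinement stage is only indicated in the docstring):

```python
def rough_object_blocks(frame, background, block=3, thresh=30):
    """Stage 1: flag blocks whose mean intensity deviates from the
    background model. Stage 2 (not shown) would refine each flagged
    block to a precise contour using per-pixel texture features."""
    h, w = len(frame), len(frame[0])
    flagged = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            total, n = 0, 0
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    total += frame[y][x] - background[y][x]
                    n += 1
            if abs(total / n) > thresh:           # mean deviation per block
                flagged.append((by, bx))
    return flagged

background = [[50] * 6 for _ in range(6)]
frame = [row[:] for row in background]
for y in range(3):                 # an object brightens the top-right block
    for x in range(3, 6):
        frame[y][x] = 120
blocks = rough_object_blocks(frame, background)
```

Working block-wise first is what makes the method tolerant of inhomogeneous illumination: a brightness gradient spread over a whole block shifts its mean far less than a compact object does.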

  3. Real-time moving objects detection and tracking from airborne infrared camera

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2017-10-01

    Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing, and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene, and pursuing objects of interest. To fully exploit this potential, versatile solutions are needed, but in the literature the majority of them work only under specific assumptions about the scenario, the characteristics of the moving objects, or the aircraft movements. To overcome these limitations, we propose a novel approach based on a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: a detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; a registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and a tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed for use in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rates and in terms of accuracy in estimating the position and velocity of the objects. In addition, for each frame, the detection and tracking map was generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
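The registration and tracking stages can be sketched as follows (the coordinates, the INS offset, and the rejection rule are illustrative assumptions, not the authors' implementation):

```python
def register(detections, ins_offset):
    """Registration stage: move image-frame detections into a common
    ground frame by removing the aircraft displacement from the INS."""
    dx, dy = ins_offset
    return [(x - dx, y - dy) for x, y in detections]

def moving_only(prev_registered, curr_registered, eps=1.0):
    """Tracking stage: a detection whose registered position matches a
    previous one is a steady scene object and is rejected."""
    return [(x, y) for x, y in curr_registered
            if all(abs(x - px) > eps or abs(y - py) > eps
                   for px, py in prev_registered)]

prev_reg = [(5, 5), (10, 10)]             # registered detections, frame k-1
curr_img = [(7, 5), (14, 12)]             # image-frame detections, frame k
curr_reg = register(curr_img, ins_offset=(2, 0))   # aircraft moved 2 units
movers = moving_only(prev_reg, curr_reg)  # (5, 5) is steady and dropped
```

The key idea is that, once detections live in a common reference frame, "steady" and "moving" become trivial to separate even though everything moves in the raw image due to the aircraft's own motion.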

  4. An autonomous robot inspired by insect neurophysiology pursues moving features in natural environments

    NASA Astrophysics Data System (ADS)

    Bagheri, Zahra M.; Cazzolato, Benjamin S.; Grainger, Steven; O'Carroll, David C.; Wiederman, Steven D.

    2017-08-01

    Objective. Many computer vision and robotic applications require the implementation of robust and efficient target-tracking algorithms on a moving platform. However, deployment of a real-time system is challenging, even with the computational power of modern hardware. Lightweight and low-powered flying insects, such as dragonflies, track prey or conspecifics within cluttered natural environments, illustrating an efficient biological solution to the target-tracking problem. Approach. We used our recent recordings from ‘small target motion detector’ neurons in the dragonfly brain to inspire the development of a closed-loop target detection and tracking algorithm. This model exploits facilitation, a slow build-up of response to targets which move along long, continuous trajectories, as seen in our electrophysiological data. To test performance in real-world conditions, we implemented this model on a robotic platform that uses active pursuit strategies based on insect behaviour. Main results. Our robot performs robustly in closed-loop pursuit of targets, despite a range of challenging conditions used in our experiments; low contrast targets, heavily cluttered environments and the presence of distracters. We show that the facilitation stage boosts responses to targets moving along continuous trajectories, improving contrast sensitivity and detection of small moving targets against textured backgrounds. Moreover, the temporal properties of facilitation play a useful role in handling vibration of the robotic platform. We also show that the adoption of feed-forward models which predict the sensory consequences of self-movement can significantly improve target detection during saccadic movements. Significance. Our results provide insight into the neuronal mechanisms that underlie biological target detection and selection (from a moving platform), as well as highlight the effectiveness of our bio-inspired algorithm in an artificial visual system.
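The facilitation mechanism described above can be sketched as a decaying map that boosts responses near recently seen target positions (the grid positions, gain, decay constant, and neighborhood are illustrative assumptions, not the published neuronal model):

```python
def facilitation_step(fac, detections, gain=1.0, decay=0.5):
    """One frame of facilitated detection: responses are boosted by
    facilitation deposited near the target's previous positions, so
    targets on long continuous trajectories grow stronger over time."""
    fac = {p: v * decay for p, v in fac.items()}       # facilitation fades
    boosted = {}
    for (y, x), resp in detections.items():
        local = max(fac.get((y + dy, x + dx), 0.0)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        boosted[(y, x)] = resp * (1.0 + local)         # boost near old track
        fac[(y, x)] = fac.get((y, x), 0.0) + gain      # deposit facilitation
    return fac, boosted

fac, responses = {}, []
for pos in [(3, 3), (3, 4), (3, 5)]:      # a small target drifting right
    fac, boosted = facilitation_step(fac, {pos: 1.0})
    responses.append(boosted[pos])
```

A target hopping between distant positions would never land inside its own facilitation trail and would stay at the baseline response, which is how this mechanism improves contrast sensitivity for genuine trajectories against textured backgrounds.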

  5. Cable applications in robot compliant devices

    NASA Technical Reports Server (NTRS)

    Kerley, James J.

    1987-01-01

    Robotic systems need compliance to connect the robot to the work object. The cable system illustrated offers compliance for mating but can be adjusted in space to become quite stiff. Thus the same system can do both tasks, even in environments where the work object or the robot is moving at different frequencies and amplitudes. The adjustment can be made in all six degrees of freedom; the system can be translated or rotated in any plane and still make good contact and maintain control.

  6. Activity and function recognition for moving and static objects in urban environments from wide-area persistent surveillance inputs

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Bobick, Aaron; Jones, Eric

    2010-04-01

    In this paper, we describe results from experimental analysis of a model designed to recognize activities and functions of moving and static objects from low-resolution wide-area video inputs. Our model is based on representing the activities and functions using three variables: (i) time; (ii) space; and (iii) structures. The activity and function recognition is achieved by imposing lexical, syntactic, and semantic constraints on the lower-level event sequences. In the reported research, we have evaluated the utility and sensitivity of several algorithms derived from natural language processing and pattern recognition domains. We achieved high recognition accuracy for a wide range of activity and function types in the experiments using Electro-Optical (EO) imagery collected by Wide Area Airborne Surveillance (WAAS) platform.

  7. Building Competency-Based Pathways: Successes and Challenges from Leaders in the Field. A Forum

    ERIC Educational Resources Information Center

    American Youth Policy Forum, 2011

    2011-01-01

    This forum provided an overview of competency-based pathways to education and described programs that have successfully utilized these pathways to move all students to success in high school and beyond. Speakers highlighted how innovative learning environments that base student advancement upon mastery of measurable learning objectives have been…

  8. Two applications of time reversal mirrors: seismic radio and seismic radar.

    PubMed

    Hanafy, Sherif M; Schuster, Gerard T

    2011-10-01

    Two seismic applications of time reversal mirrors (TRMs) are introduced and tested with field experiments. The first is sending, receiving, and decoding coded messages, similar to a radio except that seismic waves are used. The second is, similar to radar surveillance, detecting and tracking moving objects in a remote area, including determining their speed of movement. Both applications require the prior recording of calibration Green's functions in the area of interest. This reference Green's function is used as a codebook to decrypt the coded message in the first application and as a moving sensor in the second. Field tests show that seismic radar can detect the moving coordinates (x(t), y(t), z(t)) of a person running through a calibration site; this information also allows a calculation of the runner's velocity as a function of location. Results with the seismic radio are successful in seismically detecting and decoding coded pulses produced by a hammer. Both seismic radio and radar are highly robust in high-noise environments due to the super-stacking property of TRMs. © 2011 Acoustical Society of America
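The codebook idea behind the seismic radio can be sketched as matched filtering against prerecorded calibration traces (the traces, symbol names, and zero-lag correlation are illustrative assumptions, not the authors' field data or processing chain):

```python
def correlate(a, b):
    """Zero-lag cross-correlation, used here as a matched filter."""
    return sum(x * y for x, y in zip(a, b))

def decode(received, codebook):
    """Decode a received trace by matching it against the calibration
    Green's functions; the best-correlating codebook entry wins."""
    return max(codebook, key=lambda sym: correlate(received, codebook[sym]))

codebook = {"dot":  [1, 0, -1, 0, 1],    # prerecorded calibration traces
            "dash": [0, 1, 0, -1, 0]}
noisy = [0.9, 0.1, -1.1, 0.05, 0.8]      # a "dot" pulse plus noise
symbol = decode(noisy, codebook)
```

Summing the product over many samples is also why TRMs are robust in high-noise environments: the correlation "super-stacks" coherent signal while incoherent noise averages toward zero.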

  9. Distribution majorization of corner points by reinforcement learning for moving object detection

    NASA Astrophysics Data System (ADS)

    Wu, Hao; Yu, Hao; Zhou, Dongxiang; Cheng, Yongqiang

    2018-04-01

    Corner points play an important role in moving object detection, especially in the case of a freely moving camera. Corner points provide more accurate information than other pixels and reduce unnecessary computation. Previous works use only intensity information to locate corner points; however, the information provided by preceding frames can also be used. We utilize this information to focus on more valuable areas and ignore less valuable ones. The proposed algorithm is based on reinforcement learning, which regards the detection of corner points as a Markov process. In the Markov model, the video to be analyzed is regarded as the environment, the selections of blocks for each corner point are regarded as actions, and the detection performance is regarded as the state. Corner points are assigned to blocks separated from the original whole image. Experimentally, we select a conventional method that uses matching and the Random Sample Consensus algorithm to obtain objects as the main framework, and we utilize our algorithm to improve the result. A comparison between the conventional method and the same method augmented with our algorithm shows that our algorithm reduces false detections by 70%.
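The reinforcement-learning idea, with actions choosing which image blocks to search and detection performance acting as the feedback signal, can be sketched as a simple value update per block (the blocks, rewards, and learning rate are illustrative assumptions, not the paper's formulation):

```python
def update_value(values, block, reward, alpha=0.5):
    """One learning step: blocks whose corner points helped detection
    (positive reward) become more likely to be searched next time."""
    values[block] += alpha * (reward - values[block])
    return values

values = {0: 0.0, 1: 0.0, 2: 0.0}        # one value per image block
for _ in range(3):
    update_value(values, 1, reward=1.0)  # block 1 keeps paying off
update_value(values, 2, reward=-1.0)     # block 2 yields false detections
best = max(values, key=values.get)       # block to prioritise next frame
```

Over many frames this concentrates the corner search in areas that have recently produced useful detections, which is the "focus on more valuable areas" behavior the abstract describes.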

  10. A New Continent of Ideas

    NASA Technical Reports Server (NTRS)

    1990-01-01

    While a new technology called 'virtual reality' is still at the 'ground floor' level, one of its basic components, 3D computer graphics, is already in wide commercial use and expanding. Other components that permit a human operator to 'virtually' explore an artificial environment and to interact with it are being demonstrated routinely at Ames and elsewhere. Virtual reality might be defined as an environment capable of being virtually entered - telepresence, it is called - or interacted with by a human. The Virtual Interface Environment Workstation (VIEW) is a head-mounted stereoscopic display system in which the display may be an artificial computer-generated environment or a real environment relayed from remote video cameras. The operator can 'step into' this environment and interact with it. The DataGlove has a series of fiber optic cables and sensors that detect any movement of the wearer's fingers and transmit the information to a host computer; a computer-generated image of the hand moves exactly as the operator moves his gloved hand. With appropriate software, the operator can use the glove to interact with the computer scene by grasping an object. The DataSuit is a sensor-equipped full-body garment that greatly increases the sphere of performance for virtual reality simulations.

  11. Using a CO2 laser for PIR-detector spoofing

    NASA Astrophysics Data System (ADS)

    Schleijpen, Ric H. M. A.; van Putten, Frank J. M.

    2016-10-01

    This paper presents experimental work on the use of a CO2 laser for triggering PIR sensors. Pyro-electric InfraRed (PIR) sensors are often used as motion detectors for detecting moving persons or objects that are warmer than their environment. Apart from uses in the civilian domain, applications in improvised weapons have also been encountered; in such applications the PIR sensor triggers a weapon when moving persons or vehicles are detected. A CO2 laser can be used to project a moving heat spot in front of the PIR, generating the same triggering effect as a real moving object. The goal of the research was to provide a basis for assessing the feasibility of using a CO2 laser as a countermeasure against PIR sensors. After a general introduction to the PIR sensing principle, a theoretical and experimental analysis of the required power levels is presented. Based on this quantitative analysis, a setup for indoor experiments to trigger the PIR devices remotely with a CO2 laser was prepared. Finally, selected results of the experiments are presented, and implications for use as a countermeasure are discussed.

  12. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many moving object detection methods have been proposed, moving object extraction remains at the core of video surveillance. With the complex scenes of the real world, however, false detections, missed detections, and cavities inside the detected body still occur. To solve the problem of incomplete detection of moving objects, a new detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make detection more complete and accurate, image repair and morphological processing techniques, which act as spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame difference, and a method from the literature), the proposed method improves the efficiency and accuracy of detection.
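The combination strategy can be sketched as a per-pixel union of the two cues (the union rule, thresholds, and toy frames are illustrative assumptions; the paper's image repair and morphological steps are omitted):

```python
def detect(prev, curr, background, t_fd=20, t_bg=20):
    """Foreground mask from two cues: the frame difference catches
    motion edges, background subtraction catches the whole object
    body, and their union reduces the cavities either leaves alone."""
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            fd = abs(curr[y][x] - prev[y][x]) > t_fd        # frame difference
            bg = abs(curr[y][x] - background[y][x]) > t_bg  # background subtraction
            mask[y][x] = 1 if (fd or bg) else 0
    return mask

prev = [[10, 10], [10, 10]]
background = [[10, 10], [10, 10]]
curr = [[10, 90], [10, 10]]        # one pixel turns bright
mask = detect(prev, curr, background)
```

Using an up-to-date Gaussian mixture background rather than the fixed array shown here is what suppresses the "ghosts" that a stale background model leaves behind a departed object.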

  13. Integration of the virtual model of a Stewart platform with the avatar of a vehicle in a virtual reality

    NASA Astrophysics Data System (ADS)

    Herbuś, K.; Ociepka, P.

    2016-08-01

    The development of computer-aided design and engineering methods allows virtual tests to be conducted, among others concerning motion simulation of technical means. The paper presents a method of integrating a virtual model of a Stewart platform with the avatar of a vehicle moving in a virtual environment. The problem area includes issues related to the fidelity with which the behavior of the analyzed technical means is mapped. The main object of investigation is a 3D model of a Stewart platform, which is a subsystem of a simulator designed for teaching driving to disabled persons. The platform model, prepared for motion simulation, was created in the "Motion Simulation" module of the CAD/CAE system Siemens PLM NX, whereas the virtual environment in which the passenger car avatar moves was elaborated in the VR system EON Studio. The element integrating the two software environments is a developed application that reads information from the virtual reality (VR) environment concerning the current position of the car avatar and then, based on the accepted algorithm, sends control signals to the respective joints of the Stewart platform model (CAD).

  14. Vision Algorithms to Determine Shape and Distance for Manipulation of Unmodeled Objects

    NASA Technical Reports Server (NTRS)

    Montes, Leticia; Bowers, David; Lumia, Ron

    1998-01-01

    This paper discusses the development of a robotic system for general use in an unstructured environment. This is illustrated through pick and place of randomly positioned, unmodeled objects. There are many applications for this project, including rock collection for the Mars Surveyor Program. The system is demonstrated with a Puma560 robot, Barrett hand, Cognex vision system, and Cimetrix simulation and control, all running on a PC. The demonstration consists of two processes: vision and robotics. The vision system determines the size and location of the unknown objects. The robotics part consists of moving the robot to the object, configuring the hand based on the information from the vision system, and then performing the pick/place operation. This work enhances, and is a part of, the Low Cost Virtual Collaborative Environment, which provides remote simulation and control of equipment.

  15. Multiple targets detection method in detection of UWB through-wall radar

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Yang, Chuanfa; Zhao, Xingwen; Tian, Xianzhong

    2017-11-01

    In this paper, the problems and difficulties encountered in the detection of multiple moving targets by UWB radar are analyzed. The experimental environment and the penetrating radar system are established. An adaptive threshold method based on local areas is proposed to effectively filter out clutter interference. The moving targets are then analyzed, and false targets are further filtered out by extracting target features. Based on the correlation between targets, a target matching algorithm is proposed to improve detection accuracy. Finally, the effectiveness of the above methods is verified by practical experiment.

  16. Shape and Color Features for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Duong, Vu A.; Stubberud, Allen R.

    2012-01-01

    A bio-inspired shape feature of an object of interest emulates the integration of saccadic eye movement and the horizontal layer of the vertebrate retina for object recognition search, in which a single object is used at a time. An optimal computational model for shape-extraction-based principal component analysis (PCA) was also developed to reduce processing time and enable real-time adaptive system capability. A color feature of the object is employed, as color segmentation, to strengthen shape-feature recognition in heterogeneous environments where a single technique, shape or color, may run into difficulties. To enable an effective system, an adaptive architecture and autonomous mechanism were developed to recognize and adapt to the shape and color features of the moving object. Bio-inspired object recognition based on shape and color can be effective for recognizing a person of interest in heterogeneous environments where a single technique has difficulty performing effective recognition. Moreover, this work also demonstrates the mechanism and architecture of an autonomous adaptive system, enabling a realistic system for practical use in the future.

  17. Personal cooling apparatus and method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman-Tov, Moshe; Crabtree, Jerry Allen

    2001-01-01

    A portable lightweight cooling apparatus for cooling a human body is disclosed, having a channeled sheet which absorbs sweat and/or evaporative liquid; a layer of highly conductive fibers adjacent the channeled sheet; and an air-moving device for moving air through the channeled sheet, wherein the layer of fibers redistributes heat uniformly across the object being cooled, while the air moving within the channeled sheet evaporates sweat and/or other evaporative liquid, absorbs evaporated moisture and the uniformly distributed heat generated by the human body, and discharges them into the environment. Also disclosed is a method for removing heat generated by the human body, comprising the steps of providing a garment to be placed in thermal communication with the body; placing a layer of highly conductive fibers within the garment adjacent the body for uniformly distributing the heat generated by the body; attaching an air-moving device in communication with the garment for forcing air into the garment; removably positioning an exchangeable heat sink in communication with the air-moving device for cooling the air prior to the air entering the garment; and equipping the garment with a channeled sheet in communication with the air-moving device so that air can be directed into the channeled sheet and adjacent the layer of fibers to expel heat and moisture from the body, with the air being directed out of the channeled sheet and into the environment. The cooling system may be configured to operate in both sealed and unsealed garments.

  19. Personal cooling apparatus and method

    DOEpatents

    Siman-Tov, Moshe; Crabtree, Jerry Allen

    2001-01-01

    A portable lightweight cooling apparatus for cooling a human body is disclosed, having a channeled sheet which absorbs sweat and/or evaporative liquid; a layer of highly conductive fibers adjacent the channeled sheet; and an air-moving device for moving air through the channeled sheet, wherein the layer of fibers redistributes heat uniformly across the object being cooled, while the air moving within the channeled sheet evaporates sweat and/or other evaporative liquid, absorbs evaporated moisture and the uniformly distributed heat generated by the human body, and discharges them into the environment. Also disclosed is a method for removing heat generated by the human body, comprising the steps of providing a garment to be placed in thermal communication with the body; placing a layer of highly conductive fibers within the garment adjacent the body for uniformly distributing the heat generated by the body; attaching an air-moving device in communication with the garment for forcing air into the garment; removably positioning an exchangeable heat sink in communication with the air-moving device for cooling the air prior to the air entering the garment; and equipping the garment with a channeled sheet in communication with the air-moving device so that air can be directed into the channeled sheet and adjacent the layer of fibers to expel heat and moisture from the body, with the air being directed out of the channeled sheet and into the environment. The cooling system may be configured to operate in both sealed and unsealed garments.

  20. A Neuromorphic Architecture for Object Recognition and Motion Anticipation Using Burst-STDP

    PubMed Central

    Balduzzi, David; Tononi, Giulio

    2012-01-01

    In this work we investigate the possibilities offered by a minimal framework of artificial spiking neurons to be deployed in silico. Here we introduce a hierarchical network architecture of spiking neurons which learns to recognize moving objects in a visual environment and determine the correct motor output for each object. These tasks are learned through both supervised and unsupervised spike timing dependent plasticity (STDP). STDP is responsible for the strengthening (or weakening) of synapses in relation to pre- and post-synaptic spike times and has been described as a Hebbian paradigm taking place both in vitro and in vivo. We utilize a variation of STDP learning, called burst-STDP, which is based on the notion that, since spikes are expensive in terms of energy consumption, then strong bursting activity carries more information than single (sparse) spikes. Furthermore, this learning algorithm takes advantage of homeostatic renormalization, which has been hypothesized to promote memory consolidation during NREM sleep. Using this learning rule, we design a spiking neural network architecture capable of object recognition, motion detection, attention towards important objects, and motor control outputs. We demonstrate the abilities of our design in a simple environment with distractor objects, multiple objects moving concurrently, and in the presence of noise. Most importantly, we show how this neural network is capable of performing these tasks using a simple leaky-integrate-and-fire (LIF) neuron model with binary synapses, making it fully compatible with state-of-the-art digital neuromorphic hardware designs. As such, the building blocks and learning rules presented in this paper appear promising for scalable fully neuromorphic systems to be implemented in hardware chips. PMID:22615855
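The LIF-with-binary-synapses building block mentioned above can be sketched in a few lines (the leak factor, threshold, and input spike trains are illustrative assumptions, not the paper's parameters):

```python
def lif_run(spike_trains, weights, v_th=1.5, leak=0.9):
    """Leaky integrate-and-fire neuron with binary synapses: the
    membrane potential leaks each step, integrates weighted binary
    input spikes, and fires (then resets) on crossing threshold."""
    v, out = 0.0, []
    for inputs in spike_trains:
        v = v * leak + sum(w * s for w, s in zip(weights, inputs))
        if v >= v_th:
            out.append(1)
            v = 0.0          # reset after the spike
        else:
            out.append(0)
    return out

# Two binary synapses; a lone input spike is sub-threshold, but dense
# (bursting) activity pushes the neuron over threshold, consistent with
# the burst-STDP premise that bursts carry more information.
spikes = [(1, 0), (1, 0), (1, 1)]
out = lif_run(spikes, weights=[1.0, 1.0])
```

Because the state is one scalar per neuron and the synapses are binary, this model maps directly onto the digital neuromorphic hardware the authors target.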

  1. "SMALLab": Virtual Geology Studies Using Embodied Learning with Motion, Sound, and Graphics

    ERIC Educational Resources Information Center

    Johnson-Glenberg, Mina C.; Birchfield, David; Usyal, Sibel

    2009-01-01

    We present a new and innovative interface that allows the learner's body to move freely in a multimodal learning environment. The Situated Multimedia Arts Learning Laboratory ("SMALLab") uses 3D object tracking, real time graphics, and surround-sound to enhance embodied learning. Our hypothesis is that optimal learning and retention occur when…

  2. The Importance of Orientation and Mobility Skills for Students Who Are Deaf-Blind. Revised

    ERIC Educational Resources Information Center

    Gense, D. Jay; Gense, Marilyn

    2004-01-01

    Children learn about their environment as they move through it--about people and objects, sizes, shapes, and distances. For typically developing children the senses of sight and hearing provide the greatest motivation for exploration. These children will use their vision and hearing to gather information about their surroundings while growing in…

  3. COMPARISON OF FIELD MEASUREMENTS FROM A CHILDREN'S PESTICIDE STUDY AGAINST PREDICTIONS FROM A PHYSICALLY BASED PROBABILISTIC MODEL FOR ESTIMATING CHILDREN'S RESIDENTIAL EXPOSURE AND DOSE TO CHLORPYRIFOS

    EPA Science Inventory

    Semi-volatile pesticides, such as chlorpyrifos, can move about within a home environment after an application due to physical/chemical processes, resulting in concentration loadings in and on objects and surfaces. Children can be particularly susceptible to the effects of pest...

  4. The Discharging of Roving Objects in the Lunar Polar Regions

    NASA Technical Reports Server (NTRS)

    Jackson, T. L.; Farrell, W. M.; Killen, R. M.; Delory, G. T.; Halekas, J. S.; Stubbs, T. B.

    2012-01-01

    In 2007, the National Academy of Sciences identified the lunar polar regions as special environments: very cold locations where resources can be trapped and accumulated. These accumulated resources not only provide a natural reservoir for human explorers, but their very presence may provide a history of lunar impact events and possibly an indication of ongoing surface reactive chemistry. The recent LCROSS impacts confirm that polar crater floors are rich in material, including approximately 5 wt% water. An integral part of the special lunar polar environment is the solar wind plasma. Solar wind protons and electrons propagate outward from the Sun, and at the Moon's position have a nominal density of 5 electrons per cubic centimeter, a flow speed of 400 km/s, and a temperature of 10 eV (approximately 116,000 K). At the sub-solar point, the flow of this plasma is effectively vertically incident at the surface. However, at the poles and along the lunar terminator region, the flow is effectively horizontal over the surface. As recently described, in these regions local topography has a significant effect on the solar wind flow. Specifically, as the solar wind passes over topographic features like polar mountains and craters, the plasma flow is obstructed and creates a distinct plasma void in the downstream region behind the obstacle. An ion sonic wake structure forms behind the obstacle, not unlike that which forms behind a space shuttle. In the downstream region where flow is obstructed, the faster moving solar wind electrons move into the void region ahead of the more massive ions, thereby creating an ambipolar electric field pointing into the void region. This electric field then deflects ion trajectories into the void region by acting as a vertical inward force that draws ions to the surface. This solar wind 'orographic' effect is somewhat analogous to that occurring with terrestrial mountains.
However, in the solar wind, the ambipolar E-field operating in the collisionless plasma replaces the gradient in pressure that would act in a collisional neutral gas. Human systems (roving astronauts or robotic systems created by humans) may be required to gain access to the crater floor to collect resources such as water and other cold-trapped material. However, these human systems are also exposed to the above-described harsh thermal and electrical environments in the region. Thus, the objective of this work is to determine the nature of charging and discharging for a roving object in the cold, plasma-starved lunar polar regions. To accomplish this objective, we first define the electrical charging environment within polar craters. We then describe the subsequent charging of a moving object near and within such craters. We apply a model of an astronaut moving in periodic steps/cadence over a surface regolith. In fact, the astronaut can be considered an analog for any kind of moving human system. An astronaut stepping over the surface accumulates charge via contact electrification (tribocharging) with the lunar regolith. We present a model of this tribo-charge build-up. Given the environmental plasma in the region, we determine herein the dissipation time for the astronaut to bleed off its excess charge into the surrounding plasma.

  5. Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat

    PubMed Central

    2018-01-01

    Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization to parse objects in the environment and to guide movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects, tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat’s inspection of objects in the physical environment sharpens tuning of single neurons, and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, in combination with an echo model that computes the animal’s instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task. PMID:29633711

  6. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    PubMed

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    PURPOSE: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step-recognition algorithm, and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects and used a common face-detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.

  7. Tracking multiple objects is limited only by object spacing, not by speed, time, or capacity.

    PubMed

    Franconeri, S L; Jonathan, S V; Scimeca, J M

    2010-07-01

    In dealing with a dynamic world, people have the ability to maintain selective attention on a subset of moving objects in the environment. Performance in such multiple-object tracking is limited by three primary factors-the number of objects that one can track, the speed at which one can track them, and how close together they can be. We argue that this last limit, of object spacing, is the root cause of all performance constraints in multiple-object tracking. In two experiments, we found that as long as the distribution of object spacing is held constant, tracking performance is unaffected by large changes in object speed and tracking time. These results suggest that barring object-spacing constraints, people could reliably track an unlimited number of objects as fast as they could track a single object.

  8. Modeling peripheral vision for moving target search and detection.

    PubMed

    Yang, Ji Hyun; Huston, Jesse; Day, Michael; Balogh, Imre

    2012-06-01

    Most target search and detection models focus on foveal vision. In reality, peripheral vision plays a significant role, especially in detecting moving objects. Twenty-three subjects participated in experiments simulating target detection tasks in urban and rural environments while their gaze parameters were tracked. Button responses associated with foveal object and peripheral object (PO) detection and recognition were recorded. In the urban scenario, pedestrians appearing in the periphery holding guns were threats and pedestrians with empty hands were non-threats. In the rural scenario, non-U.S. unmanned aerial vehicles (UAVs) were considered threats and U.S. UAVs non-threats. On average, subjects missed detecting 2.48 of 50 POs in the urban scenario and 5.39 in the rural scenario. Both saccade reaction time and button reaction time can be predicted by the peripheral angle and entrance speed of POs. Fast-moving objects were detected faster than slower objects, and POs appearing at wider angles took longer to detect than those closer to the gaze center. A second-order mixed-effect model was applied to provide each subject's prediction model for peripheral target detection performance as a function of eccentricity angle and speed. About half the subjects used active search patterns while the other half used passive search patterns. An interactive 3-D visualization tool was developed to provide a representation of macro-scale head and gaze movement in the search and target detection task. An experimentally validated stochastic model of peripheral vision in realistic target detection scenarios was developed.

  9. Independent motion detection with a rival penalized adaptive particle filter

    NASA Astrophysics Data System (ADS)

    Becker, Stefan; Hübner, Wolfgang; Arens, Michael

    2014-10-01

    Aggregation of pixel-based motion detection into regions of interest, each containing the view of a single moving object in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. Motion aggregation on freely moving cameras, on the other hand, is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: the ego-motion of the camera, and object motion that is independent of the camera motion. When capturing a scene with a moving camera, these two motion types are inseparably blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used.
In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure an improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
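
    A particle filter of this general kind can be sketched as a bootstrap filter over a difference image, with a simple down-weighting of crowded particles standing in for the paper's competition scheme. Everything here (the 1-D "image", noise levels, penalty form) is an illustrative assumption, not the authors' implementation.

```python
import random

# Minimal bootstrap particle filter over a 1-D "difference image".
# The rival penalty (down-weighting particles that crowd one mode) is an
# illustrative stand-in for the paper's particle-competition scheme.

def particle_filter(frames, n_particles=200, noise=2.0, rival_radius=3, penalty=0.5):
    width = len(frames[0])
    particles = [random.uniform(0, width - 1) for _ in range(n_particles)]
    estimates = []
    for frame in frames:
        # Predict: diffuse particles with random motion noise.
        particles = [min(max(p + random.gauss(0, noise), 0), width - 1)
                     for p in particles]
        # Weight: difference-image intensity acts as the likelihood
        # of independent motion at each particle's position.
        weights = [frame[int(p)] for p in particles]
        # Rival penalty: particles packed into the same neighborhood
        # are slightly down-weighted to preserve multi-modality.
        for i, p in enumerate(particles):
            rivals = sum(1 for q in particles if abs(p - q) < rival_radius) - 1
            weights[i] *= penalty ** (rivals / n_particles)
        total = sum(weights) or 1.0
        # Resample proportionally to weight.
        particles = random.choices(particles,
                                   weights=[w / total for w in weights],
                                   k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Usage: a stationary bright blob around x = 30 in the difference image.
random.seed(0)
frames = [[1.0 if 28 <= x <= 32 else 0.01 for x in range(64)] for _ in range(20)]
print(round(particle_filter(frames)[-1], 1))  # mean settles near the blob
```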

  10. Neural substrates of dynamic object occlusion.

    PubMed

    Shuwairi, Sarah M; Curtis, Clayton E; Johnson, Scott P

    2007-08-01

    In everyday environments, objects frequently go out of sight as they move and our view of them becomes obstructed by nearer objects, yet we perceive these objects as continuous and enduring entities. Here, we used functional magnetic resonance imaging with an attentive tracking paradigm to clarify the nature of perceptual and cognitive mechanisms subserving this ability to fill in the gaps in perception of dynamic object occlusion. Imaging data revealed distinct regions of cortex showing increased activity during periods of occlusion relative to full visibility. These regions may support active maintenance of a representation of the target's spatiotemporal properties ensuring that the object is perceived as a persisting entity when occluded. Our findings may shed light on the neural substrates involved in object tracking that give rise to the phenomenon of object permanence.

  11. Altering User Movement Behaviour in Virtual Environments.

    PubMed

    Simeone, Adalberto L; Mavridou, Ifigeneia; Powell, Wendy

    2017-04-01

    In immersive Virtual Reality systems, users tend to move in a Virtual Environment as they would in an analogous physical environment. In this work, we investigated how user behaviour is affected when the Virtual Environment differs from the physical space. We created two sets of four environments each, plus a virtual replica of the physical environment as a baseline. The first focused on aesthetic discrepancies, such as a water surface in place of solid ground. The second focused on mixing immaterial objects together with those paired to tangible objects, for example, barring an area with walls or obstacles. We designed a study where participants had to reach three waypoints laid out in such a way as to prompt a decision on which path to follow, based on the conflict between the mismatching visual stimuli and their awareness of the real layout of the room. We analysed their performance to determine whether their trajectories deviated significantly from the shortest route. Our results indicate that participants altered their trajectories in the presence of surfaces representing higher walking difficulty (for example, water instead of grass). However, when the graphical appearance was found to be ambiguous, there was no significant trajectory alteration. The environments mixing immaterial with physical objects had the most impact on trajectories, with a mean deviation from the shortest route of 60 cm against the 37 cm of environments with aesthetic alterations. The co-existence of paired and unpaired virtual objects was reported to support the idea that all objects participants saw were backed by physical props. From these results and our observations, we derive guidelines on how to alter user movement behaviour in Virtual Environments.

  12. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms for video surveillance in VIoT, using quantitative measures based on salient features of the datasets. The thresholding algorithms Otsu, Kapur, and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used included OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium- to fast-moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing, and scarce illumination changes.
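
    Histogram-thresholding change detection of the kind compared above can be sketched with Otsu's method applied to an absolute-difference image. This is a generic textbook formulation (8-bit grayscale assumed), not the paper's MATLAB implementation.

```python
# Otsu thresholding applied to an absolute-difference image for change
# detection between two frames (generic sketch; images are flat pixel lists).

def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(levels):
        w_bg += hist[t]                 # background pixel count up to t
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:      # keep threshold with max separation
            best_var, best_t = var_between, t
    return best_t

def change_mask(frame_a, frame_b):
    """Binary change mask: 1 where the difference exceeds Otsu's threshold."""
    diff = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    t = otsu_threshold(diff)
    return [1 if d > t else 0 for d in diff]

# Usage: a bright object appears in the last 5 pixels of the second frame.
a = [10] * 20
b = [10] * 15 + [200] * 5
print(change_mask(a, b))  # 1s mark the changed region
```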

  13. CART V: recent advancements in computer-aided camouflage assessment

    NASA Astrophysics Data System (ADS)

    Müller, Thomas; Müller, Markus

    2011-05-01

    In order to facilitate systematic, computer-aided improvements of camouflage and concealment assessment methods, the software system CART (Camouflage Assessment in Real-Time) was built for the camouflage assessment of objects in multispectral image sequences (see contributions to SPIE 2007-2010 [1], [2], [3], [4]). It comprises semi-automatic marking of target objects (ground truth generation), including their propagation over the image sequence, evaluation via user-defined feature extractors, and methods to assess the object's movement conspicuity. In this fifth part of an annual series at the SPIE conference in Orlando, this paper presents the enhancements of the past year and addresses the camouflage assessment of static and moving objects in multispectral image data that can show noise or image artefacts. The presented methods explore the correlations between image processing and camouflage assessment. A novel algorithm is presented, based on template matching, to assess the structural inconspicuity of an object objectively and quantitatively. The results can easily be combined with an MTI (moving target indication) based movement conspicuity assessment function in order to explore the influence of object movement on the camouflage effect in different environments. As the results show, the presented methods provide a significant benefit in the field of camouflage assessment.

  14. Anticipating the effects of gravity when intercepting moving objects: differentiating up and down based on nonvisual cues.

    PubMed

    Senot, Patrice; Zago, Myrka; Lacquaniti, Francesco; McIntyre, Joseph

    2005-12-01

    Intercepting an object requires a precise estimate of its time of arrival at the interception point (time to contact or "TTC"). It has been proposed that knowledge about gravitational acceleration can be combined with first-order, visual-field information to provide a better estimate of TTC when catching falling objects. In this experiment, we investigated the relative role of visual and nonvisual information on motor-response timing in an interceptive task. Subjects were immersed in a stereoscopic virtual environment and asked to intercept with a virtual racket a ball falling from above or rising from below. The ball moved with different initial velocities and could accelerate, decelerate, or move at a constant speed. Depending on the direction of motion, the acceleration or deceleration of the ball could therefore be congruent or not with the acceleration that would be expected due to the force of gravity acting on the ball. Although the best success rate was observed for balls moving at a constant velocity, we systematically found a cross-effect of ball direction and acceleration on success rate and response timing. Racket motion was triggered on average 25 ms earlier when the ball fell from above than when it rose from below, whatever the ball's true acceleration. As visual-flow information was the same in both cases, this shift indicates an influence of the ball's direction relative to gravity on response timing, consistent with the anticipation of the effects of gravity on the flight of the ball.

  15. Real-time detection of moving objects from moving vehicles using dense stereo and optical flow

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2004-01-01

    Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
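
    The egomotion-compensation idea above can be illustrated schematically: the flow predicted from the robot's own motion is subtracted from the measured flow, and pixels with large residuals are flagged as independently moving. This is a simplified sketch under assumed inputs; the paper's dense 6-DOF estimation from stereo is far more involved.

```python
# Schematic moving-object detection by egomotion compensation.
# Measured optical flow is compared against the flow predicted from the
# camera's own motion; large residuals indicate independent motion.

def detect_moving(measured_flow, ego_flow, threshold=1.0):
    """Return a binary mask of pixels whose flow residual exceeds threshold.

    measured_flow, ego_flow: 2-D grids of (dx, dy) flow vectors.
    """
    mask = []
    for row_m, row_e in zip(measured_flow, ego_flow):
        mask_row = []
        for (mx, my), (ex, ey) in zip(row_m, row_e):
            # Residual = magnitude of (measured - predicted) flow.
            residual = ((mx - ex) ** 2 + (my - ey) ** 2) ** 0.5
            mask_row.append(1 if residual > threshold else 0)
        mask.append(mask_row)
    return mask

# Usage: camera translating right (uniform predicted flow) while one
# image patch moves upward independently.
ego = [[(2.0, 0.0)] * 4 for _ in range(3)]
measured = [row[:] for row in ego]
measured[1][2] = (2.0, 3.0)            # independently moving object
print(detect_moving(measured, ego))    # only that patch is flagged
```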

  18. A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors

    PubMed Central

    Mishra, Abhishek; Ghosh, Rohan; Principe, Jose C.; Thakor, Nitish V.; Kukreja, Sunil L.

    2017-01-01

    Motion segmentation is a critical pre-processing step for autonomous robotic systems to facilitate tracking of moving objects in cluttered environments. Event based sensors are low power analog devices that represent a scene by means of asynchronous information updates of only the dynamic details at high temporal resolution and, hence, require significantly less calculations. However, motion segmentation using spatiotemporal data is a challenging task due to data asynchrony. Prior approaches for object tracking using neuromorphic sensors perform well while the sensor is static or a known model of the object to be followed is available. To address these limitations, in this paper we develop a technique for generalized motion segmentation based on spatial statistics across time frames. First, we create micromotion on the platform to facilitate the separation of static and dynamic elements of a scene, inspired by human saccadic eye movements. Second, we introduce the concept of spike-groups as a methodology to partition spatio-temporal event groups, which facilitates computation of scene statistics and characterize objects in it. Experimental results show that our algorithm is able to classify dynamic objects with a moving camera with maximum accuracy of 92%. PMID:28316563
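
    The spike-group idea can be illustrated with a toy partition of asynchronous events by spatio-temporal proximity. The function name, thresholds, and merging rule here are assumptions for illustration; the paper's statistics over spike-groups and the saccade mechanism are not reproduced.

```python
# Toy partition of asynchronous events (x, y, t) into "spike-groups":
# events close in both space and time are merged into the same group.

def group_events(events, space_eps=2.0, time_eps=0.05):
    """Greedily assign events (processed in time order) to groups."""
    groups = []                                     # each group: list of events
    for ev in sorted(events, key=lambda e: e[2]):
        x, y, t = ev
        placed = False
        for g in groups:
            gx, gy, gt = g[-1]                      # compare to latest member
            close_in_time = abs(t - gt) <= time_eps
            close_in_space = ((x - gx) ** 2 + (y - gy) ** 2) ** 0.5 <= space_eps
            if close_in_time and close_in_space:
                g.append(ev)
                placed = True
                break
        if not placed:
            groups.append([ev])                     # start a new spike-group
    return groups

# Usage: two bursts of events, well separated in space, form two groups.
events = [(10, 10, 0.00), (11, 10, 0.01), (50, 50, 0.015),
          (10, 11, 0.02), (51, 50, 0.02)]
print(len(group_events(events)))  # 2
```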

  19. A traffic priority language for collision-free navigation of autonomous mobile robots in dynamic environments.

    PubMed

    Bourbakis, N G

    1997-01-01

    This paper presents a generic traffic priority language, called KYKLOFORTA, used by autonomous robots for collision-free navigation in a dynamic, known or unknown navigation space. In previous work by X. Grossman (1988), a set of traffic control rules was developed for the navigation of robots on the lines of a two-dimensional (2-D) grid, with a control center coordinating and synchronizing their movements. In this work, the robots are considered autonomous: they may move anywhere and in any direction inside the free space, and there is no need for a central control to coordinate and synchronize them. The requirements for each robot are i) visual perception, ii) range sensors, and iii) the ability to detect other moving objects in the same free navigation space and to determine those objects' perceived size, velocity, and direction. Based on these assumptions, each robot needs a traffic priority language enabling it to make decisions during navigation and avoid possible collisions with other moving objects. The traffic priority language proposed here is based on a primitive traffic-priority alphabet and rules which compose patterns of corridors for the application of the traffic priority rules.

  20. Mobile TV: Customizing Content and Experience

    NASA Astrophysics Data System (ADS)

    Marcus, Aaron; Roibás, Anxo Cereijo; Sala, Riccardo

    This book showcases new mobile TV systems that require customization according to specific users' needs in changing physical environments. These projects and studies, carried out in academia and in industry, promote the awareness of interdisciplinary methods and tools for designing novel solutions. Their objective is to enhance the value of the information they convey while improving the users' enjoyment of it on the move.

  1. The GLSEN Workbook: A Development Model for Assessing, Describing and Improving Schools for Lesbian, Gay, Bisexual and Transgender (LGBT) People.

    ERIC Educational Resources Information Center

    Gay, Lesbian, and Straight Education Network, New York, NY.

    This workbook provides an instrument to objectively analyze a school's current climate with regard to lesbian, gay, bisexual, and transgendered (LGBT) people and the steps needed to move that school toward a more inclusive environment. It provides a detailed assessment survey (to be completed by key school stakeholders), descriptive data, and…

  2. Laboratory Assessment of Commercially Available Ultrasonic Rangefinders

    DTIC Science & Technology

    2015-11-01

    …how the room was designed to prevent sound reflections (a combination of the wedges absorbing the waveforms and not having a flat wall)… When testing… sound booth at 0.5 m… environments for sound measurements using a tape measure. This mapping method can be time-consuming and unreliable as objects frequently move around in…

  3. Subjective evaluation of HEVC in mobile devices

    NASA Astrophysics Data System (ADS)

    Garcia, Ray; Kalva, Hari

    2013-03-01

    Mobile compute environments provide a unique set of user needs and expectations that designers must consider. With increased multimedia use in mobile environments, video encoding methods within the smart phone market segment are key factors that contribute to a positive user experience. Currently available display resolutions and expected cellular bandwidth are major factors the designer must consider when determining which encoding methods should be supported. The desired goal is to maximize the consumer experience, reduce cost, and reduce time to market. This paper presents a comparative evaluation of the quality of user experience when the HEVC and AVC/H.264 video coding standards were used. The goal of the study was to evaluate any improvements in user experience when using HEVC. Subjective comparisons were made between the H.264/AVC and HEVC encoding standards in accordance with the double-stimulus impairment scale (DSIS) as defined by ITU-R BT.500-13. Test environments were based on smart phone LCD resolutions and expected cellular bit rates, such as 200 kbps and 400 kbps. Subjective feedback shows both encoding methods are adequate at a 400 kbps constant bit rate. However, a noticeable consumer experience gap was observed at 200 kbps. Significantly lower H.264 subjective quality was observed for video sequences with multiple moving objects and no single point of visual attraction. Video sequences with single points of visual attraction or few moving objects tended to have higher H.264 subjective quality.

  4. Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.

    PubMed

    Palmer, Stephen E; Langlois, Thomas A

    2017-07-01

    Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.

  5. Towards Gesture-Based Multi-User Interactions in Collaborative Virtual Environments

    NASA Astrophysics Data System (ADS)

    Pretto, N.; Poiesi, F.

    2017-11-01

    We present a virtual reality (VR) setup that enables multiple users to participate in collaborative virtual environments and interact via gestures. A collaborative VR session is established through a network of users that is composed of a server and a set of clients. The server manages the communication amongst clients and is created by one of the users. Each user's VR setup consists of a Head Mounted Display (HMD) for immersive visualisation, a hand tracking system to interact with virtual objects and a single-hand joypad to move in the virtual environment. We use Google Cardboard as a HMD for the VR experience and a Leap Motion for hand tracking, thus making our solution low cost. We evaluate our VR setup though a forensics use case, where real-world objects pertaining to a simulated crime scene are included in a VR environment, acquired using a smartphone-based 3D reconstruction pipeline. Users can interact using virtual gesture-based tools such as pointers and rulers.

  6. Effects of radial direction and eccentricity on acceleration perception.

    PubMed

    Mueller, Alexandra S; Timney, Brian

    2014-01-01

    Radial optic flow can elicit impressions of self-motion--vection--or of objects moving relative to the observer, but there is disagreement as to whether humans have greater sensitivity to expanding or to contracting optic flow. Although most studies agree there is an anisotropy in sensitivity to radial optic flow, it is unclear whether this asymmetry is a function of eccentricity. The issue is further complicated by the fact that few studies have examined how acceleration sensitivity is affected, even though observers and objects in the environment seldom move at a constant speed. To address these issues, we investigated the effects of direction and eccentricity on the ability to detect acceleration in radial optic flow. Our results indicate that observers are better at detecting acceleration when viewing contraction compared with expansion and that eccentricity has no effect on the ability to detect accelerating radial optic flow. Ecological interpretations are discussed.

  7. Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes

    NASA Astrophysics Data System (ADS)

    Denasi, Sandra; Quaglia, Giorgio

    1993-08-01

    Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps people drive more safely. Car detection is one of the topics addressed by the program. Our contribution develops this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.

  8. Grip force control during virtual object interaction: effect of force feedback, accuracy demands, and training.

    PubMed

    Gibo, Tricia L; Bastian, Amy J; Okamura, Allison M

    2014-03-01

    When grasping and manipulating objects, people are able to efficiently modulate their grip force according to the experienced load force. Effective grip force control involves providing enough grip force to prevent the object from slipping, while avoiding excessive force to avoid damage and fatigue. During indirect object manipulation via teleoperation systems or in virtual environments, users often receive limited somatosensory feedback about objects with which they interact. This study examines the effects of force feedback, accuracy demands, and training on grip force control during object interaction in a virtual environment. The task required subjects to grasp and move a virtual object while tracking a target. When force feedback was not provided, subjects failed to couple grip and load force, a capability fundamental to direct object interaction. Subjects also exerted larger grip force without force feedback and when accuracy demands of the tracking task were high. In addition, the presence or absence of force feedback during training affected subsequent performance, even when the feedback condition was switched. Subjects' grip force control remained reminiscent of their employed grip during the initial training. These results motivate the use of force feedback during telemanipulation and highlight the effect of force feedback during training.

  9. Dynamic NMDAR-mediated properties of place cells during the object place memory task.

    PubMed

    Faust, Thomas W; Robbiati, Sergio; Huerta, Tomás S; Huerta, Patricio T

    2013-01-01

    N-methyl-D-aspartate receptors (NMDAR) in the hippocampus participate in encoding and recalling the location of objects in the environment, but the ensemble mechanisms by which NMDARs mediate these processes have not been completely elucidated. To address this issue, we examined the firing patterns of place cells in the dorsal CA1 area of the hippocampus of mice (n = 7) that performed an object place memory (OPM) task, consisting of familiarization (T1), sample (T2), and choice (T3) trials, after systemic injection of 3-[(±)2-carboxypiperazin-4yl]propyl-1-phosphate (CPP), a specific NMDAR antagonist. Place cell properties under CPP (CPP-PCs) were compared to those after control saline injection (SAL-PCs) in the same mice. We analyzed place cells across the OPM task to determine whether they signaled the introduction or movement of objects by NMDAR-mediated changes of their spatial coding. On T2, when two objects were first introduced to a familiar chamber, CPP-PCs and SAL-PCs showed stable, vanishing or moving place fields in addition to changes in spatial information (SI). These metrics were comparable between groups. Remarkably, previously inactive CPP-PCs (with place fields emerging de novo on T2) had significantly weaker SI increases than SAL-PCs. On T3, when one object was moved, CPP-PCs showed reduced center-of-mass (COM) shift of their place fields. Indeed, a subset of SAL-PCs with large COM shifts (>7 cm) was largely absent in the CPP condition. Notably, for SAL-PCs that exhibited COM shifts, those initially close to the moving object followed the trajectory of the object, whereas those far from the object did the opposite. Our results strongly suggest that the SI changes and COM shifts of place fields that occur during the OPM task reflect key dynamic properties that are mediated by NMDARs and might be responsible for binding object identity with location.

  10. Dynamic NMDAR-mediated properties of place cells during the object place memory task

    PubMed Central

    Faust, Thomas W.; Robbiati, Sergio; Huerta, Tomás S.; Huerta, Patricio T.

    2013-01-01

    N-methyl-D-aspartate receptors (NMDAR) in the hippocampus participate in encoding and recalling the location of objects in the environment, but the ensemble mechanisms by which NMDARs mediate these processes have not been completely elucidated. To address this issue, we examined the firing patterns of place cells in the dorsal CA1 area of the hippocampus of mice (n = 7) that performed an object place memory (OPM) task, consisting of familiarization (T1), sample (T2), and choice (T3) trials, after systemic injection of 3-[(±)2-carboxypiperazin-4yl]propyl-1-phosphate (CPP), a specific NMDAR antagonist. Place cell properties under CPP (CPP–PCs) were compared to those after control saline injection (SAL–PCs) in the same mice. We analyzed place cells across the OPM task to determine whether they signaled the introduction or movement of objects by NMDAR-mediated changes of their spatial coding. On T2, when two objects were first introduced to a familiar chamber, CPP–PCs and SAL–PCs showed stable, vanishing or moving place fields in addition to changes in spatial information (SI). These metrics were comparable between groups. Remarkably, previously inactive CPP–PCs (with place fields emerging de novo on T2) had significantly weaker SI increases than SAL–PCs. On T3, when one object was moved, CPP–PCs showed reduced center-of-mass (COM) shift of their place fields. Indeed, a subset of SAL–PCs with large COM shifts (>7 cm) was largely absent in the CPP condition. Notably, for SAL–PCs that exhibited COM shifts, those initially close to the moving object followed the trajectory of the object, whereas those far from the object did the opposite. Our results strongly suggest that the SI changes and COM shifts of place fields that occur during the OPM task reflect key dynamic properties that are mediated by NMDARs and might be responsible for binding object identity with location. PMID:24381547

  11. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis

    PubMed Central

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E.; Tkachenko, Valery; Torcivia-Rodriguez, John; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu PMID:26989153

  12. High-performance integrated virtual environment (HIVE): a robust infrastructure for next-generation sequence data analysis.

    PubMed

    Simonyan, Vahan; Chumakov, Konstantin; Dingerdissen, Hayley; Faison, William; Goldweber, Scott; Golikov, Anton; Gulzar, Naila; Karagiannis, Konstantinos; Vinh Nguyen Lam, Phuc; Maudru, Thomas; Muravitskaja, Olesja; Osipova, Ekaterina; Pan, Yang; Pschenichnov, Alexey; Rostovtsev, Alexandre; Santana-Quintero, Luis; Smith, Krista; Thompson, Elaine E; Tkachenko, Valery; Torcivia-Rodriguez, John; Voskanian, Alin; Wan, Quan; Wang, Jing; Wu, Tsung-Jung; Wilson, Carolyn; Mazumder, Raja

    2016-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a distributed storage and compute environment designed primarily to handle next-generation sequencing (NGS) data. This multicomponent cloud infrastructure provides secure web access for authorized users to deposit, retrieve, annotate and compute on NGS data, and to analyse the outcomes using web interface visual environments appropriately built in collaboration with research and regulatory scientists and other end users. Unlike many massively parallel computing environments, HIVE uses a cloud control server which virtualizes services, not processes. It is both very robust and flexible due to the abstraction layer introduced between computational requests and operating system processes. The novel paradigm of moving computations to the data, instead of moving data to computational nodes, has proven to be significantly less taxing for both hardware and network infrastructure. The honeycomb data model developed for HIVE integrates metadata into an object-oriented model. Its distinction from other object-oriented databases is in the additional implementation of a unified application program interface to search, view and manipulate data of all types. This model simplifies the introduction of new data types, thereby minimizing the need for database restructuring and streamlining the development of new integrated information systems. The honeycomb model employs a highly secure hierarchical access control and permission system, allowing determination of data access privileges in a finely granular manner without flooding the security subsystem with a multiplicity of rules. HIVE infrastructure will allow engineers and scientists to perform NGS analysis in a manner that is both efficient and secure. HIVE is actively supported in public and private domains, and project collaborations are welcomed. Database URL: https://hive.biochemistry.gwu.edu. © The Author(s) 2016. Published by Oxford University Press.

  13. Going, Going, Gone: Localizing Abrupt Offsets of Moving Objects

    ERIC Educational Resources Information Center

    Maus, Gerrit W.; Nijhawan, Romi

    2009-01-01

    When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the…

  14. Random walk of passive tracers among randomly moving obstacles.

    PubMed

    Gori, Matteo; Donato, Irene; Floriani, Elena; Nardecchia, Ilaria; Pettini, Marco

    2016-04-14

    This study is mainly motivated by the need of understanding how the diffusion behavior of a biomolecule (or even of a larger object) is affected by other moving macromolecules, organelles, and so on, inside a living cell, whence the possibility of understanding whether or not a randomly walking biomolecule is also subject to a long-range force field driving it to its target. By means of the Continuous Time Random Walk (CTRW) technique the topic of random walk in random environment is here considered in the case of a passively diffusing particle among randomly moving and interacting obstacles. The relevant physical quantity which is worked out is the diffusion coefficient of the passive tracer which is computed as a function of the average inter-obstacles distance. The results reported here suggest that if a biomolecule, let us call it a test molecule, moves towards its target in the presence of other independently interacting molecules, its motion can be considerably slowed down.
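    The qualitative effect described above, a tracer slowed by mobile obstacles, can be reproduced with a much cruder toy model than the paper's CTRW treatment. In the sketch below (all parameters invented for illustration), tracer steps that land too close to any randomly moving obstacle in a periodic box are rejected, and the diffusion coefficient is estimated from the mean squared displacement:

```python
import random

def tracer_diffusion(n_obstacles, steps=400, runs=10, box=20.0, excl=1.0, seed=7):
    """Toy model of a passive tracer among randomly moving obstacles in a
    periodic box: a tracer step is rejected if it lands within `excl` of any
    obstacle. Returns (diffusion coefficient estimate, step acceptance ratio)."""
    rng = random.Random(seed)

    def too_close(ax, ay, bx, by):
        dx = abs(ax - bx) % box
        dy = abs(ay - by) % box
        dx = min(dx, box - dx)          # minimum-image distance
        dy = min(dy, box - dy)
        return dx * dx + dy * dy < excl * excl

    msd, accepted, attempted = 0.0, 0, 0
    for _ in range(runs):
        x = y = box / 2.0               # wrapped position, for collision tests
        ux = uy = 0.0                   # unwrapped displacement, for the MSD
        obs = [[rng.uniform(0, box), rng.uniform(0, box)]
               for _ in range(n_obstacles)]
        for _ in range(steps):
            for ob in obs:              # obstacles perform their own walks
                ob[0] = (ob[0] + rng.uniform(-0.5, 0.5)) % box
                ob[1] = (ob[1] + rng.uniform(-0.5, 0.5)) % box
            dx, dy = rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)
            nx, ny = (x + dx) % box, (y + dy) % box
            attempted += 1
            if not any(too_close(nx, ny, ox, oy) for ox, oy in obs):
                x, y, ux, uy = nx, ny, ux + dx, uy + dy
                accepted += 1
        msd += ux * ux + uy * uy
    return msd / runs / (4.0 * steps), accepted / attempted   # <r^2> = 4Dt
```

    Sweeping `n_obstacles` (i.e., the average inter-obstacle distance) shows the step acceptance ratio, and with it the effective diffusion coefficient, falling as the environment crowds, the same trend the paper works out analytically.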

  15. Advanced Technology for Portable Personal Visualization.

    DTIC Science & Technology

    1992-06-01

    ...interactive radiosity. Advanced Technology for Portable Personal Visualization, Progress Report January-June 1992. 2.5 Virtual-Environment Ultrasound... the system, with support for textures, model partitioning, more complex radiosity emitters, and the replacement of model parts with objects from our... model libraries. * Add real-time, interactive radiosity to the display program on Pixel-Planes 5. * Move the real-time model mesh-generation to the

  16. An Analysis of Pedagogical Moves for Facilitating the Development of In-Service Middle-School Mathematics Teachers' Recognition of Reasoning

    ERIC Educational Resources Information Center

    Cipriani, Phyllis J.

    2017-01-01

    A constructivist approach for teaching and learning mathematics was the foundation for a longitudinal study at Rutgers University in 1987 (Maher, 2011). One of the objectives of the longitudinal study was to provide an environment where students solve problems in collaborative groups (Maher, 2011). Videos from the longitudinal study are stored in…

  17. A visual horizon affects steering responses during flight in fruit flies.

    PubMed

    Caballero, Jorge; Mazo, Chantell; Rodriguez-Pinto, Ivan; Theobald, Jamie C

    2015-09-01

    To navigate well through three-dimensional environments, animals must in some way gauge the distances to objects and features around them. Humans use a variety of visual cues to do this, but insects, with their small size and rigid eyes, are constrained to a more limited range of possible depth cues. For example, insects attend to relative image motion when they move, but cannot change the optical power of their eyes to estimate distance. On clear days, the horizon is one of the most salient visual features in nature, offering clues about orientation, altitude and, for humans, distance to objects. We set out to determine whether flying fruit flies treat moving features as farther off when they are near the horizon. Tethered flies respond strongly to moving images they perceive as close. We measured the strength of steering responses while independently varying the elevation of moving stimuli and the elevation of a virtual horizon. We found responses to vertical bars are increased by negative elevations of their bases relative to the horizon, closely correlated with the inverse of apparent distance. In other words, a bar that dips far below the horizon elicits a strong response, consistent with using the horizon as a depth cue. Wide-field motion also had an enhanced effect below the horizon, but this was only prevalent when flies were additionally motivated with hunger. These responses may help flies tune behaviors to nearby objects and features when they are too far off for motion parallax. © 2015. Published by The Company of Biologists Ltd.

  18. Detection and Tracking of Moving Objects with Real-Time Onboard Vision System

    NASA Astrophysics Data System (ADS)

    Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.

    2017-05-01

    Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms that can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their positions. The results can be applied to onboard vision systems for aircraft, including small and unmanned aircraft.
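    The first of the three parts, estimating and compensating the geometric transformation between frames before differencing, can be sketched for the pure-translation case using phase correlation; detection is then a threshold on the compensated frame difference. This is an illustrative stand-in under a simplified motion model, not the authors' algorithm:

```python
import numpy as np

def estimate_shift(f1, f2):
    """Global translation t = (dy, dx) such that f2 == np.roll(f1, t),
    recovered by phase correlation (pure-translation model only)."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # whiten: keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

def moving_object_mask(f1, f2, thresh=0.5):
    """Compensate the estimated camera motion, then flag pixels that
    still changed: the moving-object hypothesis."""
    dy, dx = estimate_shift(f1, f2)
    aligned = np.roll(f1, (dy, dx), axis=(0, 1))
    return np.abs(f2 - aligned) > thresh

# toy scene: random background, camera pans by (3, 5), object jumps
rng = np.random.default_rng(0)
bg = rng.random((64, 64))
frame1 = bg.copy()
frame1[10:14, 10:14] += 1.5               # object at its old position
frame2 = np.roll(bg, (3, 5), axis=(0, 1))
frame2[40:44, 40:44] += 1.5               # object at its new position
mask = moving_object_mask(frame1, frame2)
```

    Note the mask also lights up at the object's old position (a "ghost"); a tracker with prediction, the third algorithm part, is what disambiguates the two.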

  19. Reinforcement active learning in the vibrissae system: optimal object localization.

    PubMed

    Gordon, Goren; Dorfman, Nimrod; Ahissar, Ehud

    2013-01-01

    Rats move their whiskers to acquire information about their environment. It has been observed that they palpate novel objects and objects they are required to localize in space. We analyze whisker-based object localization using two complementary paradigms, namely, active learning and intrinsic-reward reinforcement learning. Active learning algorithms select the next training samples according to the hypothesized solution in order to better discriminate between correct and incorrect labels. Intrinsic-reward reinforcement learning uses prediction errors as the reward to an actor-critic design, such that behavior converges to the one that optimizes the learning process. We show that in the context of object localization, the two paradigms result in palpation whisking as their respective optimal solution. These results suggest that rats may employ principles of active learning and/or intrinsic reward in tactile exploration and can guide future research to seek the underlying neuronal mechanisms that implement them. Furthermore, these paradigms are easily transferable to biomimetic whisker-based artificial sensors and can improve the active exploration of their environment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Real-Time Tracking by Double Templates Matching Based on Timed Motion History Image with HSV Feature

    PubMed Central

    Li, Zhiyong; Li, Pengfei; Yu, Xiaoping; Hashem, Mervat

    2014-01-01

    Representing the target appearance model for moving object tracking in complex environments is a challenge. This study presents a novel method whose appearance model is described by double templates based on a timed motion history image with HSV color histogram features (tMHI-HSV). The main components include offline and online template initialization, calculation of tMHI-HSV-based feature histograms for candidate patches, double templates matching (DTM) for object location, and template updating. Firstly, we initialize the target object region and calculate its HSV color histogram feature as the offline template and online template. Secondly, the tMHI-HSV is used to segment the motion region and calculate the candidate object patches' color histograms to represent their appearance models. Finally, we utilize the DTM method to track the target and update the offline and online templates in real time. The experimental results show that the proposed method can efficiently handle scale variation and pose change of rigid and nonrigid objects, even under illumination change and occlusion. PMID:24592185
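    Two of the ingredients above lend themselves to short sketches: the timed-MHI update, which stamps moving pixels with the current time and forgets stale ones, and histogram matching between a template and candidate patches. The Bhattacharyya coefficient is used here as a common similarity choice; the paper's exact matching score and the double-template logic are simplified to a single template:

```python
import numpy as np

def update_tmhi(tmhi, motion_mask, timestamp, duration):
    """Timed Motion History Image update: moving pixels get the current
    timestamp, pixels older than `duration` are cleared."""
    tmhi = tmhi.copy()
    tmhi[motion_mask] = timestamp
    tmhi[tmhi < timestamp - duration] = 0.0
    return tmhi

def bhattacharyya(h1, h2):
    """Similarity of two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def best_match(template_hist, candidate_hists):
    """Pick the candidate patch whose (HSV) histogram is most similar
    to the template histogram."""
    scores = [bhattacharyya(template_hist, h) for h in candidate_hists]
    return int(np.argmax(scores)), scores

# toy example with 8-bin hue histograms (values invented)
t = np.array([0.5, 0.3, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0])
cands = [np.array([0.0, 0.0, 0.0, 0.0, 0.25, 0.25, 0.25, 0.25]),
         np.array([0.45, 0.35, 0.1, 0.1, 0.0, 0.0, 0.0, 0.0])]
idx, scores = best_match(t, cands)   # second candidate matches the template
```
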

  1. A-Track: Detecting Moving Objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2017-04-01

    A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
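    The underlying idea, that an asteroid shows up as detections lining up across sequential frames while stars stay put, can be sketched as a brute-force constant-velocity test over one detection per frame. This is a simplification of A-Track's modified line detection, and the coordinates are invented:

```python
import itertools
import math

def is_linear_track(p1, p2, p3, tol=0.5):
    """True if three (t, x, y) detections are consistent with constant
    velocity: the middle detection lies where the outer two predict it."""
    (t1, x1, y1), (t2, x2, y2), (t3, x3, y3) = p1, p2, p3
    f = (t2 - t1) / (t3 - t1)                 # interpolation fraction in time
    xp, yp = x1 + f * (x3 - x1), y1 + f * (y3 - y1)
    return abs(x2 - xp) <= tol and abs(y2 - yp) <= tol

def find_moving_objects(frames, tol=0.5, min_move=1.0):
    """Brute force over one detection per frame; keep constant-velocity
    triples that actually move (stationary stars are filtered out)."""
    hits = []
    for trio in itertools.product(*frames):
        (_, x1, y1), _, (_, x3, y3) = trio
        if (math.hypot(x3 - x1, y3 - y1) >= min_move
                and is_linear_track(*trio, tol=tol)):
            hits.append(trio)
    return hits

# three frames at t = 0, 1, 2: one star stays put, one asteroid drifts
frames = [
    [(0, 10.0, 10.0), (0, 30.0, 5.0)],
    [(1, 12.1, 11.0), (1, 30.0, 5.0)],
    [(2, 14.0, 12.1), (2, 30.0, 5.0)],
]
tracks = find_moving_objects(frames)
```

    A production pipeline works on source catalogs extracted from the FITS frames and prunes the combinatorics, but the collinearity-in-time test is the core of line-based moving-object detection.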

  2. Selectivity to Translational Egomotion in Human Brain Motion Areas

    PubMed Central

    Pitzalis, Sabrina; Sdoia, Stefano; Bultrini, Alessandro; Committeri, Giorgia; Di Russo, Francesco; Fattori, Patrizia; Galletti, Claudio; Galati, Gaspare

    2013-01-01

    The optic flow generated when a person moves through the environment can be locally decomposed into several basic components, including radial, circular, translational and spiral motion. Since their analysis plays an important part in the visual perception and control of locomotion and posture it is likely that some brain regions in the primate dorsal visual pathway are specialized to distinguish among them. The aim of this study is to explore the sensitivity to different types of egomotion-compatible visual stimulations in the human motion-sensitive regions of the brain. Event-related fMRI experiments, 3D motion and wide-field stimulation, functional localizers and brain mapping methods were used to study the sensitivity of six distinct motion areas (V6, MT, MST+, V3A, CSv and an Intra-Parietal Sulcus motion [IPSmot] region) to different types of optic flow stimuli. Results show that only areas V6, MST+ and IPSmot are specialized in distinguishing among the various types of flow patterns, with a high response for the translational flow which was maximum in V6 and IPSmot and less marked in MST+. Given that during egomotion the translational optic flow conveys differential information about the near and far external objects, areas V6 and IPSmot likely process visual egomotion signals to extract information about the relative distance of objects with respect to the observer. Since area V6 is also involved in distinguishing object-motion from self-motion, it could provide information about location in space of moving and static objects during self-motion, particularly in a dynamically unstable environment. PMID:23577096

  3. Moving object detection using dynamic motion modelling from UAV aerial images.

    PubMed

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2014-01-01

    Motion-analysis-based moving object detection from UAV aerial images remains an unsolved issue because proper motion estimation has not been taken into account. Existing approaches do not use motion-based pixel intensity measurement to detect moving objects robustly, and current research mostly depends on either a frame-difference or a segmentation approach in isolation. This research has two main purposes: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) with frame differencing embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity, so that SUED segments only the specific area of a moving object rather than searching the whole frame. At each stage of the proposed scheme, experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results demonstrate the validity of the proposed methodology.
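    The SUED idea of merging fragmented frame-difference responses by dilation before forming an object hypothesis can be sketched as follows. This is a simplified stand-in for the paper's edge-based method; note that `np.roll` wraps at image borders, which is acceptable for this interior toy case:

```python
import numpy as np

def binary_dilate(mask, iters=1):
    """3x3 binary dilation implemented as shifted ORs (no SciPy needed)."""
    m = mask.copy()
    for _ in range(iters):
        grown = m.copy()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                grown |= np.roll(np.roll(m, dy, 0), dx, 1)
        m = grown
    return m

def segment_moving_object(prev, curr, thresh=0.5, iters=2):
    """Frame difference -> dilation (fragmented responses merge into one
    blob) -> bounding box of the blob, i.e. the object hypothesis."""
    diff = np.abs(curr - prev) > thresh
    blob = binary_dilate(diff, iters)
    ys, xs = np.nonzero(blob)
    if len(ys) == 0:
        return None
    return ys.min(), ys.max(), xs.min(), xs.max()

# toy frames: the moving object's edges respond only at scattered pixels
prev = np.zeros((32, 32))
curr = np.zeros((32, 32))
for y, x in [(10, 10), (10, 14), (14, 10), (14, 14)]:  # fragmented edges
    curr[y, x] = 1.0
box = segment_moving_object(prev, curr)   # one box covering all fragments
```

    Restricting this computation to a DMM-style search window around the highest-intensity response, rather than the whole frame, is what makes the combined scheme cheap.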

  4. Social referencing in dog-owner dyads?

    PubMed

    Merola, I; Prato-Previde, E; Marshall-Pescini, S

    2012-03-01

    Social referencing is the seeking of information from another individual to form one's own understanding and guide action. In this study, adult dogs were tested in a social referencing paradigm involving their owner and a potentially scary object. Dogs received either a positive or negative message from the owner. The aim was to evaluate the presence of referential looking to the owner, behavioural regulation based on the owner's (vocal and facial) emotional message and observational conditioning following the owner's actions towards the object. Most dogs (83%) looked referentially to the owner after looking at the strange object, thus appearing to seek information about the environment from the human, but few differences were found between dogs in the positive and negative groups as regards behavioural regulation: possible explanations for this are discussed. Finally, a strong effect of observational conditioning was found, with dogs in the positive group moving closer to the fan and dogs in the negative group moving away, both mirroring their owner's behaviour. Results are discussed in relation to studies on human-dog communication, attachment and social learning.

  5. Transportable Applications Environment Plus, Version 5.1

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Transportable Applications Environment Plus (TAE+) computer program providing integrated, portable programming environment for developing and running application programs based on interactive windows, text, and graphical objects. Enables both programmers and nonprogrammers to construct own custom application interfaces easily and to move interfaces and application programs to different computers. Used to define corporate user interface, with noticeable improvements in application developer's and end user's learning curves. Main components are: WorkBench, What You See Is What You Get (WYSIWYG) software tool for design and layout of user interface; and WPT (Window Programming Tools) Package, set of callable subroutines controlling user interface of application program. WorkBench and WPTs written in C++, and remaining code written in C.

  6. Persistence-Driven Durotaxis: Generic, Directed Motility in Rigidity Gradients

    NASA Astrophysics Data System (ADS)

    Novikova, Elizaveta A.; Raab, Matthew; Discher, Dennis E.; Storm, Cornelis

    2017-02-01

    Cells move differently on substrates with different rigidities: the persistence time of their motion is higher on stiffer substrates. We show that this behavior—in and of itself—results in a net flux of cells directed up a soft-to-stiff gradient. Using simple random walk models with varying persistence and stochastic simulations, we characterize the propensity to move in terms of the durotactic index also measured in experiments. A one-dimensional model captures the essential features and highlights the competition between diffusive spreading and linear, wavelike propagation. Persistence-driven durokinesis is generic and may be of use in the design of instructive environments for cells and other motile, mechanosensitive objects.
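    The one-dimensional picture is easy to reproduce numerically: give a persistent random walker a higher probability of keeping its direction on the stiff side of x = 0, and the finite-time mean displacement drifts toward the stiff region. All parameters below are invented for illustration:

```python
import random

def durotaxis_drift(walkers=2000, steps=200, p_soft=0.5, p_stiff=0.9, seed=3):
    """Persistent 1D random walkers starting at the soft/stiff interface
    (x = 0). A walker keeps its previous direction with probability p(x):
    low on the soft side (x < 0), high on the stiff side (x >= 0). Returns
    the mean final position; positive means net soft-to-stiff flux."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walkers):
        x, direction = 0, rng.choice((-1, 1))
        for _ in range(steps):
            p = p_stiff if x >= 0 else p_soft
            if rng.random() > p:            # tumble: reverse direction
                direction = -direction
            x += direction
        total += x
    return total / walkers

drift = durotaxis_drift()   # positive: net motion toward the stiff side
```

    With p_soft = 0.5 the soft side is purely diffusive while the stiff side is persistent (wavelike over the correlation time), so walkers spread much farther into the stiff region at finite times, which is exactly the diffusive-versus-ballistic competition the one-dimensional model highlights.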

  7. Planning safer suburbs: do changes in the built environment influence residents' perceptions of crime risk?

    PubMed

    Foster, Sarah; Wood, Lisa; Christian, Hayley; Knuiman, Matthew; Giles-Corti, Billie

    2013-11-01

    A growing body of evidence has reiterated the negative impacts that crime and perceptions of insecurity can have on the health and wellbeing of local residents. Strategies that reduce residents' perceived crime risk may contribute to improved health outcomes; however interventions require a better understanding of the neighbourhood influences on residents' perceptions of crime and safety. We examined the impact of changes in the objective built environment following relocation on changes in residents' perceived crime risk for participants in a longitudinal study of people moving to new neighbourhoods in Perth, Western Australia (n = 1159). They completed a questionnaire before moving to their new neighbourhood, and again 36 months after relocation. Individual-level objective environmental measures were generated at both time points using Geographic Information Systems, focussing on the characteristics that comprise a 'walkable neighbourhood'. Linear regression models examined the influence of objective environmental changes between the two environments on perceived crime risk, with progressive adjustment for other change variables (i.e., perceptions of the physical and social environment, reported crime). We found that increases in the proportion of land allocated to shopping/retail land-uses increased residents' perceived crime risk (β = 11.875, p = 0.001), and this relationship remained constant, despite controlling for other influences on perceived crime risk (β = 9.140, p = 0.004). The findings highlight an important paradox: that the neighbourhood characteristics known to enhance one outcome, such as walking, may negatively impact another. In this instance, the 'strangers' that retail destinations attract to a neighbourhood may be interpreted by locals as a threat to safety. Thus, in areas with more retail destinations, it is vital that other environmental strategies be employed to balance any negative effects that retail may have on residents' perceptions of crime risk (e.g., minimising incivilities, improved lighting and aesthetics). Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. An optimal control strategy for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots in environments containing moving obstacles is presented. Collision avoidance is guaranteed if the minimum distance between the robot and the objects is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Furthermore, time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. Simulation results verify the value of the proposed strategy.
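
    The core idea of the abstract above, keeping the nominal path fixed and re-timing the motion along it so that collisions with a moving obstacle are avoided, can be sketched as follows. This is an illustrative greedy reconstruction in Python, not the authors' optimal control solution; `plan_speeds`, its parameters, and the safety margin are assumptions made for the example.

```python
def plan_speeds(path, obstacle_at, v_max=1.0, v_min=0.1, margin=0.5):
    """Traverse `path` (a list of (x, y) waypoints), slowing down whenever
    arriving at the next waypoint would bring the robot too close to the
    moving obstacle. `obstacle_at(t)` gives the obstacle position at time t.
    Returns the arrival time at each waypoint."""
    t = 0.0
    times = [t]
    for i in range(1, len(path)):
        x0, y0 = path[i - 1]
        x1, y1 = path[i]
        seg = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        v = v_max
        # Halve the speed until the waypoint is reached at a safe distance
        # from the obstacle, or until a minimum crawl speed is hit.
        while v > v_min:
            ox, oy = obstacle_at(t + seg / v)
            if ((x1 - ox) ** 2 + (y1 - oy) ** 2) ** 0.5 >= margin:
                break
            v *= 0.5
        t += seg / v
        times.append(t)
    return times
```

Slowing near the obstacle shows up as a longer gap between arrival times while the geometric path itself is unchanged, mirroring the separation between off-line path planning and on-line velocity planning described in the abstract.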

  9. Location detection and tracking of moving targets by a 2D IR-UWB radar system.

    PubMed

    Nguyen, Van-Han; Pyun, Jae-Young

    2015-03-19

    In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, the use of ultra-wide band (UWB) technology has become a possible solution for object detection, localization and tracking in indoor environments, because of its high range resolution, compact size and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps, such as clutter reduction, target detection, target localization and tracking. In this paper, we introduce a new combination consisting of our proposed signal-processing procedures. In the clutter-reduction step, a filtering method that uses a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which is used to estimate the impulse response from the observation region, is applied for the advanced elimination of false alarms. Then, the output is fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared to conventional methods to demonstrate the differences in performance. The experiments are carried out using actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
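
    The clutter-reduction step described above can be illustrated with a scalar Kalman filter per range bin that tracks the quasi-static clutter across frames and subtracts it, leaving mostly the moving target's echo. The filter parameters (`q`, `r`) and the frame format below are assumptions for this sketch, not the paper's values.

```python
def reduce_clutter(frames, q=1e-4, r=1e-1):
    """frames: list of radar frames, each a list of range-bin amplitudes.
    A scalar Kalman filter per bin estimates the (slowly varying) static
    clutter; the returned frames contain the clutter-subtracted residuals."""
    n_bins = len(frames[0])
    clutter = list(frames[0])          # initial clutter estimate
    p = [1.0] * n_bins                 # estimate variance per bin
    out = []
    for frame in frames:
        residual = []
        for i, z in enumerate(frame):
            p[i] += q                  # predict: clutter assumed ~static
            k = p[i] / (p[i] + r)      # Kalman gain
            clutter[i] += k * (z - clutter[i])
            p[i] *= (1 - k)
            residual.append(z - clutter[i])
        out.append(residual)
    return out
```

A target echo that appears only briefly in one bin survives the subtraction almost intact, while the stationary background is suppressed; the residual frames would then feed a detection step such as the modified CLEAN algorithm the abstract mentions.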

  10. USGS Science Data Life Cycle Tools - Lessons Learned in moving to the Cloud

    NASA Astrophysics Data System (ADS)

    Frame, M. T.; Mancuso, T.; Hutchison, V.; Zolly, L.; Wheeler, B.; Urbanowski, S.; Devarakonda, R.; Palanisamy, G.

    2016-12-01

    The U.S. Geological Survey (USGS) Core Science Systems has been working for the past year to design, re-architect, and implement several key tools and systems within the USGS Cloud Hosting Service supported by Amazon Web Services (AWS). As a result of emerging USGS data management policies that align with federal Open Data mandates, and as part of a concerted effort to respond to potential increasing user demand due to these policies, the USGS strategically began migrating its core data management tools and services to the AWS environment in hopes of leveraging cloud capabilities (i.e., auto-scaling, replication, etc.). The specific tools included: the USGS Online Metadata Editor (OME); the USGS Digital Object Identifier (DOI) generation tool; the USGS Science Data Catalog (SDC); the USGS ScienceBase system; and an integrative tool, the USGS Data Release Workbench, which steps bureau personnel through the process of releasing data. All of these tools existed long before the Cloud was available and presented significant challenges in migrating, re-architecting, securing, and moving to a Cloud-based environment. Initially, a 'lift and shift' approach, essentially moving as is, was attempted; various lessons learned about that approach will be discussed, along with recommendations that resulted from the development and eventual operational implementation of these tools. The session will discuss lessons learned related to management of these tools in an AWS environment; re-architecture strategies utilized for the tools; time investments through sprint allocations; initial benefits observed from operating within a Cloud-based environment; and initial costs to support these data management tools.

  11. Impact of Advanced Avionics Technology on Ground Attack Weapon Systems.

    DTIC Science & Technology

    1982-02-01

    as the relevant feature. 3.0 Problem The task is to perform the automatic cueing of moving objects in a natural environment. Additional problems...views on this subject to the American Defense Preparedness Association (ADPA) on 11 February 1981 in Orlando, Florida. ENVIRONMENTAL CONDITIONS OUR...the operating window or the environmental conditions of combat that our forces may encounter worldwide. The three areas selected were Europe, the

  12. Objects in Motion

    ERIC Educational Resources Information Center

    Damonte, Kathleen

    2004-01-01

    One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…

  13. Non-GPS full position and angular orientation onboard sensors for moving and stationary platforms

    NASA Astrophysics Data System (ADS)

    Dhadwal, Harbans S.; Rastegar, Jahangir; Feng, Dake; Kwok, Philip; Pereira, Carlos M.

    2016-05-01

    Angular orientation of both mobile and stationary objects continues to be an ongoing topic of interest for guidance and control, as well as for non-GPS based solutions for geolocation of assets in any environment. Currently available sensors, which include inertial devices such as accelerometers and gyros; magnetometers; surface-mounted antennas; radars; GPS; and optical line-of-sight devices, do not provide an acceptable solution for many applications, particularly for gun-fired munitions and for all-weather and all-environment scenarios. A robust onboard full angular orientation sensor solution, based on a scanning polarized reference source and a polarized geometrical cavity orientation sensor, is presented. The full position of the object, in the reference source coordinate system, is determined by combining range data obtained using established time-of-flight techniques with the angular orientation information.

  14. Rover wheel charging on the lunar surface

    NASA Astrophysics Data System (ADS)

    Jackson, Telana L.; Farrell, William M.; Zimmerman, Michael I.

    2015-03-01

    The environment at the Moon is dynamic, with highly variable solar wind plasma conditions at the lunar dayside, terminator, and night side regions. Moving objects such as rover wheels will charge due to contact electrification with the surface, but the degree of charging is controlled by the local plasma environment. Using a dynamic charging model of a wheel, it is demonstrated herein that moving tires will tribocharge substantially when venturing into plasma-current starved regions such as polar craters or the lunar nightside. The surface regolith distribution and the overall effect on charge accumulation of grains cohesively sticking to the rover tire have been incorporated into the model. It is shown that dust sticking can limit the overall charge accumulated on the system. However, charge dissipation times are greatly increased in shadowed regions and can present a potential hazard to astronauts and electrical systems performing extra-vehicular activities. We show that dissipation times change with wheel composition and that overall system tribocharging is dependent upon wheel velocity.
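
    The competition between tribocharging and plasma dissipation described above can be caricatured with a single charge-balance equation, dQ/dt = k·v − Q/τ, integrated forward in time: charging scales with wheel speed, and the dissipation time constant τ grows in shadowed, plasma-starved regions. The coefficients and speeds below are made-up illustrative numbers, not values from the paper's model.

```python
def wheel_charge(v_wheel, tau, t_end=100.0, dt=0.01, k_tribo=1.0):
    """Euler-integrate dQ/dt = k_tribo * v_wheel - Q / tau and return the
    charge accumulated on the wheel after t_end (arbitrary units)."""
    q = 0.0
    t = 0.0
    while t < t_end:
        q += (k_tribo * v_wheel - q / tau) * dt
        t += dt
    return q
```

With these toy numbers the steady-state charge is k·v·τ, so the same wheel at the same speed carries far more charge in a long-dissipation-time shadowed crater than on the sunlit dayside, and faster wheels charge more, both trends the abstract reports.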

  15. Competitive Dynamics in MSTd: A Mechanism for Robust Heading Perception Based on Optic Flow

    PubMed Central

    Layton, Oliver W.; Fajen, Brett R.

    2016-01-01

    Human heading perception based on optic flow is not only accurate, it is also remarkably robust and stable. These qualities are especially apparent when observers move through environments containing other moving objects, which introduce optic flow that is inconsistent with observer self-motion and therefore uninformative about heading direction. Moving objects may also occupy large portions of the visual field and occlude regions of the background optic flow that are most informative about heading perception. The fact that heading perception is biased by no more than a few degrees under such conditions attests to the robustness of the visual system and warrants further investigation. The aim of the present study was to investigate whether recurrent, competitive dynamics among MSTd neurons that serve to reduce uncertainty about heading over time offer a plausible mechanism for capturing the robustness of human heading perception. Simulations of existing heading models that do not contain competitive dynamics yield heading estimates that are far more erratic and unstable than human judgments. We present a dynamical model of primate visual areas V1, MT, and MSTd based on that of Layton, Mingolla, and Browning that is similar to the other models, except that the model includes recurrent interactions among model MSTd neurons. Competitive dynamics stabilize the model’s heading estimate over time, even when a moving object crosses the future path. Soft winner-take-all dynamics enhance units that code a heading direction consistent with the time history and suppress responses to transient changes to the optic flow field. Our findings support recurrent competitive temporal dynamics as a crucial mechanism underlying the robustness and stability of perception of heading. PMID:27341686
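
    The soft winner-take-all dynamics invoked above can be illustrated with a toy recurrent network: units coding candidate headings self-excite and mutually inhibit, so the winning unit integrates evidence over time and resists a transient outlier (such as the flow perturbation from a moving object crossing the path). All parameters here are illustrative and are not those of the MSTd model in the paper.

```python
def soft_wta(evidence, n_units, dt=0.5, self_exc=0.5, inhib=0.2):
    """Run a soft winner-take-all recurrence over a sequence of feedforward
    inputs (one list of n_units values per time step). Each unit leaks,
    self-excites, and is inhibited by the other units' summed activity.
    Returns the final activity vector (rectified at zero)."""
    act = [0.0] * n_units
    for inputs in evidence:
        total = sum(act)
        act = [max(0.0, a + dt * (-a + self_exc * a + x - inhib * (total - a)))
               for a, x in zip(act, inputs)]
    return act
```

Driving the network with steady evidence for one heading, briefly interrupted by strong evidence for another, leaves the original unit as the final winner: the recurrent competition smooths over the transient, which is the stabilizing behavior the abstract attributes to model MSTd.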

  16. A Functional Magnetic Resonance Imaging Assessment of Small Animals’ Phobia Using Virtual Reality as a Stimulus

    PubMed Central

    Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar

    2014-01-01

    Background To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small animals’ phobia. Objective The objective of our study was to evaluate the brain activations associated with small animals’ phobia through the use of virtual environments. This context will have the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. Methods We have analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. Results We have found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), which is an area that has been previously related to the feeling of self-awareness. Conclusions In our opinion, these results demonstrate that virtual stimulus can enhance brain activations consistent with previous studies with still images, but in an environment closer to the real situation the subject would face in their daily lives. PMID:25654753

  17. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  18. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury

    PubMed Central

    2017-01-01

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in a normal way, as demonstrated by comparison with their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our data analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older. PMID:28630809

  19. Technical Report of the Use of a Novel Eye Tracking System to Measure Impairment Associated with Mild Traumatic Brain Injury.

    PubMed

    Kelly, Michael

    2017-05-15

    This technical report details the results of an uncontrolled study of EyeGuide Focus, a 10-second concussion management tool which relies on eye tracking to determine potential impairment of visual attention, often an indicator of mild traumatic brain injury (mTBI). Essentially, people who can keep steady and accurate visual attention on a moving object in their environment likely suffer from no impairment. However, if after a potential mTBI event subjects cannot keep attention on a moving object in a normal way, as demonstrated by comparison with their previous healthy baseline tests, this may indicate possible neurological impairment. Now deployed at multiple locations across the United States, Focus (EyeGuide, Lubbock, Texas, United States) has to date recorded more than 4,000 test scores. Our data analysis of these results shows the promise of Focus as a low-cost, ocular-based impairment test for assessing potential neurological impairment caused by mTBI in subjects ages eight and older.

  20. Neural and Behavioral Evidence for an Online Resetting Process in Visual Working Memory.

    PubMed

    Balaban, Halely; Luria, Roy

    2017-02-01

    Visual working memory (VWM) guides behavior by holding a set of active representations and modifying them according to changes in the environment. This updating process relies on a unique mapping between each VWM representation and an actual object in the environment. Here, we destroyed this mapping by either presenting a coherent object but then breaking it into independent parts or presenting an object but then abruptly replacing it with a different object. This allowed us to introduce the neural marker and behavioral consequence of an online resetting process in humans' VWM. Across seven experiments, we demonstrate that this resetting process involves abandoning the old VWM contents because they no longer correspond to the objects in the environment. Then, VWM encodes the novel information and reestablishes the correspondence between the new representations and the objects. The resetting process was marked by a unique neural signature: a sharp drop in the amplitude of the electrophysiological index of VWM contents (the contralateral delay activity), presumably indicating the loss of the existent object-to-representation mappings. This marker was missing when an updating process occurred. Moreover, when tracking moving items, VWM failed to detect salient changes in the object's shape when these changes occurred during the resetting process. This happened despite the object being fully visible, presumably because the mapping between the object and a VWM representation was lost. Importantly, we show that resetting, its neural marker, and the behavioral cost it entails, are specific to situations that involve a destruction of the objects-to-representations correspondence. Visual working memory (VWM) maintains task-relevant information in an online state. Previous studies showed that VWM representations are accessed and modified after changes in the environment. 
Here, we show that this updating process critically depends on an ongoing mapping between the representations and the objects in the environment. When this mapping breaks, VWM cannot access the old representations and instead resets. The novel resetting process that we introduce removes the existing representations instead of modifying them and this process is accompanied by a unique neural marker. During the resetting process, VWM was blind to salient changes in the object's shape. The resetting process highlights the flexibility of our cognitive system in handling the dynamic environment by abruptly abandoning irrelevant schemas. Copyright © 2017 the authors 0270-6474/17/371225-15$15.00/0.

  1. Describing a Robot's Workspace Using a Sequence of Views from a Moving Camera.

    PubMed

    Hong, T H; Shneier, M O

    1985-06-01

    This correspondence describes a method of building and maintaining a spatial representation for the workspace of a robot, using a sensor that moves about in the world. From the known camera position at which an image is obtained, and two-dimensional silhouettes of the image, a series of cones is projected to describe the possible positions of the objects in the space. When an object is seen from several viewpoints, the intersections of the cones constrain the position and size of the object. After several views have been processed, the representation of the object begins to resemble its true shape. At all times, the spatial representation contains the best guess at the true situation in the world, with uncertainties in position and shape explicitly represented. An octree is used as the data structure for the representation. It not only provides a relatively compact representation, but also allows fast access to information and enables large parts of the workspace to be ignored. The purpose of constructing this representation is not so much to recognize objects as to describe the volumes in the workspace that are occupied and those that are empty. This enables trajectory planning to be carried out, and also provides a means of spatially indexing objects without needing to represent the objects at an extremely fine resolution. The spatial representation is one part of a more complex representation of the workspace used by the sensory system of a robot manipulator in understanding its environment.
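
    The octree data structure mentioned above can be sketched as a recursive cube subdivision: each node covers a cube and splits into eight octants until a minimum cell size is reached, so occupied volumes are stored compactly and large empty regions stay cheap to skip. This is a minimal point-occupancy sketch, not the paper's full representation (which also records uncertainty in position and shape).

```python
class Octree:
    """Point-occupancy octree over a cube centered at `center` with
    half-width `half`; cells stop subdividing at half-width `min_half`."""

    def __init__(self, center, half, min_half=1.0):
        self.center, self.half, self.min_half = center, half, min_half
        self.children = None    # eight children once subdivided
        self.occupied = False

    def _child_index(self, p):
        cx, cy, cz = self.center
        # One bit per axis selects the octant containing point p.
        return (p[0] >= cx) | ((p[1] >= cy) << 1) | ((p[2] >= cz) << 2)

    def insert(self, p):
        if self.half <= self.min_half:
            self.occupied = True          # leaf cell: mark as occupied
            return
        if self.children is None:
            h = self.half / 2
            cx, cy, cz = self.center
            self.children = [
                Octree((cx + (h if i & 1 else -h),
                        cy + (h if i & 2 else -h),
                        cz + (h if i & 4 else -h)), h, self.min_half)
                for i in range(8)]
        self.children[self._child_index(p)].insert(p)

    def is_occupied(self, p):
        if self.children is None:
            return self.occupied          # undivided region: one answer
        return self.children[self._child_index(p)].is_occupied(p)
```

A query descends only one branch per level, so deciding whether a candidate trajectory cell is free takes time logarithmic in the workspace resolution, which is what makes octrees attractive for the trajectory planning the abstract describes.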

  2. A standardized set of 3-D objects for virtual reality research and applications.

    PubMed

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow science to move forward more quickly.

  3. Assessing the performance of a motion tracking system based on optical joint transform correlation

    NASA Astrophysics Data System (ADS)

    Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.

    2015-08-01

    We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when the target is in motion. The system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.

  4. The Pop out of Scene-Relative Object Movement against Retinal Motion Due to Self-Movement

    ERIC Educational Resources Information Center

    Rushton, Simon K.; Bradshaw, Mark F.; Warren, Paul A.

    2007-01-01

    An object that moves is spotted almost effortlessly; it "pops out." When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion.…

  5. Optimal motion planning for collision avoidance of mobile robots in non-stationary environments

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An optimal control formulation of the problem of collision avoidance of mobile robots moving in general terrains containing moving obstacles is presented. A dynamic model of the mobile robot and the dynamic constraints are derived. Collision avoidance is guaranteed if the minimum distance between the robot and the object is nonzero. A nominal trajectory is assumed to be known from off-line planning. The main idea is to change the velocity along the nominal trajectory so that collisions are avoided. Time consistency with the nominal plan is desirable. A numerical solution of the optimization problem is obtained. A perturbation control type of approach is used to update the optimal plan. Simulation results verify the value of the proposed strategy.

  6. Multi-view video segmentation and tracking for video surveillance

    NASA Astrophysics Data System (ADS)

    Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj

    2009-05-01

    Tracking moving objects is a critical step for smart video surveillance systems. Despite the complexity increase, multiple camera systems exhibit the undoubted advantages of covering wide areas and handling the occurrence of occlusions by exploiting the different viewpoints. The technical problems in multiple camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify objects in each view separately for inconsistencies. Correspondent objects are extracted through a homography transform from one view to the other and vice versa. Having found the correspondent objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and mapped frame) a set of descriptors is extracted to find the best match between two views based on region-descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, and the appearance and disappearance of objects, are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
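
    The cross-view correspondence step above hinges on a planar homography: a 3x3 matrix that maps image points in one view to points in another view of the same plane. A minimal sketch of applying one follows; the matrix used in the example is a made-up translation, whereas in practice H would be estimated from ground-plane point correspondences between the two cameras.

```python
def apply_homography(H, point):
    """Map a 2-D point through the 3x3 homography H (row-major nested
    lists) using homogeneous coordinates: (x, y, 1) -> (u*w, v*w, w)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return (u, v)
```

Mapping a tracked object's footprint from camera A into camera B's image and checking which track it lands on is the essence of the consistent-labeling step; the region maps described in the abstract are produced by applying the same transform to every region.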

  7. The Jet Propulsion Laboratory shared control architecture and implementation

    NASA Technical Reports Server (NTRS)

    Backes, Paul G.; Hayati, Samad

    1990-01-01

    A hardware and software environment for shared control of telerobot task execution has been implemented. Modes of task execution range from fully teleoperated to fully autonomous, as well as shared, where hand controller inputs from the human operator are mixed with autonomous system inputs in real time. The objective of the shared control environment is to aid the telerobot operator during task execution by merging real-time operator control from hand controllers with autonomous control to simplify task execution for the operator. The operator is the principal command source and can assign as much autonomy for a task as desired. The shared control hardware environment consists of two PUMA 560 robots, two 6-axis force-reflecting hand controllers, Universal Motor Controllers for each of the robots and hand controllers, a SUN4 computer, and a VME chassis containing 68020 processors and input/output boards. The operator interface for shared control, the User Macro Interface (UMI), is a menu-driven interface used to design a task and assign the levels of teleoperated and autonomous control. The operator also sets up the system monitor, which checks safety limits during task execution. Cartesian-space degrees of freedom for teleoperated and/or autonomous control inputs are selected within UMI, as well as the weightings for the teleoperated and autonomous inputs. These are then used during task execution to determine the mix of teleoperation and autonomous inputs. Some of the autonomous control primitives available to the user are Joint-Guarded-Move, Cartesian-Guarded-Move, Move-To-Touch, Pin-Insertion/Removal, Door/Crank-Turn, Bolt-Turn, and Slide. The operator can execute a task using pure teleoperation or mix control execution from the autonomous primitives with teleoperated inputs. Presently the shared control environment supports single-arm task execution. Work is underway to provide the shared control environment for dual-arm control. 
Teleoperation during shared control is limited to Cartesian-space control, and no force reflection is provided. Force-reflecting teleoperation and joint-space operator inputs are planned extensions to the environment.

  8. Optical system for object detection and delineation in space

    NASA Astrophysics Data System (ADS)

    Handelman, Amir; Shwartz, Shoam; Donitza, Liad; Chaplanov, Loran

    2018-01-01

    Object recognition and delineation is an important task in many environments, such as crime scenes and operating rooms. Marking evidence or surgical tools and attracting the attention of the surrounding staff to the marked objects can affect people's lives. We present an optical system comprising a camera, computer, and small laser projector that can detect and delineate objects in the environment. To prove the optical system's concept, we show that it can operate in a hypothetical crime scene in which a pistol is present, and automatically recognize and segment it by various computer-vision algorithms. Based on such segmentation, the laser projector illuminates the actual boundaries of the pistol and thus allows the persons in the scene to comfortably locate and measure the pistol without holding any intermediary device, such as an augmented reality handheld device, glasses, or screens. Using additional optical devices, such as a diffraction grating and a cylinder lens, the pistol size can be estimated. The exact location of the pistol in space remains static, even after its removal. Our optical system can be fixed or dynamically moved, making it suitable for various applications that require marking of objects in space.

  9. Shape-based human detection for threat assessment

    NASA Astrophysics Data System (ADS)

    Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.

    2004-07-01

    Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments. Any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best-matched contour in the database and to distinguish a human from other objects at different viewing angles and distances.
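
    The tangent-space representation mentioned above can be sketched as a "turning function": the closed contour is re-encoded as cumulative turning angle versus normalized arc length, which is invariant to translation and scale and shifts only by a constant offset under rotation, making shape comparison a simple curve comparison. A minimal sketch, with the sampling convention chosen purely for illustration:

```python
import math

def turning_function(polygon):
    """Return (arc_length_fraction, cumulative_turn) samples for a closed
    polygon given as a list of (x, y) vertices in traversal order."""
    n = len(polygon)
    # Edge vectors and lengths of the closed contour.
    edges = [(polygon[(i + 1) % n][0] - polygon[i][0],
              polygon[(i + 1) % n][1] - polygon[i][1]) for i in range(n)]
    lengths = [math.hypot(dx, dy) for dx, dy in edges]
    total = sum(lengths)
    samples, s, turn = [], 0.0, 0.0
    for i in range(n):
        a0 = math.atan2(edges[i - 1][1], edges[i - 1][0])
        a1 = math.atan2(edges[i][1], edges[i][0])
        d = a1 - a0
        d = (d + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi)
        turn += d
        samples.append((s / total, turn))
        s += lengths[i]
    return samples
```

Two contours can then be compared by the distance between their turning functions (the paper matches them with a power cepstrum technique); a human silhouette and a four-legged animal produce clearly different curves regardless of where in the image they appear.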

  10. Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach

    PubMed Central

    Cheein, Fernando Auat; Scaglia, Gustavo

    2012-01-01

    Stationary range laser sensors for intruder monitoring, restricted-space violation detection and workspace determination are extensively used in risky environments. In this work we present a linear approach for predicting the presence of moving agents before they trespass into a laser-monitored restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experimental results (carried out in different scenarios) presented herein, our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and a statistical analysis showing the performance of our proposal are included in this work.
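
    The Taylor-series prediction idea above can be sketched by estimating velocity and acceleration from the last three position fixes with finite differences and extrapolating the second-order expansion over a short horizon; an alarm fires if any extrapolated position falls inside the restricted zone. The function name, parameters, and zone predicate are assumptions made for this example, not the paper's implementation.

```python
def predict_trespass(positions, dt, horizon, inside_zone):
    """positions: the last three (x, y) fixes, oldest first, sampled every
    dt seconds. Returns True if the object is predicted to enter the
    restricted zone within `horizon` seconds."""
    (x0, y0), (x1, y1), (x2, y2) = positions
    vx, vy = (x2 - x1) / dt, (y2 - y1) / dt            # first-order term
    ax = (x2 - 2 * x1 + x0) / dt ** 2                  # second-order term
    ay = (y2 - 2 * y1 + y0) / dt ** 2
    steps = int(horizon / dt)
    for k in range(1, steps + 1):
        t = k * dt
        px = x2 + vx * t + 0.5 * ax * t * t            # Taylor extrapolation
        py = y2 + vy * t + 0.5 * ay * t * t
        if inside_zone((px, py)):
            return True
    return False
```

Because the prediction uses only a handful of multiplications and additions per step, it fits the embedded, low-cost setting the abstract emphasizes.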

  11. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains

    PubMed Central

    Egelhaaf, Martin; Kern, Roland; Lindemann, Jens Peter

    2014-01-01

    Despite their miniature brains, insects such as flies, bees and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around (“optic flow”) to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases in which the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, the motion detectors that are widespread in biological systems do not veridically represent the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of biological motion detection mechanisms. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates, in a computationally parsimonious way, the environment into behaviorally relevant nearby objects and—in many behavioral contexts—less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism. PMID:25389392
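
    The biological motion detectors referred to here are classically modeled as Hassenstein-Reichardt correlators. A minimal sketch of such a correlator (a textbook model, not code from this paper) illustrates why its output is not a veridical velocity signal: the response scales with stimulus contrast and texture, not with speed alone:

```python
def reichardt_response(left, right, delay=1):
    """Minimal Hassenstein-Reichardt correlator: correlate the delayed
    signal of one photoreceptor with the undelayed signal of its
    neighbour, in both directions, and subtract the two branches."""
    out = []
    for t in range(delay, len(left)):
        out.append(left[t - delay] * right[t] - right[t - delay] * left[t])
    return sum(out)

# A bright edge moving left-to-right reaches `right` one step after `left`.
left  = [0, 1, 1, 1, 0, 0]
right = [0, 0, 1, 1, 1, 0]
print(reichardt_response(left, right) > 0)  # positive: preferred direction
```

    Doubling the edge contrast quadruples the response of this detector even at identical speed, which is exactly the non-veridical, texture-dependent behavior the abstract argues is useful for contour extraction.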

  12. Motion as a source of environmental information: a fresh view on biological motion computation by insect brains.

    PubMed

    Egelhaaf, Martin; Kern, Roland; Lindemann, Jens Peter

    2014-01-01

    Despite their miniature brains, insects such as flies, bees and wasps are able to navigate by highly aerobatic flight maneuvers in cluttered environments. They rely on spatial information that is contained in the retinal motion patterns induced on the eyes while moving around ("optic flow") to accomplish their extraordinary performance. Thereby, they employ an active flight and gaze strategy that separates rapid saccade-like turns from translatory flight phases in which the gaze direction is kept largely constant. This behavioral strategy facilitates the processing of environmental information, because information about the distance of the animal to objects in the environment is only contained in the optic flow generated by translatory motion. However, the motion detectors that are widespread in biological systems do not veridically represent the velocity of the optic flow vectors, but also reflect textural information about the environment. This characteristic has often been regarded as a limitation of biological motion detection mechanisms. In contrast, we conclude from analyses challenging insect movement detectors with image flow as generated during translatory locomotion through cluttered natural environments that this mechanism represents the contours of nearby objects. Contrast borders are a main carrier of functionally relevant object information in artificial and natural sceneries. The motion detection system thus segregates, in a computationally parsimonious way, the environment into behaviorally relevant nearby objects and, in many behavioral contexts, less relevant distant structures. Hence, by making use of an active flight and gaze strategy, insects are capable of performing extraordinarily well even with a computationally simple motion detection mechanism.

  13. Motion detection, novelty filtering, and target tracking using an interferometric technique with GaAs phase conjugate mirror

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)

    1991-01-01

    A method and apparatus for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information normally cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is thrown out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; the observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changes more slowly than the formation time, the change becomes unobservable, because the index grating can follow it. Thus, objects traveling at speeds that change the optical path more slowly than the grating formation time are not observable and do not clutter the output image view.

  14. Indoor Trajectory Tracking Scheme Based on Delaunay Triangulation and Heuristic Information in Wireless Sensor Networks.

    PubMed

    Qin, Junping; Sun, Shiwen; Deng, Qingxu; Liu, Limin; Tian, Yonghong

    2017-06-02

    Object tracking and detection is one of the most significant research areas for wireless sensor networks. Existing indoor trajectory tracking schemes in wireless sensor networks are based on continuous localization and moving object data mining. Indoor trajectory tracking based on the received signal strength indicator (RSSI) has received increased attention because it has low cost and requires no special infrastructure. However, RSSI tracking introduces uncertainty because of the inaccuracies of measurement instruments and the irregularities (instability, multipath, diffraction) of wireless signal transmission in indoor environments. Heuristic information provides key constraints for trajectory tracking procedures. This paper proposes a novel trajectory tracking scheme based on Delaunay triangulation and heuristic information (TTDH). In this scheme, the entire field is divided into a series of triangular regions. The common side of adjacent triangular regions is regarded as a regional boundary. Our scheme detects heuristic information related to a moving object's trajectory, including boundaries and triangular regions. Then, the trajectory is formed by means of a dynamic time-warping position-fingerprint-matching algorithm with heuristic information constraints. Field experiments show that the average error distance of our scheme is less than 1.5 m, and that error does not accumulate across regions.
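
    The position-fingerprint-matching step uses dynamic time warping (DTW). A minimal DTW sketch, assuming each fingerprint element is a vector of RSSI readings, one per anchor node (the data values below are invented for illustration):

```python
import math

def dtw_distance(seq_a, seq_b):
    """Classic dynamic time warping between two RSSI fingerprint
    sequences; each element is a vector of readings (one per anchor)."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Observed RSSI trajectory vs. two stored region fingerprints (dBm):
observed = [(-60, -71), (-58, -69), (-55, -66)]
region_a = [(-61, -72), (-57, -68), (-54, -65)]
region_b = [(-40, -90), (-42, -88), (-45, -85)]
print(dtw_distance(observed, region_a) < dtw_distance(observed, region_b))  # True
```

    In the TTDH scheme, the candidate fingerprints compared this way would additionally be constrained to the triangular regions and boundaries the heuristic stage has identified.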

  15. Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material.

    PubMed

    Usamentiaga, Rubén; García, Daniel Fernando

    2017-05-18

    Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal being poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios, treating real movements and vibrations equally in a unified solution to both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a fronto-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for the temperature of moving material that can also be used to measure the material's speed. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance.
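
    The motion estimation and compensation steps can be illustrated with a pure-translation model estimated by phase correlation; this is a simplified stand-in for the paper's robust model fitting (the array sizes and simulated shift are invented):

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate an integer (dy, dx) translation of `frame` relative to
    `reference` via FFT-based phase correlation."""
    f = np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = reference.shape
    # Wrap large indices around to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def compensate(frame, shift):
    """Undo the estimated motion so the temperature time history is
    taken at fixed material coordinates."""
    return np.roll(frame, (-shift[0], -shift[1]), axis=(0, 1))

rng = np.random.default_rng(0)
reference = rng.random((64, 64))            # a "thermal" frame
moved = np.roll(reference, (3, -2), axis=(0, 1))  # simulated movement
shift = estimate_shift(reference, moved)
print(shift)  # (3, -2)
```

    The same estimated shift, divided by the frame interval and scaled by the rectified pixel size, would give the material speed mentioned in the abstract.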

  16. Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material

    PubMed Central

    Usamentiaga, Rubén; García, Daniel Fernando

    2017-01-01

    Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal being poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios, treating real movements and vibrations equally in a unified solution to both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a fronto-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for the temperature of moving material that can also be used to measure the material's speed. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance. PMID:28524110

  17. Modulation of Visually Evoked Postural Responses by Contextual Visual, Haptic and Auditory Information: A ‘Virtual Reality Check’

    PubMed Central

    Meyer, Georg F.; Shao, Fei; White, Mark D.; Hopkins, Carl; Robotham, Antony J.

    2013-01-01

    Externally generated visual motion signals can cause the illusion of self-motion in space (vection) and corresponding visually evoked postural responses (VEPR). These VEPRs are not simple responses to optokinetic stimulation, but are modulated by the configuration of the environment. The aim of this paper is to explore what factors modulate VEPRs in a high quality virtual reality (VR) environment where real and virtual foreground objects served as static visual, auditory and haptic reference points. Data from four experiments on visually evoked postural responses show that: 1) visually evoked postural sway in the lateral direction is modulated by the presence of static anchor points that can be haptic, visual and auditory reference signals; 2) real objects and their matching virtual reality representations as visual anchors have different effects on postural sway; 3) visual motion in the anterior-posterior plane induces robust postural responses that are not modulated by the presence of reference signals or the reality of objects that can serve as visual anchors in the scene. We conclude that automatic postural responses for laterally moving visual stimuli are strongly influenced by the configuration and interpretation of the environment and draw on multisensory representations. Different postural responses were observed for real and virtual visual reference objects. On the basis that automatic visually evoked postural responses in high fidelity virtual environments should mimic those seen in real situations we propose to use the observed effect as a robust objective test for presence and fidelity in VR. PMID:23840760

  18. Emotional valence and contextual affordances flexibly shape approach-avoidance movements

    PubMed Central

    Saraiva, Ana Carolina; Schüür, Friederike; Bestmann, Sven

    2013-01-01

    Behavior is influenced by the emotional content—or valence—of stimuli in our environment. Positive stimuli facilitate approach, whereas negative stimuli facilitate defensive actions such as avoidance (flight) and attack (fight). Facilitation of approach or avoidance movements may also be influenced by whether it is the self that moves relative to a stimulus (self-reference) or the stimulus that moves relative to the self (object-reference), adding flexibility and context-dependence to behavior. Alternatively, facilitation of approach-avoidance movements may happen in a pre-defined and muscle-specific way, whereby arm flexion is faster to approach positive stimuli (e.g., flexing the arm brings a stimulus closer) and arm extension faster to avoid negative stimuli (e.g., extending the arm moves the stimulus away). While this allows for relatively fast responses, it may compromise the flexibility offered by contextual influences. Here we asked under which conditions approach-avoidance actions are influenced by contextual factors (i.e., reference-frame). We manipulated the reference-frame in which actions occurred by asking participants to move a symbolic manikin (representing the self) toward or away from a positive or negative stimulus, and to move a stimulus toward or away from the manikin. We also controlled for the type of movements used to approach or avoid in each reference. We show that the reference-frame influences approach-avoidance actions to emotional stimuli, but additionally we find muscle-specificity for negative stimuli in self-reference contexts. We speculate this muscle-specificity may be a fast and adaptive response to threatening stimuli. Our results confirm that approach-avoidance behavior is flexible and reference-frame dependent, but can be muscle-specific depending on the context and valence of the stimulus. Reference-frame and stimulus-evaluation are key factors in guiding approach-avoidance behavior toward emotional stimuli in our environment. PMID:24379794

  19. Analysis of plant soil seed banks and seed dispersal vectors: Its potential and limits for forensic investigations.

    PubMed

    Šumberová, Kateřina; Ducháček, Michal

    2017-01-01

    Plant seeds exhibit many species-specific traits, thus potentially being especially helpful for forensic investigations. Seeds of a broad range of plant species occur in soil seed banks of various habitats and may become attached in large quantities to moving objects. Although plant seeds are now routinely used as trace evidence in forensic practice, only scant information has been published on this topic in the scientific literature. Thus, the standard methods remain unknown to specialists in such botanical subjects as plant ecology and plant geography. These specialists, if made aware of the forensic uses of seeds, could help in development of new, more sophisticated approaches. We aim to bridge the gap between forensic analysts and botanists. Therefore, we explore the available literature and compare it with our own experiences to reveal both the potential and limits of soil seed bank and seed dispersal analysis in forensic investigations. We demonstrate that habitat-specific and thus relatively rare species are of the greatest forensic value. Overall species composition, in terms of species presence/absence and relative abundance can also provide important information. In particular, the ecological profiles of seeds found on any moving object can help us identify the types of environments through which the object had travelled. We discuss the applicability of this approach to various European environments, with the ability to compare seed samples with georeferenced vegetation databases being particularly promising for forensic investigations. We also explore the forensic limitations of soil seed bank and seed dispersal vector analyses. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. A kickball game for ankle rehabilitation by JAVA, JNI, and VRML

    NASA Astrophysics Data System (ADS)

    Choi, Hyungjeen; Ryu, Jeha; Lee, Chansu

    2004-03-01

    This paper presents the development of a virtual environment that can be applied to an ankle rehabilitation procedure. We developed a virtual football stadium to engage the patient, in which a two-degree-of-freedom (DOF) plate-shaped object is oriented to kick a ball falling from the sky in accordance with data from the ankle's dorsiflexion/plantarflexion and inversion/eversion motion on the moving platform of the K-Platform. This Kickball Game is implemented in the Virtual Reality Modeling Language (VRML). To control virtual objects, data from the K-Platform are transmitted through a communication module implemented in C++. Java, the Java Native Interface (JNI) and a VRML plug-in are combined to interface the communication module with the VRML virtual environment. This game may be applied to the Active Range of Motion (AROM) exercise procedure, one of the standard ankle rehabilitation procedures.

  1. Walking through doorways causes forgetting: environmental integration.

    PubMed

    Radvansky, Gabriel A; Tamplin, Andrea K; Krawietz, Sabine A

    2010-12-01

    Memory for objects declines when people move from one location to another (the location updating effect). However, it is unclear whether this is attributable to event model updating or to task demands. The focus here was on the degree of integration for probed-for information with the experienced environment. In prior research, the probes were verbal labels of visual objects. Experiment 1 assessed whether this was a consequence of an item-probe mismatch, as with transfer-appropriate processing. Visual probes were used to better coordinate what was seen with the nature of the memory probe. In Experiment 2, people received additional word pairs to remember, which were less well integrated with the environment, to assess whether the probed-for information needed to be well integrated. The results showed location updating effects in both cases. These data are consistent with an event cognition view that mental updating of a dynamic event disrupts memory.

  2. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  3. Barrier Effects in Non-retinotopic Feature Attribution

    PubMed Central

    Aydin, Murat; Herzog, Michael H.; Öğmen, Haluk

    2011-01-01

    When objects move in the environment, their retinal images can undergo drastic changes and features of different objects can be inter-mixed in the retinal image. Notwithstanding these changes and ambiguities, the visual system is capable of establishing correctly feature-object relationships as well as maintaining individual identities of objects through space and time. Recently, by using a Ternus-Pikler display, we have shown that perceived motion correspondences serve as the medium for non-retinotopic attribution of features to objects. The purpose of the work reported in this manuscript was to assess whether perceived motion correspondences provide a sufficient condition for feature attribution. Our results show that the introduction of a static “barrier” stimulus can interfere with the feature attribution process. Our results also indicate that the barrier stops feature attribution based on interferences related to the feature attribution process itself rather than on mechanisms related to perceived motion. PMID:21767561

  4. Pleasant to the Touch: By Emulating Nature, Scientists Hope to Find Innovative New Uses for Soft Robotics in Health-Care Technology.

    PubMed

    Cianchetti, Matteo; Laschi, Cecilia

    2016-01-01

    Open your Internet browser and search for videos showing the most advanced humanoid robots. Look at how they move and walk. Observe their motion and their interaction with the environment (the ground, users, target objects). Now, search for a video of your favorite sports player. Despite the undoubtedly great achievements of modern robotics, it will become quite evident that a lot of work still remains.

  5. Effects of a Moving Distractor Object on Time-to-Contact Judgments

    ERIC Educational Resources Information Center

    Oberfeld, Daniel; Hecht, Heiko

    2008-01-01

    The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a…

  6. Perceptual impressions of causality are affected by common fate.

    PubMed

    White, Peter A

    2017-03-24

    Many studies of perceptual impressions of causality have used a stimulus in which a moving object (the launcher) contacts a stationary object (the target) and the latter then moves off. Such stimuli give rise to an impression that the launcher makes the target move. In the present experiments, instead of a single target object, an array of four vertically aligned objects was used. The launcher contacted none of them, but stopped at a point between the two central objects. The four objects then moved with similar motion properties, exhibiting the Gestalt property of common fate. Strong impressions of causality were reported for this stimulus. It is argued that the array of four objects was perceived, by the likelihood principle, as a single object with some parts unseen, that the launcher was perceived as contacting one of the unseen parts of this object, and that the causal impression resulted from that. Supporting that argument, stimuli in which kinematic features were manipulated so as to weaken or eliminate common fate yielded weaker impressions of causality.

  7. Moving vehicles segmentation based on Gaussian motion model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.

    2005-07-01

    Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. First, we propose an adaptive background update method in which the background is updated according to changes in illumination conditions and can thus adapt sensitively to those changes. Second, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model detects moving vehicles correctly and is robust to interference from moving objects caused by waving trees and camera vibration.
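
    The adaptive background update in the first part can be sketched as a per-pixel running Gaussian model (a common simplification; the paper's second part additionally fits motion vectors with an on-line EM algorithm, which is not shown here, and all values below are illustrative):

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05):
    """Running per-pixel Gaussian background model: mean and variance
    adapt to gradual illumination changes at learning rate alpha."""
    diff = frame - mean
    mean = mean + alpha * diff
    var = (1 - alpha) * var + alpha * diff ** 2
    return mean, var

def foreground_mask(mean, var, frame, k=2.5):
    """A pixel is foreground if it deviates from the background
    Gaussian by more than k standard deviations."""
    return np.abs(frame - mean) > k * np.sqrt(var)

mean = np.full((4, 4), 100.0)      # background intensity estimate
var = np.full((4, 4), 4.0)         # background intensity variance
frame = mean.copy()
frame[1, 1] = 180.0                # a moving-vehicle pixel
mask = foreground_mask(mean, var, frame)
print(mask.sum())  # 1 foreground pixel
mean, var = update_background(mean, var, frame)
```

    Because the model keeps updating, a slow global illumination drift raises every pixel's mean together and never triggers the foreground test, which is the sensitivity-to-illumination property the abstract claims.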

  8. Moving Object Localization Based on UHF RFID Phase and Laser Clustering

    PubMed Central

    Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq

    2018-01-01

    RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited since RFID provides neither distance nor bearing information about the tag. This paper proposes a new and innovative approach for the localization of a moving object using a particle filter that incorporates RFID phase and laser-based clustering from 2D laser range data. First, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into different clusters, and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities, and select the K clusters with the best similarity scores. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of moving objects. The feasibility of this approach is validated on a Scitos G5 service robot and the results prove that we have successfully achieved a localization accuracy up to 0.25 m. PMID:29522458
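
    The phase-based velocity in the first step follows from the standard UHF backscatter relation phi = (4*pi*d / lambda) mod 2*pi, so the radial velocity is v = (lambda / 4*pi) * dphi/dt. A minimal sketch (the carrier frequency and sampling interval are illustrative, not the paper's values):

```python
import math

SPEED_OF_LIGHT = 3e8  # m/s

def phase_velocity(phase1, phase2, dt, freq=920e6):
    """Radial velocity of a tagged object from two consecutive RFID
    phase readings. The round-trip backscatter phase is
    phi = (4*pi*d / lambda) mod 2*pi, hence v = (lambda/(4*pi)) * dphi/dt."""
    wavelength = SPEED_OF_LIGHT / freq
    # Unwrap the phase difference into (-pi, pi].
    dphi = (phase2 - phase1 + math.pi) % (2 * math.pi) - math.pi
    return wavelength * dphi / (4 * math.pi * dt)

# Tag moving away: phase grows by 0.4 rad over 0.1 s at 920 MHz.
v = phase_velocity(0.0, 0.4, dt=0.1)
print(round(v, 4))  # ~0.1038 m/s
```

    This radial velocity is what gets compared against the distance-based velocities of the laser clusters in the similarity step.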

  9. Integration across Time Determines Path Deviation Discrimination for Moving Objects

    PubMed Central

    Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.

    2008-01-01

    Background Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects–a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653

  10. Sensory prediction on a whiskered robot: a tactile analogy to “optical flow”

    PubMed Central

    Schroeder, Christopher L.; Hartmann, Mitra J. Z.

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the “optical flow” equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that “flows” over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip. PMID:23097641
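
    The local curvature estimate from contact points across the array can be sketched with the Menger (three-point) curvature, kappa = 4*A/(a*b*c); this is a generic geometric construction, not necessarily the authors' exact formulation:

```python
import math

def menger_curvature(p1, p2, p3):
    """Local curvature of an object surface from three contact points
    (e.g., those sensed by adjacent whiskers), using the
    circumscribed-circle relation kappa = 4*area / (a*b*c)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Twice the (unsigned) triangle area via the 2D cross product.
    cross = ((p2[0] - p1[0]) * (p3[1] - p1[1])
             - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    return 2.0 * abs(cross) / (a * b * c)

# Three contact points on a circle of radius 2: curvature is 1/2.
pts = [(2 * math.cos(t), 2 * math.sin(t)) for t in (0.0, 0.3, 0.6)]
print(round(menger_curvature(*pts), 6))  # 0.5
```

    Given the estimated curvature and the array's translation velocity, the next contact point can be extrapolated along the fitted arc, which is the role of the curvature-based prediction algorithm in the abstract.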

  11. Sensory prediction on a whiskered robot: a tactile analogy to "optical flow".

    PubMed

    Schroeder, Christopher L; Hartmann, Mitra J Z

    2012-01-01

    When an animal moves an array of sensors (e.g., the hand, the eye) through the environment, spatial and temporal gradients of sensory data are related by the velocity of the moving sensory array. In vision, the relationship between spatial and temporal brightness gradients is quantified in the "optical flow" equation. In the present work, we suggest an analog to optical flow for the rodent vibrissal (whisker) array, in which the perceptual intensity that "flows" over the array is bending moment. Changes in bending moment are directly related to radial object distance, defined as the distance between the base of a whisker and the point of contact with the object. Using both simulations and a 1×5 array (row) of artificial whiskers, we demonstrate that local object curvature can be estimated based on differences in radial distance across the array. We then develop two algorithms, both based on tactile flow, to predict the future contact points that will be obtained as the whisker array translates along the object. The translation of the robotic whisker array represents the rat's head velocity. The first algorithm uses a calculation of the local object slope, while the second uses a calculation of the local object curvature. Both algorithms successfully predict future contact points for simple surfaces. The algorithm based on curvature was found to more accurately predict future contact points as surfaces became more irregular. We quantify the inter-related effects of whisker spacing and the object's spatial frequencies, and examine the issues that arise in the presence of real-world noise, friction, and slip.

  12. A functional magnetic resonance imaging assessment of small animals' phobia using virtual reality as a stimulus.

    PubMed

    Clemente, Miriam; Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar

    2014-06-27

    To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small animals' phobia. The objective of our study was to evaluate the brain activations associated with small animals' phobia through the use of virtual environments. This context has the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. We have analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. We have found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), an area that has previously been related to the feeling of self-awareness. In our opinion, these results demonstrate that virtual stimuli can elicit brain activations consistent with previous studies using still images, but in an environment closer to the real situation the subject would face in daily life.

  13. Come together, right now: dynamic overwriting of an object's history through common fate.

    PubMed

    Luria, Roy; Vogel, Edward K

    2014-08-01

    The objects around us constantly move and interact, and the perceptual system needs to monitor these interactions on-line and update each object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial object representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as separate, even after a Gestalt proximity cue (when the objects "met" and remained stationary at the same position). Only a strong common fate Gestalt cue (when the objects not just met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object's initial representation plays an important role and can override even powerful grouping cues.

  14. Strategies to Evaluate the Visibility Along AN Indoor Path in a Point Cloud Representation

    NASA Astrophysics Data System (ADS)

    Grasso, N.; Verbree, E.; Zlatanova, S.; Piras, M.

    2017-09-01

    Many research works have been oriented to the formulation of algorithms for estimating paths in indoor environments from three-dimensional representations of space. The architectural configuration, the activities that take place within it, and the location of objects in the space influence the paths along which it is possible to move, as they may cause visibility problems. To overcome the visibility issue, different methods have been proposed that identify the areas visible from a certain point of view, but they often do not take into account the user's visual perception of the environment and do not allow estimating how complicated it may be to follow a certain path. In the fields of space syntax and cognitive science, the characteristics of a building or an urban environment have been described by isovist and visibility-graph methods; some numerical properties of these representations describe the space as it is perceived by a user. However, most of these studies analyze the environment in a two-dimensional space. In this paper we propose a method to evaluate quantitatively the complexity of a certain path within an environment represented by a three-dimensional point cloud, combining some of the previously mentioned techniques and considering the space visible from a certain point of view, depending on the moving agent (pedestrian, person in a wheelchair, UAV, UGV, robot).
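
    A minimal 2D version of the isovist idea can be sketched by ray casting over an occupancy grid: the isovist of a viewpoint is the set of free cells its rays reach before hitting an obstacle. This is only an illustration of the visibility concept, not the paper's 3D point-cloud method; the grid layout and parameters are invented.

```python
import math

def isovist(grid, origin, n_rays=360, max_range=20.0, step=0.1):
    """Collect the free cells of `grid` visible from `origin` by ray casting.
    grid[i][j] is truthy for occupied cells, which block rays."""
    visible = set()
    ox, oy = origin
    for k in range(n_rays):
        angle = 2 * math.pi * k / n_rays
        dx, dy = math.cos(angle), math.sin(angle)
        t = 0.0
        while t < max_range:
            i, j = int(ox + t * dx), int(oy + t * dy)
            if not (0 <= i < len(grid) and 0 <= j < len(grid[0])):
                break                     # ray left the map
            if grid[i][j]:
                break                     # ray hit an obstacle
            visible.add((i, j))
            t += step
    return visible
```

    Numerical properties of the visible set (area, perimeter, compactness) are the kinds of isovist measures the paper combines to score path complexity.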

  15. Research on measurement method of optical camouflage effect of moving object

    NASA Astrophysics Data System (ADS)

    Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen

    2016-10-01

    Camouflage effectiveness measurement is an important part of camouflage technology: it tests and measures the camouflage effect of a target and the performance of camouflage equipment against tactical and technical requirements. Current optical-band camouflage effectiveness measurement is aimed mainly at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines moving-object detection with camouflage effect detection, taking the digital camouflage of a moving object as the research object. Surendra's adaptive background-update algorithm was improved, and a method for measuring optical camouflage effect using the Lab color space during moving-object detection is presented. A binary image of the moving object is extracted, and in the image sequence a feature vector space is constructed from characteristic parameters such as dispersion, eccentricity, complexity, and moment invariants. The Euclidean distance for the moving target with digital camouflage was calculated; the average Euclidean distance over 375 frames was 189.45, indicating that the dispersion, eccentricity, complexity, and moment invariants of the digitally camouflaged target differ greatly from those of the uncamouflaged moving target. The measurement results show that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation, reflecting the adaptability of target and background under dynamic conditions. As a next step, building on existing infrared camouflage technology, we plan to extend this moving-target camouflage effect measurement to the infrared band.
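
    The feature-vector comparison described above can be illustrated with a toy version: compute simple shape descriptors from a binary moving-object mask and measure the Euclidean distance between the feature vectors of two frames. The descriptors below (area, boundary-pixel perimeter, circularity-style complexity) are stand-ins for the paper's dispersion, eccentricity, complexity, and moment invariants.

```python
import math

def shape_features(mask):
    """Area, perimeter and complexity of a binary mask (1 = object pixel).
    Perimeter counts object pixels with at least one 4-neighbour background."""
    h, w = len(mask), len(mask[0])
    area = sum(sum(row) for row in mask)
    perimeter = 0
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if any(not (0 <= a < h and 0 <= b < w) or not mask[a][b]
                   for a, b in nbrs):
                perimeter += 1
    complexity = perimeter ** 2 / (4 * math.pi * area) if area else 0.0
    return [area, perimeter, complexity]

def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
```

    Averaging `euclidean` over all frame pairs (camouflaged vs. uncamouflaged) gives a scalar of the kind reported in the abstract.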

  16. Map generation in unknown environments by AUKF-SLAM using line segment-type and point-type landmarks

    NASA Astrophysics Data System (ADS)

    Nishihta, Sho; Maeyama, Shoichi; Watanebe, Keigo

    2018-02-01

    Recently, autonomous mobile robots that collect information at disaster sites are being developed. Since it is difficult to obtain maps of disaster sites in advance, robots capable of autonomous movement in unknown environments are required. To this end, the robots must build maps while estimating their own location; this is known as the SLAM problem. In particular, AUKF-SLAM, which uses corners in the environment as point-type landmarks, has been developed as a solution. However, when a robot moves in an environment such as a corridor, with few point-type features, the accuracy of the landmark-based self-location estimate decreases, causing distortions in the map. In this research, we propose AUKF-SLAM using walls in the environment as line segment-type landmarks. We demonstrate that the robot can generate maps in unknown environments by AUKF-SLAM using both line segment-type and point-type landmarks.
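
    Using a wall as a line segment-type landmark requires a measurement model relating the robot's position to the segment; the geometric core of that model is the point-to-segment distance, sketched below. The full AUKF update is omitted, and the function name is illustrative.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment from a to b, e.g. the expected
    range from a robot pose to a wall stored as a line segment-type landmark."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay          # segment direction
    wx, wy = px - ax, py - ay          # point relative to segment start
    seg_len2 = vx * vx + vy * vy
    # clamp the projection parameter so the closest point stays on the segment
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, (wx * vx + wy * vy) / seg_len2))
    cx, cy = ax + t * vx, ay + t * vy  # closest point on the segment
    return math.hypot(px - cx, py - cy)
```

    In a filter, the innovation would be the measured wall range minus this expected distance.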

  17. Swimming droplets driven by a surface wave

    PubMed Central

    Ebata, Hiroyuki; Sano, Masaki

    2015-01-01

    Self-propelling motion is ubiquitous for soft active objects such as crawling cells, active filaments, and liquid droplets moving on surfaces. Deformation and energy dissipation are required for self-propulsion of both living and non-living matter. From the perspective of physics, searching for universal laws of self-propelled motions in a dissipative environment is worthwhile, regardless of the objects' details. In this article, we propose a simple experimental system that demonstrates spontaneous migration of a droplet under uniform mechanical agitation. As we vary control parameters, spontaneous symmetry breaking occurs sequentially, and cascades of bifurcations of the motion arise. Equations describing deformable particles and hydrodynamic simulations successfully describe all of the observed motions. This system should enable us to improve our understanding of spontaneous motions of self-propelled objects. PMID:25708871

  18. Swimming droplets driven by a surface wave

    NASA Astrophysics Data System (ADS)

    Ebata, Hiroyuki; Sano, Masaki

    2015-02-01

    Self-propelling motion is ubiquitous for soft active objects such as crawling cells, active filaments, and liquid droplets moving on surfaces. Deformation and energy dissipation are required for self-propulsion of both living and non-living matter. From the perspective of physics, searching for universal laws of self-propelled motions in a dissipative environment is worthwhile, regardless of the objects' details. In this article, we propose a simple experimental system that demonstrates spontaneous migration of a droplet under uniform mechanical agitation. As we vary control parameters, spontaneous symmetry breaking occurs sequentially, and cascades of bifurcations of the motion arise. Equations describing deformable particles and hydrodynamic simulations successfully describe all of the observed motions. This system should enable us to improve our understanding of spontaneous motions of self-propelled objects.

  19. Self-Learning Embedded System for Object Identification in Intelligent Infrastructure Sensors.

    PubMed

    Villaverde, Monica; Perez, David; Moreno, Felix

    2015-11-17

    The emergence of new horizons in the field of travel assistance leads to the development of cutting-edge systems focused on improving the existing ones. Moreover, new opportunities are also arising as systems become more reliable and autonomous. In this paper, a self-learning embedded system for object identification based on adaptive-cooperative dynamic approaches is presented for intelligent sensor infrastructures. The proposed system is able to detect and identify moving objects using a dynamic decision tree, combining machine learning algorithms and cooperative strategies in order to make the system more adaptive to changing environments. The proposed system may therefore be very useful for many applications, such as shadow tolls (since several types of vehicles may be distinguished), parking optimization systems, and improved traffic-condition systems.
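
    As a loose illustration of a decision tree that adapts online (the abstract does not describe the paper's actual tree or features), consider a single length threshold separating two vehicle classes, nudged whenever a labeled example is misclassified. The classes, feature, and update rule here are all invented.

```python
class AdaptiveVehicleClassifier:
    """Toy one-node 'dynamic decision tree': a single length threshold,
    updated online from labeled examples."""

    def __init__(self, length_split=5.5):
        self.length_split = length_split  # metres; cars below, trucks above

    def classify(self, length):
        return "truck" if length >= self.length_split else "car"

    def update(self, length, label):
        # adaptive step: move the split toward a misclassified example so the
        # decision boundary tracks a changing environment
        if self.classify(length) != label:
            self.length_split = (self.length_split + length) / 2
```

    A cooperative variant would let neighbouring infrastructure sensors exchange their thresholds and average them.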

  20. Detection of Moving Targets Using Soliton Resonance Effect

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor K.; Zak, Michail

    2013-01-01

    The objective of this research was to develop a fundamentally new method for detecting hidden moving targets within noisy and cluttered data-streams using a novel "soliton resonance" effect in nonlinear dynamical systems. The technique uses an inhomogeneous Korteweg-de Vries (KdV) equation containing moving-target information. Solution of the KdV equation describes a soliton propagating with the same kinematic characteristics as the target. The approach uses the time-dependent data stream obtained with a sensor in the form of the "forcing function," which is incorporated in an inhomogeneous KdV equation. When a hidden moving target (which in many ways resembles a soliton) encounters the natural "probe" soliton solution of the KdV equation, a strong resonance phenomenon results that makes the location and motion of the target apparent. The soliton resonance method amplifies the moving-target signal while suppressing the noise. The method can be a very effective tool for locating and identifying diverse, highly dynamic targets with ill-defined characteristics in a noisy environment. The soliton resonance method for the detection of moving targets was developed in one and two dimensions. Computer simulations proved that the method can be used for detection of single point-like targets moving with constant velocities and accelerations in 1D and along straight lines or curved trajectories in 2D. The method also allows estimation of the kinematic characteristics of moving targets, and reconstruction of target trajectories in 2D. The method could be very effective for target detection in the presence of clutter and in the case of target obscurations.
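
    The inhomogeneous KdV setup above can be written out explicitly; the normalization below is one standard choice (the abstract does not give the paper's exact coefficients), with the sensed data stream entering as the forcing term F(x,t):

```latex
% forced (inhomogeneous) KdV; F(x,t) carries the sensor data stream
u_t + 6\,u\,u_x + u_{xxx} = F(x,t)

% "probe" soliton of the homogeneous equation (F = 0), with wavenumber \kappa,
% amplitude 2\kappa^2 and speed 4\kappa^2:
u(x,t) = 2\kappa^2 \operatorname{sech}^2\!\left(\kappa\,(x - 4\kappa^2 t - x_0)\right)
```

    Resonance is then expected when the forcing (the hidden target's signature) moves with kinematics matching a probe soliton of this family.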

  1. Toward a national animal telemetry network for aquatic observations in the United States

    USGS Publications Warehouse

    Block, Barbara A.; Holbrook, Christopher; Simmons, Samantha E; Holland, Kim N; Ault, Jerald S.; Costa, Daniel P.; Mate, Bruce R; Seitz, Andrew C.; Arendt, Michael D.; Payne, John; Mahmoudi, Behzad; Moore, Peter L.; Price, James; J. J. Levenson,; Wilson, Doug; Kochevar, Randall E

    2016-01-01

    Animal telemetry is the science of elucidating the movements and behavior of animals in relation to their environment or habitat. Here, we focus on telemetry of aquatic species (marine mammals, sharks, fish, sea birds and turtles) and so are concerned with animal movements and behavior as they move through and above the world's oceans, coastal rivers, estuaries and the Great Lakes. Animal telemetry devices ("tags") yield detailed data regarding animal responses to the coupled ocean–atmosphere and physical environment through which they are moving. Animal telemetry has matured, and we describe a developing US Animal Telemetry Network (ATN) observing system that monitors aquatic life on a range of temporal and spatial scales and that will yield both short- and long-term benefits, fill oceanographic observing and knowledge gaps, and advance many of the U.S. National Ocean Policy Priority Objectives. The ATN has the potential to create a huge impact for the ocean observing activities undertaken by the U.S. Integrated Ocean Observing System (IOOS) and to become a model for establishing additional national-level telemetry networks worldwide.

  2. IntelliTable: Inclusively-Designed Furniture with Robotic Capabilities.

    PubMed

    Prescott, Tony J; Conran, Sebastian; Mitchinson, Ben; Cudd, Peter

    2017-01-01

    IntelliTable is a new proof-of-principle assistive technology system with robotic capabilities, in the form of an elegant universal cantilever table able to move around by itself or under user control. We describe the design and current capabilities of the table and the human-centered design methodology used in its development and initial evaluation. The IntelliTable study has delivered a robotic platform, programmed by a smartphone, that can navigate around a typical home or care environment, avoiding obstacles and positioning itself at the user's command. It can also be configured to navigate itself to pre-ordained positions within an environment using ceiling tracking, responsive optical guidance and object-based sonar navigation.

  3. Variational analysis of temperature and moisture advection in a severe storm environment

    NASA Technical Reports Server (NTRS)

    Mcfarland, M. J.; Sasaki, Y. K.

    1977-01-01

    Horizontal wind components, potential temperature, and mixing ratio fields associated with a severe storm environment in the south central United States were objectively analyzed from synoptic upper-air observations with a nonhomogeneous anisotropic weighting function. The particular case study discussed here is the tornado-producing squall line which moved through eastern Oklahoma on 26 May 1973. The synoptic situation which preceded squall-line development was cyclogenesis and frontogenesis in the lee-of-mountain trough, which produced a well-defined surface dry line (or dew-point front) and a pronounced mid-level dry-air intrusion. It is shown that the intrusion was also characterized by warm air, with a lapse rate approaching the dry adiabatic.

  4. A Comparison of Measures of Boldness and Their Relationships to Survival in Young Fish

    PubMed Central

    White, James R.; Meekan, Mark G.; McCormick, Mark I.; Ferrari, Maud C. O.

    2013-01-01

    Boldness is the propensity of an animal to engage in risky behavior. Many variations of novel-object or novel-environment tests have been used to quantify the boldness of animals, although the relationship between test outcomes has rarely been investigated. Furthermore, the relationship of outcomes to any ecological aspect of fitness is generally assumed, rather than measured directly. Our study is the first to compare how the outcomes of the same test of boldness differ among observers and how different tests of boldness relate to the survival of individuals in the field. Newly-metamorphosed lemon damselfish, Pomacentrus moluccensis, were placed onto replicate patches of natural habitat. Individual behavior was quantified using four tests (composed of a total of 12 different measures of behavior): latency to enter a novel environment, activity in a novel environment, and reactions to threatening and benign novel objects. After behavior was quantified, survival was monitored for two days during which time fish were exposed to natural predators. Variation among observers was low for most of the 12 measures, except distance moved and the threat test (reaction to probe thrust), which displayed unacceptable amounts of inter-observer variation. Overall, the results of the behavioral tests suggested that novel environment and novel object tests quantified similar behaviors, yet these behavioral measures were not interchangeable. Multiple measures of behavior within the context of novel environment or object tests were the most robust way to assess boldness and these measures have a complex relationship with survivorship of young fish in the field. Body size and distance ventured from shelter were the only variables that had a direct and positive relationship with survival. PMID:23874804

  5. A comparison of measures of boldness and their relationships to survival in young fish.

    PubMed

    White, James R; Meekan, Mark G; McCormick, Mark I; Ferrari, Maud C O

    2013-01-01

    Boldness is the propensity of an animal to engage in risky behavior. Many variations of novel-object or novel-environment tests have been used to quantify the boldness of animals, although the relationship between test outcomes has rarely been investigated. Furthermore, the relationship of outcomes to any ecological aspect of fitness is generally assumed, rather than measured directly. Our study is the first to compare how the outcomes of the same test of boldness differ among observers and how different tests of boldness relate to the survival of individuals in the field. Newly-metamorphosed lemon damselfish, Pomacentrus moluccensis, were placed onto replicate patches of natural habitat. Individual behavior was quantified using four tests (composed of a total of 12 different measures of behavior): latency to enter a novel environment, activity in a novel environment, and reactions to threatening and benign novel objects. After behavior was quantified, survival was monitored for two days during which time fish were exposed to natural predators. Variation among observers was low for most of the 12 measures, except distance moved and the threat test (reaction to probe thrust), which displayed unacceptable amounts of inter-observer variation. Overall, the results of the behavioral tests suggested that novel environment and novel object tests quantified similar behaviors, yet these behavioral measures were not interchangeable. Multiple measures of behavior within the context of novel environment or object tests were the most robust way to assess boldness and these measures have a complex relationship with survivorship of young fish in the field. Body size and distance ventured from shelter were the only variables that had a direct and positive relationship with survival.

  6. Detection and 3d Modelling of Vehicles from Terrestrial Stereo Image Pairs

    NASA Astrophysics Data System (ADS)

    Coenen, M.; Rottensteiner, F.; Heipke, C.

    2017-05-01

    The detection and pose estimation of vehicles plays an important role for automated and autonomous moving objects, e.g. in autonomous driving environments. We tackle that problem on the basis of street-level stereo images obtained from a moving vehicle. Processing every stereo pair individually, our approach is divided into two subsequent steps: vehicle detection and modelling. For detection, we make use of the 3D stereo information and incorporate geometric assumptions on vehicle-inherent properties in a generic 3D object detection applied first. By combining our generic detection approach with a state-of-the-art vehicle detector, we achieve satisfying detection results, with completeness and correctness values above 86%. By fitting an object-specific vehicle model to the vehicle detections, we reconstruct the vehicles in 3D and derive pose estimates as well as shape parameters for each vehicle. To deal with the intra-class variability of vehicles, our model fitting uses a deformable 3D active shape model learned from 3D CAD vehicle data. While we achieve encouraging values of up to 67.2% for correct position estimates, we face larger problems concerning orientation estimation. The evaluation uses the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012).

  7. Modeling and query the uncertainty of network constrained moving objects based on RFID data

    NASA Astrophysics Data System (ADS)

    Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie

    2007-06-01

    The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly and raises problems of frequent updates and privacy. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information: they are cheaper, require fewer updates, and intrude less on privacy. They detect the id of an object and the time when the moving object passes a node of the network, but they do not detect the object's exact movement inside an edge, which leads to uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data thus becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; processing includes four steps: spatial filter, spatial refinement, temporal filter, and probability calculation. Finally, experiments based on simulated data study the performance of the index; the precision and recall of the result set are defined, and how the query arguments affect them is discussed.
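
    The four-step query processing can be sketched on a toy data model in which each RFID reading is an (object id, node id, timestamp) triple. The probability step here is a crude placeholder for the paper's uncertainty model, and the data layout is invented for the sketch.

```python
from collections import Counter

def range_query(readings, node_xy, region, t_window):
    """Imprecise spatio-temporal range query over RFID readings.
    readings: list of (object_id, node_id, timestamp) triples."""
    x0, y0, x1, y1 = region
    t0, t1 = t_window
    # 1. spatial filter: keep readings at nodes inside the query rectangle
    hits = [r for r in readings
            if x0 <= node_xy[r[1]][0] <= x1 and y0 <= node_xy[r[1]][1] <= y1]
    # 2. spatial refinement: trivial here since nodes are points; with edge
    #    geometry one would clip each candidate edge against the region
    # 3. temporal filter: keep readings inside the time window
    hits = [r for r in hits if t0 <= r[2] <= t1]
    # 4. probability calculation: fraction of an object's readings that fall
    #    in the query window (a stand-in for the paper's probability model)
    total = Counter(r[0] for r in readings)
    inside = Counter(r[0] for r in hits)
    return {obj: inside[obj] / total[obj] for obj in inside}
```

    An answer is then a set of objects with membership probabilities rather than a crisp set, which is what makes precision and recall meaningful evaluation measures here.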

  8. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.

  9. Contextual effects on motion perception and smooth pursuit eye movements.

    PubMed

    Spering, Miriam; Gegenfurtner, Karl R

    2008-08-15

    Smooth pursuit eye movements are continuous, slow rotations of the eyes that allow us to follow the motion of a visual object of interest. These movements are closely related to sensory inputs from the visual motion processing system. To track a moving object in the natural environment, its motion first has to be segregated from the motion signals provided by surrounding stimuli. Here, we review experiments on the effect of the visual context on motion processing with a focus on the relationship between motion perception and smooth pursuit eye movements. While perception and pursuit are closely linked, we show that they can behave quite distinctly when required by the visual context.

  10. Integrating obstacle avoidance, global path planning, visual cue detection, and landmark triangulation in a mobile robot

    NASA Astrophysics Data System (ADS)

    Kortenkamp, David; Huber, Marcus J.; Congdon, Clare B.; Huffman, Scott B.; Bidlack, Clint R.; Cohen, Charles J.; Koss, Frank V.; Raschke, Ulrich; Weymouth, Terry E.

    1993-05-01

    This paper describes the design and implementation of an integrated system for combining obstacle avoidance, path planning, landmark detection and position triangulation. Such an integrated system allows the robot to move from place to place in an environment, avoiding obstacles and planning its way out of traps, while maintaining its position and orientation using distinctive landmarks. The task the robot performs is to search a 22 m X 22 m arena for 10 distinctive objects, visiting each object in turn. This same task was recently performed by a dozen different robots at a competition in which the robot described in this paper finished first.

  11. Evaluation of ADCP apparent bed load velocity in a large sand-bed river: Moving versus stationary boat conditions

    USGS Publications Warehouse

    Jamieson, E.C.; Rennie, C.D.; Jacobson, R.B.; Townsend, R.D.

    2011-01-01

    Detailed mapping of bathymetry and apparent bed load velocity using a boat-mounted acoustic Doppler current profiler (ADCP) was carried out along a 388-m section of the lower Missouri River near Columbia, Missouri. Sampling transects (moving boat) were completed at 5- and 20-m spacing along the study section. Stationary (fixed-boat) measurements were made by maintaining constant boat position over a target point where the position of the boat did not deviate more than 3 m in any direction. For each transect and stationary measurement, apparent bed load velocity (vb) was estimated using ADCP bottom tracking data and high precision real-time kinematic (RTK) global positioning system (GPS). The principal objectives of this research are to (1) determine whether boat motion introduces a bias in apparent bed load velocity measurements; and (2) evaluate the reliability of ADCP bed velocity measurements for a range of sediment transport environments. Results indicate that both high transport (vb>0.6 m/s) and moving-boat conditions (for both high and low transport environments) increase the relative variability in estimates of mean bed velocity. Despite this, the spatially dense single-transect measurements were capable of producing detailed bed velocity maps that correspond closely with the expected pattern of sediment transport over large dunes. © 2011 American Society of Civil Engineers.

  12. Racial Differences in the Effects of Neighborhood Disadvantage on Residential Mobility in Later Life

    PubMed Central

    Riley, Alicia; Cagney, Kathleen A.

    2016-01-01

    Objectives: Past research on the residential mobility of older adults has focused on individual-level factors and life course events. Less attention has been paid to the role of the residential environment in explaining residential mobility in older adults. We sought to understand whether neighborhood disadvantage had predictive utility in explaining residential relocation patterns, and whether associations differed between Whites and non-Whites. Method: Data are from the National Social Life, Health and Aging Project, a nationally representative sample of community-dwelling older adults. Neighborhoods were defined at the census tract level. Local movers (different census tract, same county) and distant movers (different county) were compared with stayers. Results: After adjusting for individual-level factors, neighborhood disadvantage increased the likelihood of a local move, regardless of race/ethnicity. For non-Whites, higher neighborhood disadvantage decreased the likelihood of a distant move. Among local movers, Blacks and Latinos were less likely to improve neighborhood quality than Whites. Discussion: Neighborhood disadvantage may promote local mobility by undermining person–environment fit. Racial differences in access to better neighborhoods persist in later life. Future research should explore how older adults optimize person–environment fit in the face of neighborhood disadvantage when the possibility of relocation to a better neighborhood may be restricted. PMID:27257227

  13. Effect of a moving optical environment on the subjective median.

    DOT National Transportation Integrated Search

    1971-04-01

    The placement of a point in the median vertical plane under the influence of a moving optical environment was tested in 12 subjects. It was found that the median plane was displaced in the same direction as the movement of the visual environment when...

  14. A Rotatable Quality Control Phantom for Evaluating the Performance of Flat Panel Detectors in Imaging Moving Objects.

    PubMed

    Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki

    2016-02-01

    As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD systems, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for easy evaluation of the performance (spatial resolution and dynamic range) of FPDs in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it enables easy visual evaluation of imaging performance (spatial resolution and dynamic range).

  15. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1991-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated into the framework of an on-line motion planning algorithm to achieve collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and by the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
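
    The filtering-and-prediction part can be illustrated with a scalar constant-velocity Kalman filter over the measured minimum distance; the predicted collision time is when the filtered distance reaches zero. This is a simplified stand-in for the paper's formulation, with invented noise parameters.

```python
class DistanceFilter:
    """Constant-velocity Kalman filter on the scalar minimum distance.
    State: [distance, closing rate]; q/r are process/measurement noise."""

    def __init__(self, d0, q=1e-3, r=0.1):
        self.x = [d0, 0.0]
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        d, v = self.x
        d_pred = d + v * dt                         # predict state
        P = self.P                                  # predict covariance (F P F^T + Q)
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        k0 = P00 / (P00 + self.r)                   # Kalman gain for H = [1, 0]
        k1 = P10 / (P00 + self.r)
        innov = z - d_pred                          # measurement update
        self.x = [d_pred + k0 * innov, v + k1 * innov]
        self.P = [[(1 - k0) * P00, (1 - k0) * P01],
                  [P10 - k1 * P00, P11 - k1 * P01]]
        return self.x

    def time_to_collision(self):
        d, v = self.x
        return d / -v if v < 0 else float("inf")    # no collision if not closing
```

    Feeding the filter a steadily shrinking distance makes it estimate the closing rate and hence the collision time, which is step (2) of the abstract.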

  16. Putting a Twist on Inquiry

    ERIC Educational Resources Information Center

    Kemp, Andrew

    2005-01-01

    Everything moves. Even apparently stationary objects such as houses, roads, or mountains are moving because they sit on a spinning planet orbiting the Sun. Not surprisingly, the concepts of motion and the forces that affect moving objects are an integral part of the middle school science curriculum. However, middle school students are often taught…

  17. The Relativistic Wave Vector

    ERIC Educational Resources Information Center

    Houlrik, Jens Madsen

    2009-01-01

    The Lorentz transformation applies directly to the kinematics of moving particles viewed as geometric points. Wave propagation, on the other hand, involves moving planes which are extended objects defined by simultaneity. By treating a plane wave as a geometric object moving at the phase velocity, novel results are obtained that illustrate the…

  18. ALLFlight: detection of moving objects in IR and ladar images

    NASA Astrophysics Data System (ADS)

    Doehler, H.-U.; Peinecke, Niklas; Lueken, Thomas; Schmerwitz, Sven

    2013-05-01

    Supporting a helicopter pilot during landing and takeoff in a degraded visual environment (DVE) is one of the challenges within DLR's project ALLFlight (Assisted Low Level Flight and Landing on Unprepared Landing Sites). Different types of sensors (TV, infrared, mmW radar and laser radar) are mounted onto DLR's research helicopter FHS (flying helicopter simulator) to gather different sensor data of the surrounding world. A high-performance computer cluster architecture acquires and fuses all the information into one single comprehensive description of the outside situation. While both TV and IR cameras deliver images at frame rates of 25 Hz or 30 Hz, ladar and mmW radar provide georeferenced sensor data at only 2 Hz or even less. Therefore, it takes several seconds to detect or even track potential moving obstacle candidates in mmW or ladar sequences. Especially if the helicopter is flying at higher speed, it is very important to minimize the detection time of obstacles in order to initiate a timely re-planning of the helicopter's mission. Applying feature extraction algorithms to IR images, in combination with algorithms that fuse the extracted features with ladar data, can decrease the detection time appreciably. Based on real data from flight tests, the paper describes the applied feature extraction methods for moving object detection, as well as data fusion techniques for combining features from TV/IR and ladar data.

  19. A study on characteristics of values in migration to the countryside

    NASA Astrophysics Data System (ADS)

    Ohashi, Sachiko; Yuhara, Asako; Kaminaga, Nozomi; Takamori, Shuji

    National land policy should be examined in consideration of today's diversification of values and lifestyles. Therefore, in order to implement infrastructure management with great satisfaction, we carried out a survey and analyses of the values of people who prefer the countryside. The survey was carried out in Ono-machi, Fukushima; Nichinan-cho, Tottori; and Tarumizu-shi, Kagoshima. The results reveal the values of people who moved to the countryside. When considering a move, they place great value on the quality of work and spare time. When choosing where to move, they consider the living environment and local communities in particular. After moving to the countryside, family, the living environment and relations with neighbors become important to them. Therefore, a high-quality workplace in the neighborhood, a superior living environment and good relations with neighbors are important. In addition, providing information on the living environment and local communities of countryside towns helps people move into those towns.

  20. Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.

    PubMed

    Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro

    2018-03-01

    In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented reality (AR) objects can affect how humans interpret and reason about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to the premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.

  1. Context effects on smooth pursuit and manual interception of a disappearing target.

    PubMed

    Kreyenmeier, Philipp; Fooken, Jolande; Spering, Miriam

    2017-07-01

    In our natural environment, we interact with moving objects that are surrounded by richly textured, dynamic visual contexts. Yet most laboratory studies on vision and movement show visual objects in front of uniform gray backgrounds. Context effects on eye movements have been widely studied, but it is less well known how visual contexts affect hand movements. Here we ask whether eye and hand movements integrate motion signals from target and context similarly or differently, and whether context effects on eye and hand change over time. We developed a track-intercept task requiring participants to track the initial launch of a moving object ("ball") with smooth pursuit eye movements. The ball disappeared after a brief presentation, and participants had to intercept it in a designated "hit zone." In two experiments ( n = 18 human observers each), the ball was shown in front of a uniform or a textured background that either was stationary or moved along with the target. Eye and hand movement latencies and speeds were similarly affected by the visual context, but eye and hand interception (eye position at time of interception, and hand interception timing error) did not differ significantly between context conditions. Eye and hand interception timing errors were strongly correlated on a trial-by-trial basis across all context conditions, highlighting the close relation between these responses in manual interception tasks. Our results indicate that visual contexts similarly affect eye and hand movements but that these effects may be short-lasting, affecting movement trajectories more than movement end points. NEW & NOTEWORTHY In a novel track-intercept paradigm, human observers tracked a briefly shown object moving across a textured, dynamic context and intercepted it with their finger after it had disappeared. 
Context motion significantly affected eye and hand movement latency and speed, but not interception accuracy; eye and hand position at interception were correlated on a trial-by-trial basis. Visual context effects may be short-lasting, affecting movement trajectories more than movement end points. Copyright © 2017 the American Physiological Society.

  2. Motion detection, novelty filtering, and target tracking using an interferometric technique with a GaAs phase conjugate mirror

    NASA Technical Reports Server (NTRS)

    Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)

    1990-01-01

    A method and apparatus is disclosed for detecting and tracking moving objects in a noise environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; the observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time slower than the formation time are not observable and do not clutter the output image view.

  3. Common world model for unmanned systems

    NASA Astrophysics Data System (ADS)

    Dean, Robert Michael S.

    2013-05-01

    The Robotic Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using metric, semantic, and symbolic information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric algorithms, symbolic cognitive algorithms, and new computational nodes formed by the combination of these disciplines. The Common World Model must understand how these objects relate to each other. Our world model includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and histories, we track information which enables the robot to reason about and adapt its performance using Meta-Cognition and Machine Learning principles. The world model includes models of how aspects of the environment behave, which enable prediction of future world states. To manage complexity, we adopted a phased implementation approach to the world model. We discuss the design of "Phase 1" of this world model and its interfaces by tracing perception data through the system from the source to the meta-cognitive layers provided by ACT-R and SS-RICS. We close with lessons learned from implementation and how the design relates to Open Architecture.

  4. Simultaneous Detection and Tracking of Pedestrian from Panoramic Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Xiao, Wen; Vallet, Bruno; Schindler, Konrad; Paparoditis, Nicolas

    2016-06-01

    Pedestrian traffic flow estimation is essential for public place design and construction planning. Traditional data collection by human investigation is tedious, inefficient and expensive. Panoramic laser scanners, e.g. the Velodyne HDL-64E, which scan their surroundings repetitively at a high frequency, have been increasingly used for 3D object tracking. In this paper, a simultaneous detection and tracking (SDAT) method is proposed for precise and automatic pedestrian trajectory recovery. First, the dynamic environment is detected using two different methods, Nearest-point and Max-distance. Then, all the points on moving objects are transferred into a space-time (x, y, t) coordinate system. The pedestrian detection and tracking amounts to assigning the points belonging to pedestrians to continuous trajectories in space-time. We formulate the point assignment task as an energy function which incorporates the point evidence, the number of trajectories, pedestrian shape and motion. A low-energy trajectory will explain the point observations well and have a plausible trend and length. The method inherently filters out points from other moving objects and false detections. The energy function is solved by a two-step optimization process: tracklet detection in a short temporal window, and global tracklet association through the whole time span. Results demonstrate that the proposed method can automatically recover the pedestrians' trajectories with accurate positions and few false detections and mismatches.
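    The first of the two optimization steps can be caricatured in a few lines. The greedy linker below is a simplified stand-in for the paper's energy minimization; the gating distance and the (t, x, y) detections are invented for illustration.

```python
import math

def build_tracklets(detections, max_step=1.0):
    """Greedily link per-frame detections (t, x, y) into tracklets.

    Each detection joins the tracklet whose last point lies in the
    previous frame and is nearest in space, provided it is within
    max_step; otherwise it starts a new tracklet.  Isolated points
    (false detections) end up as length-1 tracklets.
    """
    tracklets = []
    for t, x, y in sorted(detections):
        best, best_d = None, max_step
        for tr in tracklets:
            lt, lx, ly = tr[-1]
            if lt == t - 1:
                d = math.hypot(x - lx, y - ly)
                if d <= best_d:
                    best, best_d = tr, d
        if best is None:
            tracklets.append([(t, x, y)])
        else:
            best.append((t, x, y))
    return tracklets

# Two pedestrians walking in parallel plus one spurious detection:
dets = [(0, 0.0, 0.0), (0, 5.0, 0.0), (1, 0.4, 0.0),
        (1, 5.4, 0.0), (2, 0.8, 0.0), (2, 5.8, 0.0), (2, 9.0, 9.0)]
print([len(tr) for tr in build_tracklets(dets)])  # [3, 3, 1]
```

    The paper's global association step would then merge tracklets across longer time spans; here short, implausible tracklets are simply left to be filtered by length.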

  5. Shadow detection of moving objects based on multisource information in Internet of things

    NASA Astrophysics Data System (ADS)

    Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian

    2017-05-01

    Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of Things, and the detection of a moving target's shadow is an important step within it: the accuracy of shadow detection directly affects the object detection results. Surveying the variety of shadow detection methods, we find that using only one feature cannot produce accurate detection results. We therefore present a new method for shadow detection that combines colour information, optical invariance and texture features. Through comprehensive analysis of the detection results from these three kinds of information, shadows are effectively determined. Combining the advantages of the various methods, the approach achieves good results in experiments.
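    The comprehensive analysis of the three cues can be sketched as a pixel-wise majority vote; this is a simplification of the paper's analysis, and the mask shapes and values are illustrative.

```python
def fuse_shadow_masks(colour_mask, optical_mask, texture_mask):
    """Pixel-wise majority vote over three boolean shadow masks.

    A minimal stand-in for combining the colour, optical-invariance
    and texture cues: a pixel is labelled shadow when at least two of
    the three cues agree, so no single unreliable cue decides alone.
    """
    return [[(c + o + t) >= 2
             for c, o, t in zip(c_row, o_row, t_row)]
            for c_row, o_row, t_row in zip(colour_mask, optical_mask,
                                           texture_mask)]

# One image row; each cue flags different pixels as shadow:
colour  = [[1, 1, 0]]
optical = [[1, 0, 0]]
texture = [[0, 1, 1]]
print(fuse_shadow_masks(colour, optical, texture))  # [[True, True, False]]
```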

  6. In search of rules behind environmental framing; the case of head pitch.

    PubMed

    Wilson, Gwendoline Ixia; Norman, Brad; Walker, James; Williams, Hannah J; Holton, M D; Clarke, D; Wilson, Rory P

    2015-01-01

    Whether, and how, animals move requires them to assess their environment to determine the most appropriate action and trajectory, although the precise way the environment is scanned has been little studied. We hypothesized that head attitude, which effectively frames the environment for the eyes, and the way it changes over time, would be modulated by the environment. To test this, we used a head-mounted device (Human-Interfaced Personal Observation platform - HIPOP) on people moving through three different environments; a botanical garden ('green' space), a reef ('blue' space), and a featureless corridor, to examine if head movement in the vertical axis differed between environments. Template matching was used to identify and quantify distinct behaviours. The data on head pitch from all subjects and environments over time showed essentially continuous clear waveforms with varying amplitude and wavelength. There were three stylised behaviours consisting of smooth, regular peaks and troughs in head pitch angle and variable length fixations during which the head pitch remained constant. These three behaviours accounted for ca. 40 % of the total time, with irregular head pitch changes accounting for the rest. There were differences in rates of manifestation of behaviour according to environment as well as environmentally different head pitch values of peaks, troughs and fixations. Finally, although there was considerable variation in head pitch angles, the peak and trough values bounded most of the variation in the fixation pitch values. It is suggested that the constant waveforms in head pitch serve to inform people about their environment, providing a scanning mechanism. Particular emphasis to certain sectors is manifest within the peak and trough limits and these appear modulated by the distribution of the points where fixation, interpreted as being due to objects of interest, occurs. 
This behaviour explains how animals allocate processing resources to the environment and shows promise for movement studies attempting to elucidate which parts of the environment affect movement trajectories.

  7. Localization and tracking of moving objects in two-dimensional space by echolocation.

    PubMed

    Matsuo, Ikuo

    2013-02-01

    Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
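    A minimal sketch of the two-receiver geometry, assuming each receiving point yields a direct range estimate from its echo delay (the positions and ranges below are illustrative, not measurement data from the paper): the object lies at an intersection of the two range circles, which leaves a mirror ambiguity about the receiver baseline.

```python
import math

def locate_2d(p0, r0, p1, r1):
    """Intersect two range circles to localize a reflector in 2-D.

    p0, p1 are receiver positions; r0, r1 the estimated ranges.
    Returns both candidate positions (mirror images about the
    baseline joining the receivers).
    """
    (x0, y0), (x1, y1) = p0, p1
    d = math.hypot(x1 - x0, y1 - y0)
    a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from p0 to the chord
    h = math.sqrt(max(0.0, r0**2 - a**2))  # half the chord length
    xm = x0 + a * (x1 - x0) / d            # foot of the chord on the baseline
    ym = y0 + a * (y1 - y0) / d
    dx, dy = (y1 - y0) / d, -(x1 - x0) / d  # unit normal to the baseline
    return (xm + h * dx, ym + h * dy), (xm - h * dx, ym - h * dy)

# Receivers 4 units apart, equal ranges of sqrt(8):
r = math.sqrt(8.0)
print(locate_2d((0.0, 0.0), r, (4.0, 0.0), r))  # approx (2, -2) and (2, 2)
```

    Tracking the rotating pole over successive emissions resolves the ambiguity, since only one of the two candidates moves consistently.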

  8. JACK - ANTHROPOMETRIC MODELING SYSTEM FOR SILICON GRAPHICS WORKSTATIONS

    NASA Technical Reports Server (NTRS)

    Smith, B.

    1994-01-01

    JACK is an interactive graphics program developed at the University of Pennsylvania that displays and manipulates articulated geometric figures. JACK is typically used to observe how a human mannequin interacts with its environment and what effects body types will have upon the performance of a task in a simulated environment. Any environment can be created, and any number of mannequins can be placed anywhere in that environment. JACK includes facilities to construct limited geometric objects, position figures, perform a variety of analyses on the figures, describe the motion of the figures and specify lighting and surface property information for rendering high quality images. JACK is supplied with a variety of body types pre-defined and known to the system. There are both male and female bodies, ranging from the 5th to the 95th percentile, based on NASA Standard 3000. Each mannequin is fully articulated and reflects the joint limitations of a normal human. JACK is an editor for manipulating previously defined objects known as "Peabody" objects. Used to describe the figures as well as the internal data structure for representing them, Peabody is a language with a powerful and flexible mechanism for representing connectivity between objects, both the joints between individual segments within a figure and arbitrary connections between different figures. Peabody objects are generally comprised of several individual figures, each one a collection of segments. Each segment has a geometry represented by PSURF files that consist of polygons or curved surface patches. Although JACK does not have the capability to create new objects, objects may be created by other geometric modeling programs and then translated into the PSURF format. Environment files are a collection of figures and attributes that may be dynamically moved under the control of an animation file. 
The animation facilities allow the user to create a sequence of commands that duplicate the movements of a human figure in an environment. Integrated into JACK is a set of vision tools that allow predictions about visibility and legibility. The program is capable of displaying environment perspectives corresponding to what the mannequin would see while in the environment, indicating potential problems with occlusion and visibility. It is also possible to display view cones emanating from the figure's eyes, indicating field of view. Another feature projects the environment onto retina coordinates which gives clues regarding visual angles, acuity and occlusion by the biological blind spots. A retina editor makes it possible to draw onto the retina and project that into 3-dimensional space. Another facility, Reach, causes the mannequin to move a specific portion of its anatomy to a chosen point in space. The Reach facility helps in analyzing problems associated with operator size and other constraints. The 17-segment torso makes it possible to set a figure into realistic postures, simulating human postures closely. The JACK application software is written in C-language for Silicon Graphics workstations running IRIX versions 4.0.5 or higher and is available only in executable form. Since JACK is a copyrighted program (copyright 1991 University of Pennsylvania), this executable may not be redistributed. The recommended minimum hardware configuration for running the executable includes a floating-point accelerator, an 8-megabyte program memory, a high resolution (1280 x 1024) graphics card, and at least 50Mb of free disk space. JACK's data files take up millions of bytes of storage space, so additional disk space is highly recommended. The standard distribution medium for JACK is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. JACK was originally developed in 1988. Jack v4.8 was released for distribution through COSMIC in 1993.

  9. Optimal Detection Range of RFID Tag for RFID-based Positioning System Using the k-NN Algorithm.

    PubMed

    Han, Soohee; Kim, Junghwan; Park, Choung-Hwan; Yoon, Hee-Cheon; Heo, Joon

    2009-01-01

    Positioning technology to track a moving object is an important and essential component of ubiquitous computing environments and applications. An RFID-based positioning system using the k-nearest neighbor (k-NN) algorithm can determine the position of a moving reader from observed reference data. In this study, the optimal detection range of an RFID-based positioning system was determined on the principle that tag spacing can be derived from the detection range. It was assumed that reference tags without signal strength information are regularly distributed in 1-, 2- and 3-dimensional spaces. The optimal detection range was determined, through analytical and numerical approaches, to be 125% of the tag-spacing distance in 1-dimensional space; through numerical approaches, it was 134% in 2-dimensional space and 143% in 3-dimensional space.
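    The k-NN estimate with binary (detected / not detected) reference tags can be sketched simply: with no signal strength to weight by, every detected tag counts equally, so the reader's position estimate is the centroid of the detected tags' known coordinates. The grid layout and tag IDs below are invented for illustration.

```python
def estimate_position(tags, detected_ids):
    """Estimate a moving reader's position from detected reference tags.

    tags maps a tag ID to its known (x, y) coordinates; detected_ids
    lists the tags currently within the reader's detection range.
    With equal weights this is k-NN with k equal to the number of
    detections: the centroid of the detected tags.
    """
    pts = [tags[i] for i in detected_ids]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

# Reference tags on a 1-metre grid (illustrative layout):
tags = {(i, j): (float(i), float(j)) for i in range(4) for j in range(4)}

# A reader midway between four tags detects exactly those four:
print(estimate_position(tags, [(1, 1), (1, 2), (2, 1), (2, 2)]))  # (1.5, 1.5)
```

    The paper's question is then how large the detection range should be relative to the grid spacing so that the set of detected tags pins the reader down as tightly as possible.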

  10. Come Together, Right Now: Dynamic Overwriting of an Object’s History through Common Fate

    PubMed Central

    Luria, Roy; Vogel, Edward K.

    2015-01-01

    The objects around us constantly move and interact, and the perceptual system needs to monitor these interactions on-line and update each object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which an object's initial representation as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm in which the objects either started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations on-line using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects) during the objects' movement and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as separate, even after a Gestalt proximity cue (when the objects "met" and remained stationary at the same position). Only a strong common-fate Gestalt cue (when the objects not only met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object's initial representation plays an important role and can override even powerful grouping cues. PMID:24564468

  11. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. When backing up a vehicle, however, the camera mounted on the vehicle moves with the vehicle's movement, producing ego-motion in the background. This results in mixed motion in the scene and makes it difficult to distinguish between the target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detections. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after detection. We also describe an FPGA implementation of the algorithm. The target application of this work is a road vehicle's rear-view camera system. PMID:26712761
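    The pre-detection step can be sketched on a single scanline (a toy ego-motion model with invented pixel values; the paper's hardware implementation works on full frames): estimate a single global shift for the background, warp the previous frame by it, and only then take the frame difference.

```python
def estimate_ego_shift(prev, curr, max_shift=3):
    """Estimate a global horizontal shift between two scanlines.

    Tries integer shifts and keeps the one minimizing the mean
    absolute difference; the dominant background motion seen by a
    back-up camera is assumed here to be this single translation.
    """
    w = len(prev)
    def mad(s):
        diffs = [abs(curr[x] - prev[x - s])
                 for x in range(w) if 0 <= x - s < w]
        return sum(diffs) / len(diffs)
    return min(range(-max_shift, max_shift + 1), key=mad)

def compensated_mask(prev, curr, shift, thresh=10):
    """Foreground mask after cancelling the estimated ego-motion."""
    w = len(prev)
    return [0 <= x - shift < w and abs(curr[x] - prev[x - shift]) > thresh
            for x in range(w)]

prev = [12, 7, 90, 33, 5, 61]   # one scanline of the previous frame
curr = [8, 12, 7, 90, 100, 5]   # background shifted right by 1; object at x=4
s = estimate_ego_shift(prev, curr)
print(s)                                # 1
print(compensated_mask(prev, curr, s))  # object survives at x=4 only
```

    After compensation, the remaining large differences are candidate moving objects and can be handed to an ordinary fixed-viewpoint detector, which is the role of the paper's pre- and post-processing steps.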

  12. System and method for moving a probe to follow movements of tissue

    NASA Technical Reports Server (NTRS)

    Feldstein, C.; Andrews, T. W.; Crawford, D. W.; Cole, M. A. (Inventor)

    1981-01-01

    An apparatus is described for moving a probe that engages moving living tissue such as a heart or an artery that is penetrated by the probe, which moves the probe in synchronism with the tissue to maintain the probe at a constant location with respect to the tissue. The apparatus includes a servo positioner which moves a servo member to maintain a constant distance from a sensed object while applying very little force to the sensed object, and a follower having a stirrup at one end resting on a surface of the living tissue and another end carrying a sensed object adjacent to the servo member. A probe holder has one end mounted on the servo member and another end which holds the probe.

  13. Moving object detection and tracking in videos through turbulent medium

    NASA Astrophysics Data System (ADS)

    Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.

    2016-06-01

    This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because turbulence causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm separates real motions from turbulence-induced motions using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames of the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparison with an earlier method confirms that the proposed approach provides more effective tracking of the targets.
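    The two-level thresholding idea can be sketched in one dimension (the thresholds and difference values below are illustrative, not from the paper): weak differences seed candidate regions, but only regions that also contain a strong difference, which turbulence-induced jitter rarely produces, are kept as real motion.

```python
def detect_real_motion(diff, low=5, high=40):
    """Two-level thresholding on an absolute frame-difference row.

    Pixels above the low threshold form candidate runs; a run is kept
    as real motion only if at least one of its pixels also exceeds
    the high threshold.  Returns (start, end) index pairs of kept runs.
    """
    regions, run = [], []
    for i, d in enumerate(diff + [0]):       # sentinel closes the last run
        if d > low:
            run.append(i)
        elif run:
            if max(diff[j] for j in run) > high:
                regions.append((run[0], run[-1]))
            run = []
    return regions

# Turbulence jitter (small differences) next to a genuine moving object:
diff = [0, 8, 9, 0, 0, 12, 80, 75, 10, 0]
print(detect_real_motion(diff))  # [(5, 8)]
```

    The weak run at indices 1-2 is rejected because it never crosses the high threshold, while the object run at indices 5-8 survives intact, including its low-contrast fringe pixels.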

  14. Operator-coached machine vision for space telerobotics

    NASA Technical Reports Server (NTRS)

    Bon, Bruce; Wilcox, Brian; Litwin, Todd; Gennery, Donald B.

    1991-01-01

    A prototype system for interactive object modeling has been developed and tested. The goal of this effort has been to create a system that demonstrates the feasibility of highly interactive, operator-coached machine vision in a realistic task environment, and to provide a testbed for experimentation with various modes of operator interaction. The purpose of such a system is to use human perception where machine vision is difficult, i.e., to segment the scene into objects and to designate their features, and to use machine vision to overcome limitations of human perception, i.e., for accurate measurement of object geometry. The system captures and displays video images from a number of cameras, allows the operator to designate a polyhedral object one edge at a time by moving a 3-D cursor within these images, performs a least-squares fit of the designated edges to edge data detected with a modified Sobel operator, and combines the edges thus detected to form a wire-frame object model that matches the Sobel data.

  15. Online phase measuring profilometry for rectilinear moving object by image correction

    NASA Astrophysics Data System (ADS)

    Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin

    2015-11-01

    In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. When the object is moving rectilinearly online, the size and pixel-position differences of the object between the captured deformed patterns do not meet the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
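    The correction pipeline hinges on mapping pixels through a 3x3 homography (oblique view to aerial view), followed by a translation that aligns the object's feature points across frames. A minimal sketch; the matrix below is an invented pure translation, not calibration data from the paper.

```python
import numpy as np

def reproject(points, H):
    """Map (x, y) pixel coordinates through a 3x3 homography H.

    Points are lifted to homogeneous coordinates, multiplied by H,
    and de-homogenised; the oblique-to-aerial correction and the
    subsequent alignment translation are both maps of this form.
    """
    pts = np.hstack([np.asarray(points, float),
                     np.ones((len(points), 1))])
    out = pts @ H.T
    return out[:, :2] / out[:, 2:3]

# A pure translation written as a homography, as used to shift the
# aerial-view pattern so the moving object appears stationary:
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -2.0],
              [0.0, 0.0, 1.0]])
print(reproject([(0.0, 0.0), (10.0, 4.0)], H))  # [[5. -2.] [15. 2.]]
```

    For the oblique-to-aerial step the bottom row of H is no longer (0, 0, 1), which is why the de-homogenisation division is needed.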

  16. A novel vehicle tracking algorithm based on mean shift and active contour model in complex environment

    NASA Astrophysics Data System (ADS)

    Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen

    2017-06-01

    Vehicle tracking technology is currently one of the most active research topics in machine vision and an important part of intelligent transportation systems. In both theory and technology, however, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets must be detected in real time and their positions calculated accurately in order to judge their motives. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. In more complex environments, however, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the mean-shift algorithm, with an image segmentation algorithm, the active contour model, to obtain the outlines of objects during tracking and to automatically handle topology changes. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.
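    The mean-shift half of the combination can be sketched as a flat-kernel iteration toward a local density mode (the sample points, start and radius are invented; in actual tracking the samples are colour-histogram-weighted pixels rather than raw 2-D points):

```python
import math

def mean_shift_mode(points, start, radius=2.0, iters=20):
    """Flat-kernel mean-shift iteration toward a local density mode.

    Repeatedly replaces the current estimate with the centroid of
    the sample points inside a fixed radius; far-away samples never
    enter the window and so cannot drag the estimate away.
    """
    x, y = start
    for _ in range(iters):
        near = [(px, py) for px, py in points
                if math.hypot(px - x, py - y) <= radius]
        if not near:
            break
        x = sum(p[0] for p in near) / len(near)
        y = sum(p[1] for p in near) / len(near)
    return x, y

# Samples clustered around (5, 5) with a far-away distractor:
pts = [(4.8, 5.1), (5.2, 4.9), (5.0, 5.2), (4.9, 4.8), (20.0, 20.0)]
print(mean_shift_mode(pts, start=(4.0, 4.0)))  # approx (4.975, 5.0)
```

    The paper's contribution is to feed the mean-shift position estimate to an active contour, which then recovers the object's actual outline instead of reporting only a rectangle or circle.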

  17. Moving object localization using optical flow for pedestrian detection from a moving vehicle.

    PubMed

    Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun

    2014-01-01

    This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flow after compensating for the ego-motion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell is then tracked from the current frame to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the corresponding cells in the consecutive images, so that conformed optical flows are extracted. Regions of moving objects are detected as transformed regions that differ from the previously registered background. Morphological processing is applied to obtain candidate human regions. To recognize the object, HOG features are extracted from the candidate regions and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input to the linear SVM to classify the given input as pedestrian or non-pedestrian. The proposed method was tested on a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement compared with the original HOG using the ETHZ pedestrian dataset.
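    The affine estimation from (at least) three corresponding cells can be sketched with NumPy; the cell centres and the synthetic motion below are invented for illustration, and with more than three correspondences the same call gives a least-squares fit.

```python
import numpy as np

def affine_from_cells(src, dst):
    """Estimate a 2-D affine transform from cell correspondences.

    Solves [x y 1] @ M = [u v] in the least-squares sense for the
    3x2 parameter matrix M, which models the camera's ego-motion
    between two consecutive frames.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M                              # rows correspond to x, y, 1

# Cell centres before/after a synthetic ego-motion: a 10% zoom
# followed by a shift of (2, -1):
src = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dst = [(2.0, -1.0), (13.0, -1.0), (2.0, 10.0)]
M = affine_from_cells(src, dst)
print(np.array([5.0, 5.0, 1.0]) @ M)  # approx [7.5 4.5]
```

    Pixels whose measured flow disagrees with this predicted background motion are the ones kept as moving object candidates.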

  18. Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets

    ERIC Educational Resources Information Center

    Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus

    2012-01-01

    Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…

  19. A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller

    DTIC Science & Technology

    2017-03-01

    A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller. Jong Hwan Ko...Atlanta, GA 30332 USA. Contact Author Email: jonghwan.ko@gatech.edu. Abstract: This paper presents a low-power wireless image sensor node for...present a low-power wireless image sensor node with a noise-robust moving object detection and region-of-interest based rate controller [Fig. 1]. The

  20. [Metrological analysis of measuring systems in testing an anticipatory reaction to the position of a moving object].

    PubMed

    Aksiuta, E F; Ostashev, A V; Sergeev, E V; Aksiuta, V E

    1997-01-01

    The methods of the information (entropy) error theory were used to make a metrological analysis of well-known commercial measuring systems for timing an anticipatory reaction (AR) to the position of a moving object, which are based on electromechanical, gas-discharge, and electronic principles. The required measurement accuracy was ascertained to be achieved only by systems based on the electronic principle of moving-object simulation and AR measurement.

  1. Make the First Move: How Infants Learn about Self-Propelled Objects

    ERIC Educational Resources Information Center

    Rakison, David H.

    2006-01-01

    In 3 experiments, the author investigated 16- to 20-month-old infants' attention to dynamic and static parts in learning about self-propelled objects. In Experiment 1, infants were habituated to simple noncausal events in which a geometric figure with a single moving part started to move without physical contact from an identical geometric figure…

  2. Comparison of bilateral whisker movement in freely exploring and head-fixed adult rats.

    PubMed

    Sellien, Heike; Eshenroder, Donna S; Ebner, Ford F

    2005-09-01

    Rats move their whiskers actively during tactile exploration of their environment. The whiskers emanate from densely innervated whisker follicles that are moved individually by intrinsic facial muscles and as a group by extrinsic muscles. Several descriptions of whisker movements in normal adult rats during unrestrained exploration indicate that rats move their whiskers in the 6-9 Hz range when exploring a new environment. The rate can be elevated to nearly 20 Hz for brief episodes just prior to making a behavioural decision. The present studies were undertaken to compare whisker dynamics in head-restrained and freely moving rats with symmetrical or asymmetrical numbers of whiskers on the two sides of their face and to provide a description of differences in whisker use in exploring rats after trimming all but two whiskers on one side of the face, a condition that has been shown to induce robust cortical plasticity. Head-fixed rats were trained to protract their whiskers against a contact detector with sufficient force to trigger a chocolate milk reward. Whisker movements were analyzed, and the results from head-fixed animals were compared with free-running animals using trials taken during their initial exploration of novel objects that blocked the rat's progress down an elevated runway. The results show that symmetrical whisker movements are modulated both by the nature of the task and the number of whiskers available for exploration. Rats can change their whisker movements when the sensitivity (threshold) of a contact detector is raised or lowered, or when the nature of the task requires bilateral input from the whiskers. We show that trimming some, but not all whiskers on one side of the face modifies the synchrony of whisker movement compared to untrimmed or symmetrically trimmed whiskers.

  3. A new user-assisted segmentation and tracking technique for an object-based video editing system

    NASA Astrophysics Data System (ADS)

    Yu, Hong Y.; Hong, Sung-Hoon; Lee, Mike M.; Choi, Jae-Gark

    2004-03-01

    This paper presents a semi-automatic segmentation method which can be used to generate a video object plane (VOP) for object-based coding schemes and multimedia authoring environments. Semi-automatic segmentation can be considered a user-assisted segmentation technique. A user initially marks objects of interest around the object boundaries, and the user-guided, selected objects are then continuously separated from the unselected areas through time evolution in the image sequences. The proposed segmentation method consists of two processing steps: partially manual intra-frame segmentation and fully automatic inter-frame segmentation. The intra-frame segmentation incorporates user assistance to define the meaningful, complete visual object of interest to be segmented and decides the precise object boundary. The inter-frame segmentation involves boundary and region tracking to obtain temporal coherence of the moving object based on the object boundary information of the previous frame. The proposed method shows stable, efficient results suitable for many digital video applications such as multimedia content authoring, content-based coding, and indexing. Based on these results, we have developed an object-based video editing system with several convenient editing functions.

  4. Hyper-X Program Status

    NASA Technical Reports Server (NTRS)

    McClinton, Charles R.; Rausch, Vincent L.; Sitz, Joel; Reukauf, Paul

    2001-01-01

    This paper provides an overview of the objectives and status of the Hyper-X program, which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7. Extensive risk reduction activities for the first flight are completed, and non-recurring design activities for the Mach 10 X-43 (3rd flight) are nearing completion. The Mach 7 flight of the X-43, in the spring of 2001, will be the first flight of an airframe-integrated scramjet-powered vehicle. The Hyper-X program continues to plan follow-on activities to ensure an orderly continuation of hypersonic technology development through flight research.

  5. Hyper-X Program Status

    NASA Technical Reports Server (NTRS)

    McClinton, Charles R.; Reubush, David E.; Sitz, Joel; Reukauf, Paul

    2001-01-01

    This paper provides an overview of the objectives and status of the Hyper-X program, which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7. Extensive risk reduction activities for the first flight are completed, and non-recurring design activities for the Mach 10 X-43 (third flight) are nearing completion. The Mach 7 flight of the X-43, in the spring of 2001, will be the first flight of an airframe-integrated scramjet-powered vehicle. The Hyper-X program continues to plan follow-on activities to ensure an orderly continuation of hypersonic technology development through flight research.

  6. The NASA Hyper-X Program

    NASA Technical Reports Server (NTRS)

    Freeman, Delman C., Jr.; Reubush, Daivd E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry

    1997-01-01

    This paper provides an overview of NASA's Hyper-X Program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule and status. Substantial experimental database and concept validation have been completed. The program is currently concentrating on the first, Mach 7, vehicle development, verification and validation in preparation for wind-tunnel testing in 1998 and flight testing in 1999. Parallel to this effort the Mach 5 and 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a database for validation of design methods once flight test data are available.

  7. SKITTER/implement mechanical interface

    NASA Technical Reports Server (NTRS)

    Cash, John Wilson, III; Cone, Alan E.; Garolera, Frank J.; German, David; Lindabury, David Peter; Luckado, Marshall Cleveland; Murphey, Craig; Rowell, John Bryan; Wilkinson, Brad

    1988-01-01

    SKITTER (Spacial Kinematic Inertial Translatory Tripod Extremity Robot) is a three-legged transport vehicle designed to perform under the unique environment of the moon. The objective of this project was to design a mechanical interface for SKITTER. This mechanical latching interface will allow SKITTER to use a series of implements such as drills, cranes, etc., and perform different tasks on the moon. The design emphasized versatility and detachability; that is, the interface design is the same for all implements, and connection and detachment are simple. After consideration of many alternatives, a system of three identical latches at each of the three interface points was chosen. The latching mechanism satisfies the design constraints because it facilitates connection and detachment. Also, the moving parts are protected from the dusty environment by housing plates.

  8. Interactions Between Convective Storms and Their Environment

    NASA Technical Reports Server (NTRS)

    Maddox, R. A.; Hoxit, L. R.; Chappell, C. F.

    1979-01-01

    The ways in which intense convective storms interact with their environment are considered for a number of specific severe storm situations. A physical model of subcloud wind fields and vertical wind profiles was developed to explain the often observed intensification of convective storms that move along or across thermal boundaries. A number of special, unusually dense, data sets were used to substantiate features of the model. GOES imagery was used in conjunction with objectively analyzed surface wind data to develop a nowcast technique that might be used to identify specific storm cells likely to become tornadic. It was shown that circulations associated with organized meso-alpha and meso-beta scale storm complexes may, on occasion, strongly modify tropospheric thermodynamic patterns and flow fields.

  9. Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.

    PubMed

    Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P

    2016-04-01

    Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested whether this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object at several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.

  10. Emerald: an object-based language for distributed programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, N.C.

    1987-01-01

    Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects, ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.
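    Emerald's location-independent invocation can be suggested with a toy proxy in Python (a loose analogue for intuition only; Emerald's actual syntax, type system, and migration machinery are quite different, and all names below are hypothetical):

```python
class Registry:
    """Maps object ids to their current host; a stand-in for the runtime's
    knowledge of where each mobile object lives."""
    def __init__(self):
        self.hosts = {}                       # oid -> (host_name, object)

    def register(self, oid, host, obj):
        self.hosts[oid] = (host, obj)

    def move(self, oid, new_host):
        _, obj = self.hosts[oid]
        self.hosts[oid] = (new_host, obj)     # object migrates; id unchanged

class Proxy:
    """Clients invoke through the proxy; the call is forwarded to wherever
    the object currently lives, so invocation syntax never changes."""
    def __init__(self, oid, registry):
        self._oid, self._reg = oid, registry

    def __getattr__(self, name):
        _, obj = self._reg.hosts[self._oid]   # locate, then forward
        return getattr(obj, name)

class Counter:
    def __init__(self): self.n = 0
    def incr(self):
        self.n += 1
        return self.n
```

The point of the sketch is that moving the object between hosts does not change how the client calls it.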

  11. Map showing inventory and regional susceptibility for Holocene debris flows, and related fast-moving landslides in the conterminous United States

    USGS Publications Warehouse

    Brabb, Earl E.; Colgan, Joseph P.; Best, Timothy C.

    2000-01-01

    Introduction Debris flows, debris avalanches, mud flows and lahars are fast-moving landslides that occur in a wide variety of environments throughout the world. They are particularly dangerous to life and property because they move quickly, destroy objects in their paths, and often strike without warning. This map represents a significant effort to compile the locations of known debris flows in the United States and predict where future flows might occur. The files 'dfipoint.e00' and 'dfipoly.e00' contain the locations of over 6600 debris flows from published and unpublished sources. The locations are referenced by numbers that correspond to entries in a bibliography, which is part of the pamphlet 'mf2329pamphlet.pdf'. The areas of possible future debris flows are shown in the file 'susceptibility.tif', which is a georeferenced TIFF file that can be opened in an image editing program or imported into a GIS system like ARC/INFO. All other databases are in ARC/INFO export (.e00) format.

  12. Conveyor with rotary airlock apparatus

    DOEpatents

    Kronbert, J.W.

    1993-01-01

    This invention comprises an apparatus for transferring objects from a first region to a second region, the first and second regions having differing atmospheric environments. The apparatus includes a shell having an entrance and an exit, a conveyor belt running through the shell from the entrance to the exit, and a horizontally mounted `revolving door` with at least four vanes revolving about its axis. The inner surface of the shell and the top surface of the conveyor belt act as opposing walls of the `revolving door`. The conveyor belt dips as it passes under but against the revolving vanes so as not to interfere with them but to engage at least two of the vanes and define thereby a moving chamber. Preferably, the conveyor belt has ridges or grooves on its surface that engage the edges of the vanes and act to rotate the vane assembly. Conduits are provided that communicate with the interior of the shell and allow the adjustment of the atmosphere of the moving chamber or recovery of constituents of the atmosphere of the first region from the moving chamber before they escape to the second region.

  13. Conveyor with rotary airlock apparatus

    DOEpatents

    Kronberg, James W.

    1995-01-01

    An apparatus for transferring objects from a first region to a second region, the first and second regions having differing atmospheric environments. The apparatus includes a shell having an entrance and an exit, a conveyor belt running through the shell from the entrance to the exit, and a horizontally mounted "revolving door" with at least four vanes revolving about its axis. The inner surface of the shell and the top surface of the conveyor belt act as opposing walls of the "revolving door." The conveyor belt dips as it passes under but against the revolving vanes so as not to interfere with them but to engage at least two of the vanes and define thereby a moving chamber. Preferably, the conveyor belt has ridges or grooves on its surface that engage the edges of the vanes and act to rotate the vane assembly. Conduits are provided that communicate with the interior of the shell and allow the adjustment of the atmosphere of the moving chamber or recovery of constituents of the atmosphere of the first region from the moving chamber before they escape to the second region.

  14. A freely-moving monkey treadmill model

    NASA Astrophysics Data System (ADS)

    Foster, Justin D.; Nuyujukian, Paul; Freifeld, Oren; Gao, Hua; Walker, Ross; Ryu, Stephen I.; Meng, Teresa H.; Murmann, Boris; Black, Michael J.; Shenoy, Krishna V.

    2014-08-01

    Objective. Motor neuroscience and brain-machine interface (BMI) design is based on examining how the brain controls voluntary movement, typically by recording neural activity and behavior from animal models. Recording technologies used with these animal models have traditionally limited the range of behaviors that can be studied, and thus the generality of science and engineering research. We aim to design a freely-moving animal model using neural and behavioral recording technologies that do not constrain movement. Approach. We have established a freely-moving rhesus monkey model employing technology that transmits neural activity from an intracortical array using a head-mounted device and records behavior through computer vision using markerless motion capture. We demonstrate the flexibility and utility of this new monkey model, including the first recordings from motor cortex while rhesus monkeys walk quadrupedally on a treadmill. Main results. Using this monkey model, we show that multi-unit threshold-crossing neural activity encodes the phase of walking and that the average firing rate of the threshold crossings covaries with the speed of individual steps. On a population level, we find that neural state-space trajectories of walking at different speeds have similar rotational dynamics in some dimensions that evolve at the step rate of walking, yet robustly separate by speed in other state-space dimensions. Significance. Freely-moving animal models may allow neuroscientists to examine a wider range of behaviors and can provide a flexible experimental paradigm for examining the neural mechanisms that underlie movement generation across behaviors and environments. For BMIs, freely-moving animal models have the potential to aid prosthetic design by examining how neural encoding changes with posture, environment and other real-world context changes. Understanding this new realm of behavior in more naturalistic settings is essential for overall progress of basic motor neuroscience and for the successful translation of BMIs to people with paralysis.

  15. A Quick Look at Supernova 1987A

    NASA Image and Video Library

    2017-02-24

    On February 24, 1987, astronomers in the southern hemisphere saw a supernova in the Large Magellanic Cloud. This new object was dubbed “Supernova 1987A” and was the brightest stellar explosion seen in over four centuries. Chandra has observed Supernova 1987A many times and the X-ray data reveal important information about this object. X-rays from Chandra have shown the expanding blast wave from the original explosion slamming into a ring of material expelled by the star before it exploded. The latest Chandra data reveal the blast wave has moved beyond the ring into a region that astronomers do not know much about. These observations can help astronomers learn how supernovas impact their environments and affect future generations of stars and planets.

  16. Moving In, Moving Through, and Moving Out: The Transitional Experiences of Foster Youth College Students

    ERIC Educational Resources Information Center

    Gamez, Sara I.

    2017-01-01

    The purpose of this qualitative study was to explore the transitional experiences of foster youth college students. The study explored how foster youth experienced moving into, moving through, and moving out of the college environment and what resources and strategies they used to thrive during their college transitions. In addition, this study…

  17. Distance estimation and collision prediction for on-line robotic motion planning

    NASA Technical Reports Server (NTRS)

    Kyriakopoulos, K. J.; Saridis, G. N.

    1992-01-01

    An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an on-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, in which the information about the objects is assumed to be certain, is examined. L(1) or L(infinity) norms are used to represent distance, and the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: first, filtering of the distance between the robot and the moving object at the present time; second, prediction of the minimum distance in the future in order to predict the collision time.
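    For intuition, the minimum distance between two disjoint convex shapes can also be computed geometrically; the brute-force Euclidean sketch below is a stand-in for the paper's linear-programming formulation with L(1)/L(infinity) norms (2D polygons only, assumed disjoint):

```python
import numpy as np

def point_seg_dist(p, a, b):
    """Euclidean distance from point p to segment ab."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def polygon_distance(P, Q):
    """Minimum distance between two disjoint convex polygons given as
    ordered vertex arrays. For disjoint convex polygons the minimum is
    attained at a vertex/edge (or vertex/vertex) pair, so checking
    vertex-to-edge distances in both directions suffices."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    d = np.inf
    for poly_a, poly_b in ((P, Q), (Q, P)):
        for p in poly_a:
            for i in range(len(poly_b)):
                a, b = poly_b[i], poly_b[(i + 1) % len(poly_b)]
                d = min(d, point_seg_dist(p, a, b))
    return d
```

The LP formulation in the paper generalizes this to 3D polyhedra and allows the distance norm to be swapped.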

  18. Acquisition of stereo panoramas for display in VR environments

    NASA Astrophysics Data System (ADS)

    Ainsworth, Richard A.; Sandin, Daniel J.; Schulze, Jurgen P.; Prudhomme, Andrew; DeFanti, Thomas A.; Srinivasan, Madhusudhanan

    2011-03-01

    Virtual reality systems are an excellent environment for stereo panorama displays. The acquisition and display methods described here combine high-resolution photography with surround vision and full stereo view in an immersive environment. This combination provides photographic stereo-panoramas for a variety of VR displays, including the StarCAVE, NexCAVE, and CORNEA. The zero parallax point used in conventional panorama photography is also the center of horizontal and vertical rotation when creating photographs for stereo panoramas. The two photographically created images are displayed on a cylinder or a sphere. The radius from the viewer to the image is set at approximately 20 feet, or at the object of major interest. A full stereo view is presented in all directions. The interocular distance, as seen from the viewer's perspective, displaces the two spherical images horizontally. This presents correct stereo separation in whatever direction the viewer is looking, even up and down. Objects at infinity will move with the viewer, contributing to an immersive experience. Stereo panoramas created with this acquisition and display technique can be applied without modification to a large array of VR devices having different screen arrangements and different VR libraries.

  19. An artificial reality environment for remote factory control and monitoring

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.

  20. Searching for moving objects in HSC-SSP: Pipeline and preliminary results

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Tung; Lin, Hsing-Wen; Alexandersen, Mike; Lehner, Matthew J.; Wang, Shiang-Yu; Wang, Jen-Hung; Yoshida, Fumi; Komiyama, Yutaka; Miyazaki, Satoshi

    2018-01-01

    The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is currently the deepest wide-field survey in progress. The 8.2 m aperture of the Subaru telescope is very powerful for detecting faint/small moving objects, including near-Earth objects, asteroids, centaurs, and Trans-Neptunian objects (TNOs). However, the cadence and dithering pattern of the HSC-SSP are not designed for detecting moving objects, making it difficult to do so systematically. In this paper, we introduce a new pipeline for detecting moving objects (specifically TNOs) in a non-dedicated survey. The HSC-SSP catalogs are sliced into HEALPix partitions. Then, stationary detections and false positives are removed with a machine-learning algorithm to produce a list of moving-object candidates. An orbit-linking algorithm and visual inspection are executed to generate the final list of detected TNOs. Preliminary results of a search for TNOs using this new pipeline on data from the first HSC-SSP data release (2014 March to 2015 November) yield 231 TNO/Centaur candidates. The bright candidates with Hr < 7.7 and i > 5 show that the best-fitting slope of a single power law to the absolute magnitude distribution is 0.77. The g - r color distribution of hot HSC-SSP TNOs indicates a bluer peak at g - r = 0.9, consistent with the bluer peak of the bimodal color distribution in the literature.
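    The stationary-rejection idea can be illustrated with simple positional matching across epochs (a much-simplified stand-in for the pipeline's machine-learning classifier; the 1-arcsecond match radius is an assumed value):

```python
import numpy as np

def flag_stationary(ra, dec, epoch, radius=1.0 / 3600):
    """Mark detections that re-occur at the same sky position in a
    *different* epoch as stationary sources. Coordinates in degrees;
    `radius` is the positional match tolerance."""
    ra, dec, epoch = map(np.asarray, (ra, dec, epoch))
    stationary = np.zeros(len(ra), bool)
    for i in range(len(ra)):
        # small-angle separation, with RA compressed by cos(dec)
        sep = np.hypot((ra - ra[i]) * np.cos(np.radians(dec[i])), dec - dec[i])
        if np.any((sep < radius) & (epoch != epoch[i])):
            stationary[i] = True
    return stationary
```

Anything left unflagged has moved between epochs and goes on to orbit linking.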

  1. Method and apparatus for hybrid position/force control of multi-arm cooperating robots

    NASA Technical Reports Server (NTRS)

    Hayati, Samad A. (Inventor)

    1989-01-01

    Two or more robotic arms having end effectors rigidly attached to an object to be moved are disclosed. A hybrid position/force control system is provided for driving each of the robotic arms. The object to be moved is represented as having a total mass that consists of the actual mass of the object plus the mass of the movable arms rigidly attached to it. The arms are driven by the hybrid control system so as to assure that each arm shares in the position/force applied to the object. The burden of actuation is shared by the arms in a non-conflicting way as they independently control the position of, and force upon, a designated point on the object.
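    One common way to make each arm "share the burden" without conflict is a minimum-norm distribution of the desired object wrench through the grasp-matrix pseudoinverse; the planar sketch below illustrates that idea only and is not the patent's hybrid position/force control law:

```python
import numpy as np

def share_forces(contacts, wrench):
    """Minimum-norm split of a desired planar object wrench [fx, fy, tau]
    among arms rigidly attached at `contacts` (list of (rx, ry) points
    relative to the object frame). Returns one (fx, fy) per arm."""
    G_cols = []
    for (rx, ry) in contacts:
        # a force (fx, fy) applied at r contributes torque rx*fy - ry*fx
        G_cols.append([1.0, 0.0, -ry])   # column for this arm's fx
        G_cols.append([0.0, 1.0, rx])    # column for this arm's fy
    G = np.array(G_cols).T               # 3 x 2N grasp matrix
    f = np.linalg.pinv(G) @ np.asarray(wrench, float)
    return f.reshape(-1, 2)
```

With two arms at (-1, 0) and (1, 0) and a pure upward force of 2 on the object, the pseudoinverse splits the load evenly, one unit of upward force per arm.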

  2. A Paradigm Shift to Protect Environment

    EPA Science Inventory

    Attempts to protect the environment have primarily been remedial with the intent to move away from environmental problems. Congressional agendas have provided specific acts related to pollution of air, water, and toxic wastes. These acts provide the regulatory powers to move away...

  3. Tracking Algorithm of Multiple Pedestrians Based on Particle Filters in Video Sequences

    PubMed Central

    Liu, Yun; Wang, Chuanxu; Zhang, Shujun; Cui, Xuehong

    2016-01-01

    Pedestrian tracking is a critical problem in the field of computer vision. Particle filters have been proven very useful in pedestrian tracking for nonlinear and non-Gaussian estimation problems. However, pedestrian tracking in complex environments still faces many problems due to changes in pedestrian posture and scale, moving backgrounds, mutual occlusion, and the appearance and disappearance of pedestrians. To surmount these difficulties, this paper presents a tracking algorithm for multiple pedestrians based on particle filters in video sequences. The algorithm acquires confidence values for the object and the background by extracting prior knowledge, thus achieving multi-pedestrian detection; it incorporates color and texture features into the particle filter to obtain better observations and then automatically adjusts the weight of each feature according to the current tracking environment. During tracking, the algorithm handles severe occlusion to prevent the drift and loss caused by object occlusion, and associates detection results with particle states to provide a discrimination method for object disappearance and emergence, thus achieving robust tracking of multiple pedestrians. Experimental verification and analysis on video sequences demonstrate that the proposed algorithm improves tracking performance and obtains better tracking results. PMID:27847514
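    The predict / weight / resample loop at the core of any bootstrap particle filter can be sketched in one dimension (a generic skeleton with a random-walk motion model and a Gaussian likelihood; the paper's color-and-texture observation model and occlusion handling are not reproduced):

```python
import numpy as np

def particle_filter(observations, n=500, motion_std=1.0, obs_std=2.0, seed=0):
    """Bootstrap particle filter for a 1D position. Returns the posterior
    mean estimate at each observation."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(observations[0], obs_std, n)   # initialize
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, motion_std, n)       # predict
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2) + 1e-12
        w /= w.sum()                                      # weight by likelihood
        estimates.append(float(np.sum(w * particles)))    # posterior mean
        idx = rng.choice(n, n, p=w)                       # resample
        particles = particles[idx]
    return estimates
```

In the paper this skeleton is extended with multiple features per particle and per-feature weight adaptation.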

  4. Research on moving object detection based on frog's eyes

    NASA Astrophysics Data System (ADS)

    Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan

    2008-12-01

    Based on the information-processing mechanism of the frog's eye, this paper discusses a bionic detection technology suited to object information processing modeled on frog vision. First, a bionic detection theory imitating frog vision is established: a parallel processing mechanism comprising capture and preprocessing of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a specific color or shape; experiments indicate that detection results can be obtained even against a cluttered background. A moving-object detection electronic model imitating biological vision based on the frog's eye is established. In this system the analog video signal is first digitized, and the digital signal is then separated in parallel by an FPGA. In the parallel processing stage, video information is captured, processed, and displayed simultaneously, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can cover a wider visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that it can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology that imitates biological vision.

  5. Motion compensation for structured light sensors

    NASA Astrophysics Data System (ADS)

    Biswas, Debjani; Mertz, Christoph

    2015-05-01

    In order for structured light methods to work outside, the strong background from the sun needs to be suppressed. This can be done with bandpass filters, fast shutters, and background subtraction. In general this last method requires the sensor system to be stationary during data taking. The contribution of this paper is a method that compensates for the motion when the system is moving. The key idea is to use video stabilization techniques that work even when the illuminator is switched on and off from one frame to the next. We used OpenCV functions and modules to implement a robust and efficient method, evaluated it under various conditions, and tested it on a moving robot outdoors. We demonstrate that one can not only do 3D reconstruction under strong ambient light, but also observe optical properties of the objects in the environment.
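    As one concrete stabilization primitive, a pure translation between two frames can be recovered by FFT phase correlation so the background frame can be re-aligned before subtraction (a sketch of one possible approach; the paper's OpenCV-based method is feature-based and handles more general motion):

```python
import numpy as np

def estimate_shift(ref, cur):
    """Estimate the integer translation between two frames via phase
    correlation. Returns (dy, dx) such that np.roll(cur, (dy, dx),
    axis=(0, 1)) re-aligns `cur` with `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(cur))
    F /= np.abs(F) + 1e-12                 # keep phase only
    corr = np.abs(np.fft.ifft2(F))         # impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:             # wrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Because only the phase of the cross-power spectrum is kept, the peak stays sharp even when overall brightness changes, which is helpful when the illuminator toggles between frames.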

  6. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    NASA Astrophysics Data System (ADS)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

    The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates with low-altitude flight, a limited payload, and low-accuracy onboard sensors. Accordingly, a method is developed to determine the location of a ground moving target imaged from the air using a monocular camera mounted on the MAV. The method eliminates the need for a terrain database (elevation maps) and for altimeters providing the MAV's and target's altitudes. Instead, it requires only the MAV flight status provided by the vehicle's inherent onboard navigation system, which comprises an inertial measurement unit (IMU) and a global positioning system (GPS) receiver. The key is obtaining accurate altitude information for the ground moving target. First, an optical flow method extracts static background feature points. Within a local region around the target in the current image, features lying on the same plane as the target are extracted and retained as aiding features. Then an inverse-velocity method, integrated with the aircraft state, computes the locations of these points. The target's altitude, computed from the positions of these aiding features, is combined with the aircraft state and image coordinates to geo-locate the target. Meanwhile, a Bayesian estimation framework is employed to suppress noise from the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution that estimates the aircraft states and the locations of the aiding features defining the moving target's local environment. Second, an unscented transformation (UT) determines the estimated mean and covariance of the target location from the aircraft states and aiding-feature locations, and passes them to a Kalman filter (KF) tracking the moving target. Experimental results show that the method can geo-locate the moving target instantaneously from a single operator click and achieves 15 m accuracy for an MAV flying 200 m above the ground.
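    The unscented transformation step, propagating a mean and covariance through a nonlinear geo-location function via sigma points, can be illustrated generically. This is a textbook UT sketch under assumed parameters (the weighting constant `kappa` and function names are not from the paper):

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=1.0):
    """Basic unscented transform: propagate (mean, cov) through a
    nonlinear function f using 2n+1 sigma points."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)  # matrix square root
    sigma = [mean]
    for i in range(n):
        sigma.append(mean + S[:, i])
        sigma.append(mean - S[:, i])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    ys = np.array([f(s) for s in sigma])
    mean_y = w @ ys
    cov_y = sum(wi * np.outer(y - mean_y, y - mean_y) for wi, y in zip(w, ys))
    return mean_y, cov_y
```

    For a linear function the UT reproduces the exact mean and covariance; its value in this application is that it remains accurate for the nonlinear camera-to-ground projection without computing Jacobians.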

  7. Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space

    PubMed Central

    Chen, Min; Hashimoto, Koichi

    2017-01-01

    Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system measuring their motion states. To do this, in this paper, we build a vision system to detect unknown fast moving objects within a given space, calculating their motion parameters represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that accounts for appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189

  8. Early Knowledge of Object Motion: Continuity and Inertia.

    ERIC Educational Resources Information Center

    Spelke, Elizabeth; And Others

    1994-01-01

    Investigated whether infants infer that a hidden, freely moving object will move continuously and smoothly. Six- to 10- month olds inferred that the object's path would be connected and unobstructed, in accord with continuity. Younger infants did not infer this, in accord with inertia. At 8 and 10 months, knowledge of inertia emerged but remained…

  9. Visual Sensor Based Abnormal Event Detection with Moving Shadow Removal in Home Healthcare Applications

    PubMed Central

    Lee, Young-Sook; Chung, Wan-Young

    2012-01-01

    Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is therefore an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensors, using shape-feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities, such as forward falls, backward falls, and falls to the side, from normal activities. PMID:22368486

  10. Security Applications Of Computer Motion Detection

    NASA Astrophysics Data System (ADS)

    Bernat, Andrew P.; Nelan, Joseph; Riter, Stephen; Frankel, Harry

    1987-05-01

    An important area of application of computer vision is the detection of human motion in security systems. This paper describes the development of a computer vision system which can detect and track human movement across the international border between the United States and Mexico. Because of the wide range of environmental conditions, this application represents a stringent test of computer vision algorithms for motion detection and object identification. The desired output of this vision system is accurate, real-time locations for individual aliens and accurate statistical data as to the frequency of illegal border crossings. Because most detection and tracking routines assume rigid body motion, which is not characteristic of humans, new algorithms capable of reliable operation in our application are required. Furthermore, most current detection and tracking algorithms assume a uniform background against which motion is viewed; the urban environment along the US-Mexican border is anything but uniform. The system works in three stages: motion detection, object tracking and object identification. We have implemented motion detection using simple frame differencing, maximum likelihood estimation, and mean and median tests, and are evaluating them for accuracy and computational efficiency. Due to the complex nature of the urban environment (background and foreground objects consisting of buildings, vegetation, vehicles, wind-blown debris, animals, etc.), motion detection alone is not sufficiently accurate. Object tracking and identification are handled by an expert system which takes shape, location and trajectory information as input and determines if the moving object is indeed representative of an illegal border crossing.
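    One of the motion detection tests mentioned, a median-based background test, can be sketched as follows. This is a generic illustration rather than the system's implementation; the MAD-based threshold and the parameter values are assumptions:

```python
import numpy as np

def motion_mask(history, current, k=4.0):
    """Median background test: flag pixels of `current` that deviate
    from the pixelwise median of a frame history by more than k times
    a robust spread estimate (MAD, floored to avoid zero thresholds)."""
    stack = np.stack(history).astype(float)
    background = np.median(stack, axis=0)
    spread = np.median(np.abs(stack - background), axis=0) + 1.0
    return np.abs(current - background) > k * spread
```

    Compared with simple frame differencing, the median background is robust to occasional transient pixels in the history, which matters in the cluttered scenes the paper describes.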

  11. Fabrication and wireless micromanipulation of magnetic-biocompatible microrobots using microencapsulation for microrobotics and microfluidics applications.

    PubMed

    Li, Hui; Zhang, Jinyong; Zhang, Nannan; Kershaw, Joe; Wang, Lei

    2016-12-01

    It is important to fabricate biocompatible and chemical-resistant microstructures that can be powered and controlled without a tether in a fluid environment, for applications where contamination must be avoided, such as cell manipulation, and applications where connecting the power source to the actuator would be cumbersome, such as targeted delivery of chemicals. In this work, a novel fabrication method is described that encapsulates a magnetic composite into pure SU-8 structures, making truly microscale ferromagnetic microrobots biocompatible and chemically resistant. The microrobots were developed using simple multilayer photolithography, which allows mass production, and were actuated contact-free by an external magnetic field to perform micromanipulations of micro-objects. The microrobots were actuated to move along a preplanned path to transport a glass microsphere at an average speed of approximately 1.1 mm/s, and can be operated to rotate, aim at targets and collect objects.

  12. Measuring Drag Force in Newtonian Liquids

    NASA Astrophysics Data System (ADS)

    Mawhinney, Matthew T.; O'Donnell, Mary Kate; Fingerut, Jonathan; Habdas, Piotr

    2012-03-01

    The experiments described in this paper have two goals. The first goal is to show how students can perform simple but fundamental measurements of objects moving through simple liquids (such as water, oil, or honey). In doing so, students can verify Stokes' law, which governs the motion of spheres through simple liquids, and see how it fails at higher object speeds. Moreover, they can qualitatively study fluid patterns at various object speeds (Reynolds numbers). The second goal is to help students make connections between physics and other sciences. Specifically, the results of these experiments can be used to help students understand the role of fluid motion in determining the shape of an organism, or where it lives. At Saint Joseph's University we have developed these experiments as part of a new course in biomechanics where both physics and biology undergraduate students bring their ideas and expertise to enrich a shared learning environment.
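    The quantities students measure here follow directly from Stokes' law. A small helper script (the functions and example values below are illustrative, not part of the paper) computes the drag force, the terminal settling speed from the force balance, and the Reynolds number used to check the law's validity:

```python
import math

def stokes_drag(eta, r, v):
    """Stokes drag on a sphere: F = 6*pi*eta*r*v (valid for Re << 1)."""
    return 6 * math.pi * eta * r * v

def reynolds(rho_f, v, r, eta):
    """Reynolds number Re = rho_f * v * (2r) / eta for a sphere."""
    return rho_f * v * 2 * r / eta

def terminal_velocity(rho_s, rho_f, r, eta, g=9.81):
    """Terminal speed where weight minus buoyancy balances Stokes drag:
    v = 2 * (rho_s - rho_f) * g * r**2 / (9 * eta)."""
    return 2 * (rho_s - rho_f) * g * r ** 2 / (9 * eta)
```

    For a 1 mm steel sphere (density about 7800 kg/m³) settling in honey (viscosity about 10 Pa·s, density about 1400 kg/m³), this gives a terminal speed near 1.4 mm/s and Re on the order of 10⁻⁴, well inside the Stokes regime; the same sphere in water yields Re far above 1, which is where students see the law fail.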

  13. Distance underestimation in virtual space is sensitive to gender but not activity-passivity or mode of interaction.

    PubMed

    Foreman, Nigel; Sandamas, George; Newson, David

    2004-08-01

    Four groups of undergraduates (half of each gender) experienced a movement along a corridor containing three distinctive objects, in a virtual environment (VE) with wide-screen projection. One group simulated walking along the virtual corridor using a proprietary step-exercise device. A second group moved along the corridor in conventional flying mode, depressing a keyboard key to initiate continuous forward motion. Two further groups observed the walking and flying participants, by viewing their progress on the screen. Participants then had to walk along a real equivalent but empty corridor, and indicate the positions of the three objects. All groups underestimated distances in the real corridor, the greatest underestimates occurring for the middle distance object. Males' underestimations were significantly lower than females' at all distances. However, there was no difference between the active participants and passive observers, nor between walking and flying conditions.

  14. Dynamic Stimuli And Active Processing In Human Visual Perception

    NASA Astrophysics Data System (ADS)

    Haber, Ralph N.

    1990-03-01

    Theories of visual perception traditionally have considered a static retinal image to be the starting point for processing, and have considered processing to be both passive and a literal translation of that frozen, two-dimensional, pictorial image. This paper considers five problem areas in the analysis of human visually guided locomotion, in which the traditional approach is contrasted to newer ones that utilize dynamic definitions of stimulation and an active perceiver: (1) differentiation between object motion and self motion, and among the various kinds of self motion (e.g., eyes only, head only, whole body, and their combinations); (2) the sources and contents of visual information that guide movement; (3) the acquisition and performance of perceptual motor skills; (4) the nature of spatial representations, percepts, and the perceived layout of space; and (5) why the retinal image is a poor starting point for perceptual processing. These newer approaches argue that stimuli must be considered as dynamic: humans process the systematic changes in patterned light when objects move and when they themselves move. Furthermore, the processing of visual stimuli must be active and interactive, so that perceivers can construct panoramic and stable percepts from an interaction of stimulus information and expectancies of what is contained in the visual environment. These developments all suggest a very different approach to the computational analyses of object location and identification, and of the visual guidance of locomotion.

  15. Acoustic system for material transport

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Trinh, E. H.; Wang, T. G.; Elleman, D. D.; Jacobi, N. (Inventor)

    1983-01-01

    An object within a chamber is acoustically moved by applying wavelengths of different modes to the chamber to move the object between pressure wells formed by the modes. In one system, the object is placed in one end of the chamber while a resonant mode, applied along the length of the chamber, produces a pressure well at that location. The frequency is then switched to a second mode that produces a pressure well at the center of the chamber, to draw the object. When the object reaches the second pressure well and is still traveling towards the second end of the chamber, the acoustic frequency is again shifted to a third mode (which may equal the first mode) that has a pressure well in the second end portion of the chamber, to draw the object. A heat source may be located near the second end of the chamber to heat the sample, and after the sample is heated it can be cooled by moving it in a corresponding manner back to the first end of the chamber. The transducers for levitating and moving the object may all be located at the cool first end of the chamber.
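    The mode-switching idea rests on where each resonant mode places its pressure wells. For an idealized closed rigid chamber of length L, the n-th longitudinal mode has pressure p(x) ∝ cos(nπx/L), and small dense objects are trapped near the pressure nodes. A minimal sketch under this idealized geometry (the actual mode-to-well mapping in the patented apparatus depends on chamber geometry and end conditions):

```python
def pressure_nodes(length, n):
    """Pressure node positions of the n-th longitudinal mode of a
    closed rigid chamber: p(x) ~ cos(n*pi*x/length) has zeros at
    x = (2k + 1) * length / (2n), for k = 0 .. n-1."""
    return [(2 * k + 1) * length / (2 * n) for k in range(n)]
```

    Mode 1 places its single node at the chamber center, mode 2 at the quarter points, and higher modes push a node progressively closer to the ends, so a sequence of mode switches can hand the object from well to well along the chamber.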

  16. Use of geometric properties of landmark arrays for reorientation relative to remote cities and local objects.

    PubMed

    Mou, Weimin; Nankoo, Jean-François; Zhou, Ruojing; Spetch, Marcia L

    2014-03-01

    Five experiments investigated how human adults use landmark arrays in the immediate environment to reorient relative to the local environment and relative to remote cities. Participants learned targets' directions in the presence of a proximal array of 4 poles forming a rectangular shape and a more distal array of poles also forming a rectangular shape. Then participants were disoriented and pointed to targets in the presence of either the proximal poles or the distal poles. Participants' orientation was estimated from the mean of their pointing errors across targets. The targets could be 7 objects in the immediate local environment in which the poles were located, or 7 cities around Edmonton (Alberta, Canada), where the experiments occurred. The directions of the 7 cities could be learned from reading a map first and then pointing to the cities while the poles were presented. The directions of the 7 cities could also be learned from viewing labels of cities moving back and forth in the specific direction in the immediate local environment in which the poles were located. The shape of the distal pole array varied in salience with the number of poles on each edge of the rectangle (2 vs. 34). The results showed that participants regained their orientation relative to local objects using the distal poles with 2 poles on each edge; participants could not reorient relative to cities using the distal pole array with 2 poles on each edge, but could reorient relative to cities using the array with 34 poles on each edge. These results indicate that the use of cues in reorientation depends not only on cue salience but also on which environment people need to reorient to.

  17. Visual context modulates potentiation of grasp types during semantic object categorization.

    PubMed

    Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J

    2014-06-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

  18. Visual context modulates potentiation of grasp types during semantic object categorization

    PubMed Central

    Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.

    2013-01-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270

  19. Moving from Virtual Reality Exposure-Based Therapy to Augmented Reality Exposure-Based Therapy: A Review

    PubMed Central

    Baus, Oliver; Bouchard, Stéphane

    2014-01-01

    This paper reviews the move from virtual reality exposure-based therapy to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements and merging them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR-type applications have been around since the beginning of the twentieth century. Since then, technological developments have enabled an ever increasing level of seamless integration of virtual and physical elements into one view. Like VR, AR allows exposure to stimuli which, for various reasons, may not be suitable for real-life scenarios. As such, AR has proven itself to be a medium through which individuals suffering from specific phobia can be exposed “safely” to the object(s) of their fear, without the costs associated with programming complete VEs. Thus, ARET can offer an efficacious alternative to some less advantageous exposure-based therapies. Above and beyond presenting what has been accomplished in ARET, this paper covers some less well-known aspects of the history of AR, raises some ARET-related issues, and proposes potential avenues to be followed. These include the type of measures to be used to qualify the user’s experience in an augmented reality environment, the exclusion of certain AR-type functionalities from the definition of AR, as well as the potential use of ARET to treat non-small animal phobias, such as social phobia. PMID:24624073

  20. Moving from virtual reality exposure-based therapy to augmented reality exposure-based therapy: a review.

    PubMed

    Baus, Oliver; Bouchard, Stéphane

    2014-01-01

    This paper reviews the move from virtual reality exposure-based therapy to augmented reality exposure-based therapy (ARET). Unlike virtual reality (VR), which entails a complete virtual environment (VE), augmented reality (AR) limits itself to producing certain virtual elements and merging them into the view of the physical world. Although the general public may only have become aware of AR in the last few years, AR-type applications have been around since the beginning of the twentieth century. Since then, technological developments have enabled an ever increasing level of seamless integration of virtual and physical elements into one view. Like VR, AR allows exposure to stimuli which, for various reasons, may not be suitable for real-life scenarios. As such, AR has proven itself to be a medium through which individuals suffering from specific phobia can be exposed "safely" to the object(s) of their fear, without the costs associated with programming complete VEs. Thus, ARET can offer an efficacious alternative to some less advantageous exposure-based therapies. Above and beyond presenting what has been accomplished in ARET, this paper covers some less well-known aspects of the history of AR, raises some ARET-related issues, and proposes potential avenues to be followed. These include the type of measures to be used to qualify the user's experience in an augmented reality environment, the exclusion of certain AR-type functionalities from the definition of AR, as well as the potential use of ARET to treat non-small animal phobias, such as social phobia.

  1. Person detection and tracking with a 360° lidar system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2017-10-01

    Today it is easy to generate dense point clouds of the sensor environment using 360° LiDAR (Light Detection and Ranging) sensors, which have been available for a number of years. The interpretation of these data is much more challenging. For automated data evaluation, the detection and classification of objects is a fundamental task. Especially in urban scenarios, moving objects like persons or vehicles are of particular interest, for instance in automatic collision avoidance, on mobile sensor platforms, or in surveillance tasks. In the literature there are several approaches to automated person detection in point clouds. While most techniques show acceptable results in object detection, the computation time is often a critical issue. The runtime can be problematic, especially due to the amount of data in the panoramic 360° point clouds. On the other hand, most applications need object detection and classification in real time. The paper presents a proposal for a fast, real-time capable algorithm for person detection, classification and tracking in panoramic point clouds.
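    A common way to keep object segmentation fast on panoramic point clouds is to group points by occupied grid cells rather than running full 3-D nearest-neighbor clustering. The sketch below is a generic technique of this kind, not this paper's algorithm; the cell size and minimum cluster size are illustrative choices:

```python
import numpy as np
from collections import deque

def cluster_points(points, cell=0.5, min_pts=5):
    """Group 2-D points (e.g. a lidar scan projected to the ground
    plane) by connected occupied grid cells; returns lists of point
    indices, one list per cluster of at least min_pts points."""
    ij = np.floor(points / cell).astype(int)
    occupied = {}
    for idx, c in enumerate(map(tuple, ij)):
        occupied.setdefault(c, []).append(idx)
    clusters, seen = [], set()
    for c in occupied:
        if c in seen:
            continue
        group, queue = [], deque([c])
        seen.add(c)
        while queue:  # flood-fill over 8-connected occupied cells
            u = queue.popleft()
            group.extend(occupied[u])
            for d in ((1, 0), (-1, 0), (0, 1), (0, -1),
                      (1, 1), (1, -1), (-1, 1), (-1, -1)):
                v = (u[0] + d[0], u[1] + d[1])
                if v in occupied and v not in seen:
                    seen.add(v)
                    queue.append(v)
        if len(group) >= min_pts:
            clusters.append(sorted(group))
    return clusters
```

    Each resulting cluster can then be passed to a classifier (person vs. vehicle vs. clutter), which is where approaches like the one in this paper spend their remaining runtime budget.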

  2. Feeding State Modulates Behavioral Choice and Processing of Prey Stimuli in the Zebrafish Tectum.

    PubMed

    Filosa, Alessandro; Barker, Alison J; Dal Maschio, Marco; Baier, Herwig

    2016-05-04

    Animals use the sense of vision to scan their environment, respond to threats, and locate food sources. The neural computations underlying the selection of a particular behavior, such as escape or approach, require flexibility to balance potential costs and benefits for survival. For example, avoiding novel visual objects reduces predation risk but negatively affects foraging success. Zebrafish larvae approach small, moving objects ("prey") and avoid large, looming objects ("predators"). We found that this binary classification of objects by size is strongly influenced by feeding state. Hunger shifts behavioral decisions from avoidance to approach and recruits additional prey-responsive neurons in the tectum, the main visual processing center. Both behavior and tectal function are modulated by signals from the hypothalamic-pituitary-interrenal axis and the serotonergic system. Our study has revealed a neuroendocrine mechanism that modulates the perception of food and the willingness to take risks in foraging decisions. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Intersensory Redundancy Facilitates Learning of Arbitrary Relations between Vowel Sounds and Objects in Seven-Month-Old Infants.

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bahrick, Lorraine E.

    1998-01-01

    Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…

  4. May the Force Be with You!

    ERIC Educational Resources Information Center

    Young, Timothy; Guy, Mark

    2011-01-01

    Students have a difficult time understanding force, especially when dealing with a moving object. Many forces can be acting on an object at the same time, causing it to stay in one place or move. By directly observing these forces, students can better understand the effect these forces have on an object. With a simple, student-built device called…

  5. Curious creatures: a multi-taxa investigation of responses to novelty in a zoo environment.

    PubMed

    Hall, Belinda A; Melfi, Vicky; Burns, Alicia; McGill, David M; Doyle, Rebecca E

    2018-01-01

    The personality trait of curiosity has been shown to increase welfare in humans. If this positive welfare effect is also true for non-humans, animals with high levels of curiosity may be able to cope better with stressful situations than their conspecifics. Before discoveries can be made regarding the effect of curiosity on an animal's ability to cope in its environment, a way of measuring curiosity across species in different environments must be created to standardise testing. To determine the suitability of novel objects in testing curiosity, species from different evolutionary backgrounds with sufficient sample sizes were chosen. Barbary sheep (Ammotragus lervia) n = 12, little penguins (Eudyptula minor) n = 10, ringtail lemurs (Lemur catta) n = 8, red-tailed black cockatoos (Calyptorhynchus banksia) n = 7, Indian star tortoises (Geochelone elegans) n = 5 and red kangaroos (Macropus rufus) n = 5 were presented with a stationary object, a moving object and a mirror. Having objects with different characteristics increased the likelihood individuals would find at least one motivating. Conspecifics were all assessed simultaneously for time to first orientate towards object (s), latency to make contact (s), frequency of interactions, and total duration of interaction (s). Differences in curiosity were recorded in four of the six species; the Barbary sheep and red-tailed black cockatoos did not interact with the novel objects, suggesting either a low level of curiosity or that the objects were not motivating for these animals. Variation in curiosity was seen between and within species in terms of which objects they interacted with and how long they spent with the objects, as determined by the speed with which they interacted and the duration of interest. 
By using the measure of curiosity towards novel objects with varying characteristics across a range of zoo species, we can see evidence of evolutionary, husbandry and individual influences on their response. Further work to obtain data on multiple captive populations of a single species using a standardised method could uncover factors that nurture the development of curiosity. In doing so, it would be possible to isolate and modify sub-optimal husbandry practices to improve welfare in the zoo environment.

  6. Curious creatures: a multi-taxa investigation of responses to novelty in a zoo environment

    PubMed Central

    Hall, Belinda A.; Melfi, Vicky; Burns, Alicia; McGill, David M.; Doyle, Rebecca E.

    2018-01-01

    The personality trait of curiosity has been shown to increase welfare in humans. If this positive welfare effect is also true for non-humans, animals with high levels of curiosity may be able to cope better with stressful situations than their conspecifics. Before discoveries can be made regarding the effect of curiosity on an animal’s ability to cope in its environment, a way of measuring curiosity across species in different environments must be created to standardise testing. To determine the suitability of novel objects in testing curiosity, species from different evolutionary backgrounds with sufficient sample sizes were chosen. Barbary sheep (Ammotragus lervia) n = 12, little penguins (Eudyptula minor) n = 10, ringtail lemurs (Lemur catta) n = 8, red-tailed black cockatoos (Calyptorhynchus banksia) n = 7, Indian star tortoises (Geochelone elegans) n = 5 and red kangaroos (Macropus rufus) n = 5 were presented with a stationary object, a moving object and a mirror. Having objects with different characteristics increased the likelihood individuals would find at least one motivating. Conspecifics were all assessed simultaneously for time to first orientate towards object (s), latency to make contact (s), frequency of interactions, and total duration of interaction (s). Differences in curiosity were recorded in four of the six species; the Barbary sheep and red-tailed black cockatoos did not interact with the novel objects, suggesting either a low level of curiosity or that the objects were not motivating for these animals. Variation in curiosity was seen between and within species in terms of which objects they interacted with and how long they spent with the objects, as determined by the speed with which they interacted and the duration of interest. 
By using the measure of curiosity towards novel objects with varying characteristics across a range of zoo species, we can see evidence of evolutionary, husbandry and individual influences on their response. Further work to obtain data on multiple captive populations of a single species using a standardised method could uncover factors that nurture the development of curiosity. In doing so, it would be possible to isolate and modify sub-optimal husbandry practices to improve welfare in the zoo environment. PMID:29568703

  7. Bubbles, Bow Shocks and B Fields: The Interplay Between Neutron Stars and Their Environments

    NASA Astrophysics Data System (ADS)

    Gaensler, Bryan M.

    2006-12-01

    Young neutron stars embody Nature's extremes: they spin incredibly rapidly, move through space at enormous velocities, and are imbued with unimaginably strong magnetic fields. Since their progenitor stars do not have any of these characteristics, these properties are presumably all imparted to a neutron star during or shortly after the supernova explosion in which it is formed. This raises two fundamental questions: how do neutron stars attain these extreme parameters, and how are their vast reservoirs of energy then dissipated? I will explain how multi-wavelength observations of the environments of neutron stars not only provide vital forensic evidence on the physics of supernova core collapse, but also spectacularly reveal the winds, jets, shocks and outflows through which these remarkable objects couple to their surroundings.

  8. NASA's Hyper-X Program

    NASA Technical Reports Server (NTRS)

    Rausch, Vincent L.; McClinton, Charles R.; Sitz, Joel; Reukauf, Paul

    2000-01-01

    This paper provides an overview of the objectives and status of the Hyper-X program which is tailored to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment, the last stage preceding prototype development. The first Hyper-X research vehicle (HXRV), designated X-43, is being prepared at the Dryden Flight Research Center for flight at Mach 7 in the near future. In addition, the associated booster and vehicle-to-booster adapter are being prepared for flight and flight test preparations are well underway. Extensive risk reduction activities for the first flight and non-recurring design for the Mach 10 X-43 (3rd flight) are nearing completion. The Mach 7 flight of the X-43 will be the first flight of an airframe-integrated scramjet-powered vehicle.

  9. 3D shape measurement of moving object with FFT-based spatial matching

    NASA Astrophysics Data System (ADS)

    Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun

    2018-03-01

    This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between the projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
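The core of such an FFT-based spatial matcher can be sketched as a generic cross-correlation displacement estimator built on the convolution theorem. The function below is an illustrative sketch, not the authors' exact implementation (names and the integer-shift simplification are assumptions):

```python
import numpy as np

def estimate_shift_fft(ref, obs):
    """Estimate the integer translational shift of `obs` relative to `ref`
    via FFT-based circular cross-correlation (generic sketch)."""
    n = len(ref)
    # Convolution theorem: cross-correlation = IFFT(F(obs) * conj(F(ref)))
    corr = np.fft.ifft(np.fft.fft(obs) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(corr))
    # Fold indices past n/2 back to negative displacements
    return shift - n if shift > n // 2 else shift
```

For measurement-grade accuracy, a sub-pixel refinement step (e.g. fitting a parabola around the correlation peak) would be layered on top of this integer estimate.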

  10. Experimental evaluation of ballistic hazards in imaging diagnostic center.

    PubMed

    Karpowicz, Jolanta; Gryz, Krzysztof

    2013-04-01

    Serious hazards to human health and life, and to devices in close proximity to magnetic resonance (MRI) scanners, include being hit by ferromagnetic objects attracted by the static magnetic field (SMF) produced by the scanner magnet - the so-called ballistic hazards, classified among indirect electromagnetic hazards. International safety guidelines and technical literature specify different SMF threshold values regarding ballistic hazards - e.g. 3 mT (directive 2004/40/EC, EN 60601-2-33) and 30 mT (BMAS 2009, directive proposal 2011). The investigations presented in this article were performed in order to experimentally verify the SMF threshold for ballistic hazards near MRI scanners used in Poland. Investigations were performed with the use of a laboratory source of SMF (0-30 mT) and MRI scanners of various types. The SMF levels at which metal objects of various shapes and 0.4-500 g mass are moved by the field were investigated, as were the distances from the MRI scanners (0.2-3T) at which hazards may occur. Objects investigated under laboratory conditions were moved by SMF of 2.2-15 mT magnetic flux density when freely suspended, but by SMF of 5.6-22 mT when placed on a smooth surface. The investigated objects were moved in fields of 3.5-40 mT near MRI scanners. Distances from the scanner magnet cover where ballistic hazards might occur are: up to 0.5 m for 0.2-0.3T scanners; up to 1.3 m for 0.5T scanners; up to 2.0 m for 1.5T scanners and up to 2.5 m for 3T scanners (at the front and back of the magnet). It was shown that SMF of 3 mT magnetic flux density should be taken as the threshold for ballistic hazards. This level is compatible with the SMF limit value for occupational safety and health-protected areas/zones, where according to Polish labor law the procedures of work environment inspection and prevention measures regarding indirect electromagnetic hazards should be applied. 
The presented results do not support increasing the SMF limit for protected areas up to 30 mT.

  11. Residential mobility and the association between physical environment disadvantage and general and mental health.

    PubMed

    Tunstall, H; Pearce, J R; Shortt, N K; Mitchell, R J

    2015-12-01

    Selective migration may influence the association between physical environments and health. This analysis assessed whether residential mobility concentrates people with poor health in neighbourhoods of the UK with disadvantaged physical environments. Data were from the British Household Panel Survey. Moves were over 1 year between adjacent survey waves, pooled over 10 pairs of waves, 1996-2006. Health outcomes were self-reported poor general health and mental health problems. Neighbourhood physical environment was defined using the Multiple Environmental Deprivation Index (MEDIx) for wards. Logistic regression analysis compared risk of poor health in MEDIx categories before and after moves. Analyses were stratified by age groups 18-29, 30-44, 45-59 and 60+ years and adjusted for age, sex, marital status, household type, housing tenure, education and social class. The pooled data contained 122 570 observations. 8.5% moved between survey waves but just 3.0% changed their MEDIx category. In all age groups odds ratios for poor general and mental health were not significantly increased in the most environmentally deprived neighbourhoods following moves. Over a 1-year time period residential moves between environments with different levels of multiple physical deprivation were rare and did not significantly raise rates of poor health in the most deprived areas.

  12. Error analysis of motion correction method for laser scanning of moving objects

    NASA Astrophysics Data System (ADS)

    Goel, S.; Lohani, B.

    2014-05-01

    A limitation of conventional laser scanning methods is that the objects being scanned must be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Only limited literature is available, describing very few methods that address object motion during scanning, and all of the existing methods use their own models or sensors. Studies on error modelling or analysis of these motion correction methods are lacking in the literature. In this paper, we develop the error budget and present the analysis of one such `motion correction' method. This method assumes that position and orientation information of the moving object is available, which in general can be obtained by installing a POS system on board or by using tracking devices. It then uses this information, along with the laser scanner data, to correct the laser data, resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked at sea, and in scanning other objects such as hot air balloons or aerostats. The other methods of "motion correction" described in the literature cannot be applied to these objects, making the chosen method quite unique. This paper presents insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components, alone and in combination. The analysis can be used to gain insight into the optimal utilization of available components for achieving the best results.
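The correction step described here amounts to re-projecting each laser return with the object's pose at the instant it was acquired. A minimal sketch, assuming per-timestamp pose data (rotation R and translation T) from a POS system has already been interpolated to the laser timestamps (function and variable names are illustrative):

```python
import numpy as np

def correct_scan_point(p_scan, R_t, T_t):
    """Transform a laser return taken while the object was at pose
    (R_t, T_t) into a consistent object-fixed/world frame.
    Illustrative of the generic correction, not the paper's exact model."""
    return R_t @ p_scan + T_t
```

Repeating this per point, with the pose advanced per timestamp, yields a geometrically consistent point cloud despite object motion during the scan.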

  13. Nurses' role transition from the clinical ward environment to the critical care environment.

    PubMed

    Gohery, Patricia; Meaney, Teresa

    2013-12-01

    To explore the experiences of nurses moving from the ward environment to the critical care environment. Critical care areas are employing nurses with no critical care experience due to staff shortages. There is a paucity of literature focusing on the experiences of nurses moving from the ward environment to the critical care environment. A Heideggerian phenomenology research approach was used in this study. In-depth semi-structured interviews, supported with an interview guide, were conducted with nine critical care nurses. Data analysis was guided by Van Manen's (1990) approach to phenomenological analysis. Four main themes emerged: the highs and lows; you need support; theory-practice gap; struggling with fear. The participants felt ill-prepared and inexperienced to work within the stressful and technical environment of critical care due to insufficient education and support. The study findings indicated that a variety of feelings and emotions are experienced by ward nurses who move into the stressful and technical environment of critical care due to insufficient skills and knowledge. More education and support are required to improve this transition process.

  14. Some characteristics of optokinetic eye-movement patterns : a comparative study.

    DOT National Transportation Integrated Search

    1970-07-01

    Long-associated with transportation ('railroad nystagmus'), optokinetic (OPK) nystagmus is an eye-movement reaction which occurs when a series of moving objects crosses the visual field or when an observer moves past a series of objects. Similar cont...

  15. A-Track: A new approach for detection of moving objects in FITS images

    NASA Astrophysics Data System (ADS)

    Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.

    2016-10-01

    We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired with an SI-1100 CCD on a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
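The underlying idea — an object moving at roughly constant velocity traces a straight line through (t, x, y) across aligned frames — can be illustrated with a toy collinearity test. This is a simplification for intuition only, not the MILD algorithm itself; stationary sources (stars) are assumed to be rejected by the minimum-speed cut:

```python
import math
from itertools import combinations

def find_linear_movers(dets, tol=1.0, min_speed=0.5):
    """Group (t, x, y) detections from three distinct epochs that are
    consistent with constant-velocity motion. Toy illustration of the
    line-detection idea behind pipelines like A-Track."""
    tracks = []
    for a, b, c in combinations(sorted(dets), 3):
        (t1, x1, y1), (t2, x2, y2), (t3, x3, y3) = a, b, c
        if not t1 < t2 < t3:
            continue  # need three distinct epochs
        # Interpolate the middle epoch from the two endpoints
        f = (t2 - t1) / (t3 - t1)
        xp, yp = x1 + f * (x3 - x1), y1 + f * (y3 - y1)
        speed = math.hypot(x3 - x1, y3 - y1) / (t3 - t1)
        if abs(xp - x2) <= tol and abs(yp - y2) <= tol and speed >= min_speed:
            tracks.append((a, b, c))
    return tracks
```

A real pipeline replaces the brute-force triple loop with an efficient line-detection step and adds source extraction, frame alignment, and star rejection up front.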

  16. Effects of sport expertise on representational momentum during timing control.

    PubMed

    Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu

    2015-04-01

    Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.

  17. Static latching arrangement and method

    DOEpatents

    Morrison, Larry

    1988-01-01

    A latching assembly for use in latching a cable to and unlatching it from a given object in order to move an object from one location to another is disclosed herein. This assembly includes a weighted sphere mounted to one end of a cable so as to rotate about a specific diameter of the sphere. The assembly also includes a static latch adapted for connection with the object to be moved. This latch includes an internal latching cavity for containing the sphere in a latching condition and a series of surfaces and openings which cooperate with the sphere in order to move the sphere into and out of the latching cavity and thereby connect the cable to and disconnect it from the latch without using any moving parts on the latch itself.

  18. Spatial Updating of Environments Described in Texts

    ERIC Educational Resources Information Center

    Avraamides, Marios N.

    2003-01-01

    People update egocentric spatial relations in an effortless and on-line manner when they move in the environment, but not when they only imagine themselves moving. In contrast to previous studies, the present experiments examined egocentric updating with spatial scenes that were encoded linguistically instead of perceived directly. Experiment 1…

  19. Inattentional blindness is influenced by exposure time not motion speed.

    PubMed

    Kreitz, Carina; Furley, Philip; Memmert, Daniel

    2016-01-01

    Inattentional blindness is a striking phenomenon in which a salient object within the visual field goes unnoticed because it is unexpected, and attention is focused elsewhere. Several attributes of the unexpected object, such as size and animacy, have been shown to influence the probability of inattentional blindness. At present it is unclear whether or how the speed of a moving unexpected object influences inattentional blindness. We demonstrated that inattentional blindness rates are considerably lower if the unexpected object moves more slowly, suggesting that it is the mere exposure time of the object rather than a higher saliency potentially induced by higher speed that determines the likelihood of its detection. Alternative explanations could be ruled out: The effect is not based on a pop-out effect arising from different motion speeds in relation to the primary-task stimuli (Experiment 2), nor is it based on a higher saliency of slow-moving unexpected objects (Experiment 3).

  20. Upside-down: Perceived space affects object-based attention.

    PubMed

    Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus

    2017-07-01

    Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked.

  1. Constraints on Multiple Object Tracking in Williams Syndrome: How Atypical Development Can Inform Theories of Visual Processing

    ERIC Educational Resources Information Center

    Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara

    2016-01-01

    The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…

  2. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
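The precomputed-matrix idea can be sketched with a toy energy model: a one-bounce patch-to-patch transfer matrix is summed into a multi-bounce operator offline, so per-frame run-time work reduces to matrix-vector products. All names and the scalar energy model below are illustrative assumptions, not the thesis' formulation:

```python
import numpy as np

def precompute_transfer(F, n_bounces):
    """Offline: accumulate I + F + F^2 + ... + F^n for a one-bounce
    patch-to-patch energy transfer matrix F (toy model)."""
    T = np.eye(F.shape[0])
    acc = np.eye(F.shape[0])
    for _ in range(n_bounces):
        acc = acc @ F
        T = T + acc
    return T

def listener_energy(T, source_to_patch, patch_to_listener):
    """Run time: inject source energy onto patches, apply the
    precomputed multi-bounce operator, gather at the listener."""
    return patch_to_listener @ (T @ source_to_patch)
```

Because T is fixed for static geometry, moving sources and listeners only change the two vectors, which is what makes smooth run-time variation cheap.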

  3. Controlling the Growth of Future LEO Debris Populations with Active Debris Removal

    NASA Technical Reports Server (NTRS)

    Liou, J.-C.; Johnson, N. L.; Hill, N. M.

    2008-01-01

    Active debris removal (ADR) was suggested as a potential means to remediate the low Earth orbit (LEO) debris environment as early as the 1980s. The reasons ADR has not become practical are due to its technical difficulties and the high cost associated with the approach. However, as the LEO debris populations continue to increase, ADR may be the only option to preserve the near-Earth environment for future generations. An initial study was completed in 2007 to demonstrate that a simple ADR target selection criterion could be developed to reduce the future debris population growth. The present paper summarizes a comprehensive study based on more realistic simulation scenarios, including fragments generated from the 2007 Fengyun-1C event, mitigation measures, and other target selection options. The simulations were based on the NASA long-term orbital debris projection model, LEGEND. A scenario, where at the end of mission lifetimes, spacecraft and upper stages were moved to 25-year decay orbits, was adopted as the baseline environment for comparison. Different annual removal rates and different ADR target selection criteria were tested, and the resulting 200-year future environment projections were compared with the baseline scenario. Results of this parametric study indicate that (1) an effective removal strategy can be developed based on the mass and collision probability of each object as the selection criterion, and (2) the LEO environment can be stabilized in the next 200 years with an ADR removal rate of five objects per year.
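The selection criterion the study identifies as effective — rank objects by the product of mass and collision probability and remove the top candidates each year — is simple to express. The dictionary keys below are placeholders, not LEGEND's data model:

```python
def select_adr_targets(debris, n_per_year):
    """Return the n objects with the highest mass x collision-probability
    product, the removal criterion the study found effective (schematic)."""
    ranked = sorted(debris, key=lambda o: o["mass_kg"] * o["p_collision"],
                    reverse=True)
    return ranked[:n_per_year]
```

In the study's simulations, applying such a ranking at a rate of about five removals per year was enough to stabilize the LEO environment over 200 years.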

  4. 3D morphology reconstruction using linear array CCD binocular stereo vision imaging system

    NASA Astrophysics Data System (ADS)

    Pan, Yu; Wang, Jinjiang

    2018-01-01

    A conventional binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We present a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Building on the binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including calibration, capture, matching, and reconstruction. The system consists of two linear array cameras placed in a special arrangement and a horizontal moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and used for 3-D reconstruction. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, making this work of practical significance for measuring the 3-D morphology of moving objects.
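Whatever the camera arrangement, the depth recovered after matching reduces to the standard rectified-stereo triangulation relation z = f·B/d (focal length times baseline over disparity). A minimal sketch with illustrative parameter values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard stereo triangulation for a rectified pair:
    depth = focal length (pixels) x baseline (metres) / disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, with a 1000 px focal length and a 0.2 m baseline, a 50 px disparity corresponds to a point 4 m from the cameras; smaller disparities mean greater depth.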

  5. Static force field representation of environments based on agents' nonlinear motions

    NASA Astrophysics Data System (ADS)

    Campo, Damian; Betancourt, Alejandro; Marcenaro, Lucio; Regazzoni, Carlo

    2017-12-01

    This paper presents a methodology that aims at the incremental representation of areas inside environments in terms of attractive forces. A parametric representation of the velocity fields ruling the dynamics of moving agents is proposed, under the assumption that attractive spots in the environment are responsible for modifying the motion of agents. A switching model describes near and far velocity fields, which in turn are used to learn attractive characteristics of the environment. The effect of such areas is considered radial over the whole scene. Based on the estimation of attractive areas, a map that describes their effects in terms of localization, range of action, and intensity is derived online. Information about static attractive areas is added dynamically into a set of filters that describes possible interactions between moving agents and the environment. The proposed approach is first evaluated on synthetic data; subsequently, the method is applied to real trajectories of pedestrians moving in an indoor environment.
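The radial-attractor idea can be illustrated with a toy velocity field: each attractive spot pulls an agent toward itself with an intensity that fades to zero at the edge of its range of action. The linear falloff below is an assumption made for illustration, not the paper's learned parametric model:

```python
import math

def field_velocity(p, attractors):
    """Expected velocity at point p = (x, y) under radial attractive
    spots given as ((ax, ay), intensity, action_range). Toy model."""
    vx = vy = 0.0
    for (ax, ay), intensity, action_range in attractors:
        dx, dy = ax - p[0], ay - p[1]
        dist = math.hypot(dx, dy)
        if dist == 0.0 or dist > action_range:
            continue  # no pull at the spot itself or beyond its range
        weight = intensity * (1.0 - dist / action_range)
        vx += weight * dx / dist
        vy += weight * dy / dist
    return vx, vy
```

Fitting the locations, intensities, and ranges of such spots to observed agent velocities is, in essence, what the paper's incremental map estimation does.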

  6. Cognitive, perceptual and action-oriented representations of falling objects.

    PubMed

    Zago, Myrka; Lacquaniti, Francesco

    2005-01-01

    We interact daily with moving objects. How accurate are our predictions about objects' motions? What sources of information do we use? These questions have received wide attention from a variety of different viewpoints. On one end of the spectrum are the ecological approaches assuming that all the information about the visual environment is present in the optic array, with no need to postulate conscious or unconscious representations. On the other end of the spectrum are the constructivist approaches assuming that a more or less accurate representation of the external world is built in the brain using explicit or implicit knowledge or memory besides sensory inputs. Representations can be related to naive physics or to context cue-heuristics or to the construction of internal copies of environmental invariants. We address the issue of prediction of objects' fall at different levels. Cognitive understanding and perceptual judgment of simple Newtonian dynamics can be surprisingly inaccurate. By contrast, motor interactions with falling objects are often very accurate. We argue that the pragmatic action-oriented behaviour and the perception-oriented behaviour may use different modes of operation and different levels of representation.

  7. Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus

    NASA Astrophysics Data System (ADS)

    Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.

    2014-09-01

    There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC) that can run in real time on a typical PC. The technique is tailored for searching objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers while identifying inliers. In the pre-processing phase, candidate blobs were selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. 
Data collected with a 17 inch telescope by AFRL/RH and a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center is used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13800 km. The AFRL/RH set of data, collected in the stare mode, contained the signature of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
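The core loop — hypothesize a constant-velocity model from two detections, then count how many candidate blobs are consistent with it — is ordinary RANSAC. A minimal sketch of that idea (illustrative only, not the RANSAC-MT flight code; star rejection and blob extraction are assumed to have happened upstream):

```python
import math
import random

def ransac_linear_motion(dets, n_iter=200, tol=1.0, seed=0):
    """Find the largest set of (t, x, y) detections consistent with a
    single constant-velocity track. Generic RANSAC sketch."""
    rng = random.Random(seed)
    best = []
    for _ in range(n_iter):
        a, b = rng.sample(dets, 2)
        if a[0] == b[0]:
            continue  # two distinct epochs are needed to fit a velocity
        vx = (b[1] - a[1]) / (b[0] - a[0])
        vy = (b[2] - a[2]) / (b[0] - a[0])
        # Count detections within `tol` of the hypothesized track
        inliers = [d for d in dets
                   if math.hypot(a[1] + vx * (d[0] - a[0]) - d[1],
                                 a[2] + vy * (d[0] - a[0]) - d[2]) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Because only two samples are needed per hypothesis and each consensus check is linear in the number of candidates, this kind of loop is far cheaper than Viterbi- or Bayesian-style TBD search.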

  8. A densitometric analysis of IIaO film flown aboard the space shuttle transportation system STS #3, 7, and 8

    NASA Technical Reports Server (NTRS)

    Hammond, Ernest C., Jr.

    1989-01-01

    Since the United States of America is moving into an age of reusable space vehicles, both electronic and photographic materials will continue to be an integral part of the recording techniques available. Film as a scientifically viable recording technique in astronomy is well documented. There is a real need to expose various types of films to the Shuttle environment. Thus, the main objective was to look at the subtle densitometric changes of canisters of IIaO film that was placed aboard the Space Shuttle 3 (STS-3).

  9. Comparative study on collaborative interaction in non-immersive and immersive systems

    NASA Astrophysics Data System (ADS)

    Shahab, Qonita M.; Kwon, Yong-Moo; Ko, Heedong; Mayangsari, Maria N.; Yamasaki, Shoko; Nishino, Hiroaki

    2007-09-01

    This research studies Virtual Reality simulation for collaborative interaction, so that different people in different places can interact with one object concurrently. Our focus is the real-time handling of inputs from multiple users, where the object's behavior is determined by the combination of the multiple inputs. Issues addressed in this research are: 1) The effects of using haptics on a collaborative interaction, 2) The possibilities of collaboration between users from different environments. We conducted user tests on our system in several cases: 1) Comparison between non-haptics and haptics collaborative interaction over LAN, 2) Comparison between non-haptics and haptics collaborative interaction over Internet, and 3) Analysis of collaborative interaction between non-immersive and immersive display environments. The case studies are the interaction of users in two cases: collaborative authoring of a 3D model by two users, and collaborative haptic interaction by multiple users. In Virtual Dollhouse, users can observe physics laws while constructing a dollhouse using existing building blocks, under gravity effects. In Virtual Stretcher, multiple users can collaborate on moving a stretcher together while feeling each other's haptic motions.

  10. Motion video analysis using planar parallax

    NASA Astrophysics Data System (ADS)

    Sawhney, Harpreet S.

    1994-04-01

    Motion and structure analysis in video sequences can lead to efficient descriptions of objects and their motions. Interesting events in videos can be detected using such an analysis--for instance independent object motion when the camera itself is moving, figure-ground segregation based on the saliency of a structure compared to its surroundings. In this paper we present a method for 3D motion and structure analysis that uses a planar surface in the environment as a reference coordinate system to describe a video sequence. The motion in the video sequence is described as the motion of the reference plane, and the parallax motion of all the non-planar components of the scene. It is shown how this method simplifies the otherwise hard general 3D motion analysis problem. In addition, a natural coordinate system in the environment is used to describe the scene which can simplify motion based segmentation. This work is a part of an ongoing effort in our group towards video annotation and analysis for indexing and retrieval. Results from a demonstration system being developed are presented.
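The plane-plus-parallax decomposition described here can be sketched directly: warp each image point by the homography induced by the reference plane; points on the plane map (approximately) to their observed positions in the next frame, while off-plane points retain a residual parallax vector. The homography in the test below is a made-up example:

```python
import numpy as np

def residual_parallax(H, x, x_next):
    """Residual motion of image point x after cancelling the motion of
    the reference plane via its homography H. A near-zero residual means
    the point lies on (or near) the reference plane."""
    hx = H @ np.array([x[0], x[1], 1.0])
    hx = hx[:2] / hx[2]  # back to inhomogeneous coordinates
    return np.array(x_next) - hx
```

Thresholding the residual magnitude is one simple route to the figure-ground segregation mentioned above: plane-consistent background drops out, and independently moving or off-plane structure remains.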

  11. Multiphysics Object Oriented Simulation Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Multiphysics Object Oriented Simulation Environment (MOOSE) software library developed at Idaho National Laboratory is a tool. MOOSE, like other tools, doesn't actually complete a task. Instead, MOOSE seeks to reduce the effort required to create engineering simulation applications. MOOSE itself is a software library: a blank canvas upon which you write equations and then MOOSE can help you solve them. MOOSE is comparable to a spreadsheet application. A spreadsheet, by itself, doesn't do anything. Only once equations are entered into it will a spreadsheet application compute anything. Such is the same for MOOSE. An engineer or scientist can utilize the equation solvers within MOOSE to solve equations related to their area of study. For instance, a geomechanical scientist can input equations related to water flow in underground reservoirs and MOOSE can solve those equations to give the scientist an idea of how water could move over time. An engineer might input equations related to the forces in steel beams in order to understand the load bearing capacity of a bridge. Because MOOSE is a blank canvas it can be useful in many scientific and engineering pursuits.

  12. A binary motor imagery tasks based brain-computer interface for two-dimensional movement control

    NASA Astrophysics Data System (ADS)

    Xia, Bin; Cao, Lei; Maysam, Oladazimi; Li, Jie; Xie, Hong; Su, Caixia; Birbaumer, Niels

    2017-12-01

    Objective. Two-dimensional movement control is a popular issue in brain-computer interface (BCI) research and has many applications in the real world. In this paper, we introduce a combined control strategy to a binary class-based BCI system that allows the user to move a cursor in a two-dimensional (2D) plane. Users focus on a single moving vector to control 2D movement instead of controlling vertical and horizontal movement separately. Approach. Five participants took part in a fixed-target experiment and a random-target experiment to verify the effectiveness of the combined control strategy under fixed and random routine conditions. Both experiments were performed in a virtual 2D environment, and visual feedback was provided on the screen. Main results. The five participants achieved average hit rates of 98.9% and 99.4% for the fixed-target experiment and the random-target experiment, respectively. Significance. The results demonstrate that participants could move the cursor in the 2D plane effectively. The proposed control strategy is based only on a basic binary motor imagery BCI, which enables more people to use it in real-life applications.
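The abstract does not spell out how two imagery classes drive a single moving vector. One plausible scheme (purely illustrative, not the authors' exact mapping) lets one class rotate the movement direction one way and the other class rotate it the other way, while the cursor always advances along the current direction:

```python
import math

def update_cursor(pos, angle, mi_class, turn_rate=0.1, speed=1.0):
    """One control step of a hypothetical binary-MI 2D scheme: class 1
    rotates the movement vector counterclockwise, class 0 clockwise; the
    cursor advances along the current direction every step."""
    angle += turn_rate if mi_class == 1 else -turn_rate
    pos = (pos[0] + speed * math.cos(angle),
           pos[1] + speed * math.sin(angle))
    return pos, angle
```

With only two decodable mental states, any point in the plane remains reachable because direction and forward motion are coupled in a single continuously steered vector.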

  13. Conceptual design of a Moving Belt Radiator (MBR) shuttle-attached experiment

    NASA Technical Reports Server (NTRS)

    Aguilar, Jerry L.

    1990-01-01

    The conceptual design of a shuttle-attached Moving Belt Radiator (MBR) experiment is presented. The MBR is an advanced radiator concept in which a rotating belt is used to radiate thermal energy to space. The experiment is developed with the primary focus being the verification of the dynamic characteristics of a rotating belt, with a secondary objective of proving the thermal and sealing aspects in a reduced-gravity, vacuum environment. The mechanical design, selection of the belt material and working fluid, a preliminary test plan, and program plan are presented. The strategy used for selecting the basic sizes and materials of the components is discussed. Shuttle and crew member requirements are presented with some options for increasing or decreasing the demands on the STS. An STS carrier and the criteria used in the selection process are presented. The proposed carrier for the Moving Belt Radiator experiment is the Hitchhiker-M. Safety issues are also listed with possible results. This experiment is designed so that a belt can be deployed, run at steady state conditions, run with dynamic perturbations imposed, verify the operation of the interface heat exchanger and seals, and finally be retracted into a stowed position for transport back to Earth.

  14. Optimizing a neural network for detection of moving vehicles in video

    NASA Astrophysics Data System (ADS)

    Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri

    2017-10-01

    In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
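The paper pairs a feedforward detector with an LSTM for multi-frame analysis. As a minimal, hand-rolled stand-in for that recurrent aggregation (an assumption for illustration, not the paper's network), exponential smoothing of per-frame detection scores already shows why temporal evidence stabilizes detections of moving vehicles:

```python
def smooth_scores(scores, alpha=0.3):
    """Exponentially smooth per-frame detection scores: a simple stand-in
    for the recurrent (LSTM) temporal aggregation used in the paper."""
    out, s = [], scores[0]
    for x in scores:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def detections(scores, thresh=0.5):
    """Threshold scores into per-frame detect/no-detect decisions."""
    return [s >= thresh for s in scores]
```

On a flickery score sequence, the smoothed decisions change state far less often than the raw per-frame decisions, which is the practical benefit of multi-frame analysis for surveillance video.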

  15. Facilitating peer based learning through summative assessment - An adaptation of the Objective Structured Clinical Assessment tool for the blended learning environment.

    PubMed

    Wikander, Lolita; Bouchoucha, Stéphane L

    2018-01-01

    Adapting a course from face-to-face to blended delivery necessitates that assessments are modified accordingly. In Australia the Objective Structured Clinical Assessment tool, as a derivative of the Objective Structured Clinical Examination, has been used in the face-to-face delivery mode as a formative or summative assessment tool in medicine and nursing since 1990. The Objective Structured Clinical Assessment has been used at Charles Darwin University to assess nursing students' simulated clinical skills prior to the commencement of their clinical placements since 2008. Although the majority of the course is delivered online, students attend a one-week intensive clinical simulation block yearly, prior to attending clinical placements. Initially, the Objective Structured Clinical Assessment was introduced as a lecturer-assessed summative assessment; over time it was adapted to better suit the blended learning environment. The modification of the tool from an academically assessed to a peer-assessed assessment tool was based on the empirical literature, student feedback and a cross-sectional, qualitative study exploring academics' perceptions of the Objective Structured Clinical Assessment (Bouchoucha et al., 2013a, b). This paper presents an overview of the process leading to the successful adaptation of the Objective Structured Clinical Assessment to suit the requirements of a preregistration nursing course delivered through blended learning. This is significant as many universities are moving their curriculum to fully online or blended delivery, yet little attention has been paid to adapting the assessment of simulated clinical skills. The aim is to identify the benefits and drawbacks of using the peer-assessed Objective Structured Clinical Assessment and share recommendations for successful implementation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
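The abstract's numbers can be sanity-checked with simple arithmetic. A 750 Hz resonant mirror has a ~1.33 ms period, so a 0.33 ms exposure spans about a quarter cycle. Assuming a sinusoidal deflection (an assumption; the paper's controller details are not in the abstract), the sketch below computes how much the mirror's angular velocity varies over an exposure window centred on a zero crossing, which bounds how uniformly a purely sinusoidal scan can track a constant-velocity object:

```python
import math

def velocity_ripple(freq_hz, exposure_s):
    """For a sinusoidal mirror deflection theta(t) = A*sin(2*pi*f*t), the
    angular velocity is A*2*pi*f*cos(2*pi*f*t). For an exposure window
    centred on the zero crossing (t = 0), return the ratio of the velocity
    at the window edge to the peak velocity at the centre."""
    half = exposure_s / 2.0
    phase = 2.0 * math.pi * freq_hz * half
    return math.cos(phase)
```

With the abstract's values (750 Hz, 0.33 ms) the edge velocity is roughly 71% of the peak, which illustrates why the frame and shutter timings must be synchronized tightly to the mirror's vibration.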

  17. Multifocal planes head-mounted displays.

    PubMed

    Rolland, J P; Krueger, M W; Goon, A

    2000-07-01

    Stereoscopic head-mounted displays (HMD's) provide an effective capability to create dynamic virtual environments. For a user of such environments, virtual objects would be displayed ideally at the appropriate distances, and natural concordant accommodation and convergence would be provided. Under such image display conditions, the user perceives these objects as if they were objects in a real environment. Current HMD technology requires convergent eye movements. However, it is currently limited by fixed visual accommodation, which is inconsistent with real-world vision. A prototype multiplanar volumetric projection display based on a stack of laminated planes was built for medical visualization as discussed in a paper presented at a 1999 Advanced Research Projects Agency workshop (Sullivan, Advanced Research Projects Agency, Arlington, Va., 1999). We show how such technology can be engineered to create a set of virtual planes appropriately configured in visual space to suppress conflicts of convergence and accommodation in HMD's. Although some scanning mechanism could be employed to create a set of desirable planes from a two-dimensional conventional display, multiplanar technology accomplishes such function with no moving parts. Based on optical principles and human vision, we present a comprehensive investigation of the engineering specification of multiplanar technology for integration in HMD's. Using selected human visual acuity and stereoacuity criteria, we show that the display requires at most 27 equally spaced planes, which is within the capability of current research and development display devices, located within a maximal 26-mm-wide stack. We further show that the necessary in-plane resolution is of the order of 5 microm.
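Working in diopters makes the plane-count argument a one-line calculation: equally spaced planes in dioptric depth, separated by no more than the eye's tolerated dioptric step, cover the range from the nearest virtual distance to infinity. The values below (near plane at 0.5 m, a step of about 0.077 D) are illustrative assumptions chosen to reproduce the quoted count of 27, not the paper's actual derivation from acuity and stereoacuity criteria:

```python
import math

def plane_count(near_m, far_m, step_diopters):
    """Number of display planes, equally spaced in diopters, needed to cover
    depths from near_m to far_m when adjacent planes may differ by at most
    step_diopters. far_m=None means optical infinity (0 D)."""
    near_d = 1.0 / near_m
    far_d = 0.0 if far_m is None else 1.0 / far_m
    # subtract a small epsilon so float round-off cannot inflate the count
    return math.ceil((near_d - far_d) / step_diopters - 1e-9) + 1
```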

  18. Effects of Head Up Display Symbology Lag on Recovery from Inadvertent Instrument Meteorological Conditions: Performance Costs

    DTIC Science & Technology

    2006-06-01

    decrements in system fidelity or mismatches in the constructed environment when it does not closely approximate the real environment (Hix and Gabbard, 2002) ... using moving-aircraft and moving-horizon attitude indicators. Washington, DC: FAA, Technical Report FAA-AM-73-9. Hix, D., and Gabbard, J.L. 2002

  19. Degree of Hybridity: Peer Review in the Blended Composition Classroom

    ERIC Educational Resources Information Center

    Middlebrook, Rebecca Helminen

    2013-01-01

    As the move to increase availability of composition courses in the online environment continues, it is important to understand the ways in which composition instructors take on the challenges associated with moving their teaching online and how they modify, or re-mediate, their pedagogy for this new teaching and learning environment. By…

  20. Moving an In-Class Module Online: A Case Study for Chemistry

    ERIC Educational Resources Information Center

    Seery, Michael K.

    2012-01-01

    This article summarises the author's experiences in running a module "Computers for Chemistry" entirely online for the past four years. The module, previously taught in a face-to-face environment, was reconfigured for teaching in an online environment. The rationale for moving online along with the design, implementation and evaluation of the…

  1. Lateralized Effects of Categorical and Coordinate Spatial Processing of Component Parts on the Recognition of 3D Non-Nameable Objects

    ERIC Educational Resources Information Center

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-01-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…

  2. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.

    PubMed

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-02-03

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.

  3. Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung

    2018-01-01

    A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver’s point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods. PMID:29401681

  4. Virtual Labs and Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boehler, Ted

    2006-12-01

    Virtual Labs and Virtual Worlds Coastline Community College has under development several virtual lab simulations and activities that range from biology, to language labs, to virtual discussion environments. Imagine a virtual world that students enter online, by logging onto their computer from home or anywhere they have web access. Upon entering this world they select a personalized identity represented by a digitized character (avatar) that can freely move about, interact with the environment, and communicate with other characters. In these virtual worlds, buildings, gathering places, conference rooms, labs, science rooms, and a variety of other “real world” elements are evident. When characters move about and encounter other people (players) they may freely communicate. They can examine things, manipulate objects, read signs, watch video clips, hear sounds, and jump to other locations. Goals of critical thinking, social interaction, peer collaboration, group support, and enhanced learning can be achieved in surprising new ways with this innovative approach to peer-to-peer communication in a virtual discussion world. In this presentation, short demos will be given of several online learning environments including a virtual biology lab, a marine science module, a Spanish lab, and a virtual discussion world. Coastline College has been a leader in the development of distance learning and media-based education for nearly 30 years and currently offers courses through PDA, Internet, DVD, CD-ROM, TV, and Videoconferencing technologies. Its distance learning program serves over 20,000 students every year. Sponsor: Jerry Meisner

  5. Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR

    NASA Astrophysics Data System (ADS)

    Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.

    2016-05-01

    The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
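The timing analysis in such a model reduces to summing segment lengths along each multi-bounce ray path and dividing by the speed of light. The three-bounce coordinates below are hypothetical, used only to show the scale: 0.299792458 m/ns is numerically 0.299792458 mm/ps, so one picosecond of SPAD timing resolution corresponds to roughly 0.3 mm of path length.

```python
import math

C = 0.299792458  # speed of light in metres per nanosecond

def path_time_ns(points):
    """Arrival time, in nanoseconds, for light travelling the polyline
    through `points` (a list of (x, y, z) tuples), e.g. a multi-bounce
    path reconstructed from a non-sequential ray trace."""
    total = 0.0
    for a, b in zip(points, points[1:]):
        total += math.dist(a, b)  # straight-line segment length in metres
    return total / C
```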

  6. Interaction of compass sensing and object-motion detection in the locust central complex.

    PubMed

    Bockhorst, Tobias; Homberg, Uwe

    2017-07-01

    Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal's steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers. Copyright © 2017 the American Physiological Society.

  7. Apparent motion perception in lower limb amputees with phantom sensations: "obstacle shunning" and "obstacle tolerance".

    PubMed

    Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J

    2018-03-21

    Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOAs (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained by the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.

    PubMed

    van Buren, Benjamin; Gao, Tao; Scholl, Brian J

    2017-10-01

    One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question (for the first time, to our knowledge) in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.
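The signal detection analysis mentioned above reduces to the standard sensitivity index d' = z(hit rate) - z(false-alarm rate), computable with the Python standard library (the rate values in the usage note are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, eps=1e-4):
    """Sensitivity index d' from hit and false-alarm rates. Rates are
    clipped away from 0 and 1 so the inverse-normal transform stays finite."""
    clip = lambda p: min(max(p, eps), 1 - eps)
    z = NormalDist().inv_cdf
    return z(clip(hit_rate)) - z(clip(fa_rate))
```

For example, a hit rate of 0.69 against a false-alarm rate of 0.31 gives d' of roughly 1.0, while equal hit and false-alarm rates give d' = 0 (no discrimination), which is the pattern one would expect on the impaired Connected trials.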

  9. Moving object detection in top-view aerial videos improved by image stacking

    NASA Astrophysics Data System (ADS)

    Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen

    2017-08-01

    Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data are coming from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
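The core of image stacking can be sketched in a few lines: align the frames and average, which suppresses zero-mean noise by roughly the square root of the stack depth. In the paper the alignment comes from image registration applied to the moving objects; here, as a simplifying assumption, integer shifts are given directly.

```python
import numpy as np

def stack_frames(frames, shifts):
    """Align each frame by its known integer (dy, dx) shift and average the
    stack. In practice the shifts would come from image registration; here
    they are supplied for illustration."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```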

  10. Implementation of the SMART MOVE intervention in primary care: a qualitative study using normalisation process theory.

    PubMed

    Glynn, Liam G; Glynn, Fergus; Casey, Monica; Wilkinson, Louise Gaffney; Hayes, Patrick S; Heaney, David; Murphy, Andrew W M

    2018-05-02

    Problematic translational gaps continue to exist between demonstrating the positive impact of healthcare interventions in research settings and their implementation into routine daily practice. The aim of this qualitative evaluation of the SMART MOVE trial was to conduct a theoretically informed analysis, using normalisation process theory, of the potential barriers and levers to the implementation of an mHealth intervention to promote physical activity in primary care. The study took place in the West of Ireland with recruitment in the community from the Clare Primary Care Network. SMART MOVE trial participants and the staff from four primary care centres were invited to take part and all agreed to do so. A qualitative methodology with a combination of focus groups (general practitioners, practice nurses and non-clinical staff from four separate primary care centres, n = 14) and individual semi-structured interviews (intervention and control SMART MOVE trial participants, n = 4) with purposeful sampling utilising the principles of Framework Analysis was employed. The Normalisation Process Theory was used to develop the topic guide for the interviews and also informed the data analysis process. Four themes emerged from the analysis: personal and professional exercise strategies; roles and responsibilities to support active engagement; utilisation challenges; and evaluation, adoption and adherence. It was evident that introducing a new healthcare intervention demands a comprehensive evaluation of the intervention itself and also the environment in which it is to operate. Despite certain obstacles, the opportunity exists for the successful implementation of a novel healthcare intervention that addresses a hitherto unresolved healthcare need, provided that the intervention has strong usability attributes for both disseminators and target users and coheres strongly with the core objectives and culture of the health care environment in which it is to operate. We carried out a theoretical analysis of stakeholder-informed barriers and levers to the implementation of a novel exercise promotion tool in the Irish primary care setting. We believe that this process amplifies the implementation potential of such an intervention in primary care. The SMART MOVE trial is registered at Current Controlled Trials (ISRCTN99944116; Date of registration: 1st August 2012).

  11. Camouflage, detection and identification of moving targets

    PubMed Central

    Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.

    2013-01-01

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439

  12. Camouflage, detection and identification of moving targets.

    PubMed

    Hall, Joanna R; Cuthill, Innes C; Baddeley, Roland; Shohet, Adam J; Scott-Samuel, Nicholas E

    2013-05-07

    Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation (detection, identification and capture) in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely 'break' camouflage.

  13. Linear encoding device

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1993-01-01

    A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed, in which a light source is mounted on the moving object and a position sensitive detector, such as an array photodetector, is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector, is used for reading the position of the spot on the array photodetector. A microprocessor and memory are connected to the analog-to-digital converter to hold and manipulate the data it provides on the position of the spot and to compute the linear displacement of the moving object from those data.
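    The displacement computation this record describes can be sketched as an intensity-weighted centroid of the array photodetector readings; the pixel pitch and function names below are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def spot_centroid(intensities, pixel_pitch_mm=0.05):
    """Estimate the light-spot position on the array photodetector as the
    intensity-weighted centroid of the pixel readings (returned in mm)."""
    idx = np.arange(len(intensities))
    centroid_px = np.sum(idx * intensities) / np.sum(intensities)
    return centroid_px * pixel_pitch_mm

def displacement(before, after, pixel_pitch_mm=0.05):
    """Linear displacement of the moving object between two array readouts."""
    return spot_centroid(after, pixel_pitch_mm) - spot_centroid(before, pixel_pitch_mm)
```

    For a symmetric spot, the centroid recovers the spot centre, so a shift of 20 pixels at the assumed 0.05 mm pitch corresponds to a displacement of 1.0 mm.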

  14. Image analysis of multiple moving wood pieces in real time

    NASA Astrophysics Data System (ADS)

    Wang, Weixing

    2006-02-01

    This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for auto-detection of wood piece materials on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace their contours in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly-developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.

  15. The Postural Responses to a Moving Environment of Adults Who Are Blind

    ERIC Educational Resources Information Center

    Stoffregen, Thomas A.; Ito, Kiyohide; Hove, Philip; Yank, Jane Redfield; Bardy, Benoit G.

    2010-01-01

    Adults who are blind stood in a room that could be moved around them. A sound source moved with the room, simulating the acoustic consequences of body sway. Body sway was greater when the room moved than when it was stationary, suggesting that sound may have been used to control stance. (Contains 1 figure.)

  16. Engineering and Technology Challenges for Active Debris Removal

    NASA Technical Reports Server (NTRS)

    Liou, Jer-Chyi

    2011-01-01

    After more than fifty years of space activities, the near-Earth environment is polluted with man-made orbital debris. The collision between Cosmos 2251 and the operational Iridium 33 in 2009 signaled a potential collision cascade effect, also known as the "Kessler Syndrome", in the environment. Various modelling studies have suggested that the commonly-adopted mitigation measures will not be sufficient to stabilize the future debris population. Active debris removal must be considered to remediate the environment. This paper summarizes the key issues associated with debris removal and describes the technology and engineering challenges to move forward. Fifty-four years after the launch of Sputnik 1, satellites have become an integral part of human society. Unfortunately, the ongoing space activities have left behind an undesirable byproduct: orbital debris. This environmental problem is threatening current and future space activities. On average, two Shuttle window panels are replaced after every mission due to damage by micrometeoroid or orbital debris impacts. More than 100 collision avoidance maneuvers were conducted by satellite operators in 2010 to reduce the impact risks of their satellites with respect to objects in the U.S. Space Surveillance Network (SSN) catalog. Of the four known accidental collisions between objects in the SSN catalog, the last, the collision between Cosmos 2251 and the operational Iridium 33 in 2009, was the most significant. It was the first ever accidental catastrophic destruction of an operational satellite by another satellite. It also signaled the potential collision cascade effect in the environment, commonly known as the "Kessler Syndrome," predicted by Kessler and Cour-Palais in 1978 [1]. Figure 1 shows the historical increase of objects in the SSN catalog. The majority of the catalog objects are 10 cm and larger. As of April 2011, the total number of objects tracked by the SSN sensors was more than 22,000. 
However, approximately 6000 of them had yet to be fully processed and entered into the catalog. This population had been dominated by fragmentation debris throughout history. Before the anti-satellite test (ASAT) conducted by China in 2007, the fragmentation debris were almost all explosion fragments. After the ASAT test and the collision between Iridium 33 and Cosmos 2251, the ratio of collision fragments to explosion fragments was about one-to-one. It is expected that accidental collision fragments will further dominate the environment in the future.

  17. Hybrid Reality Lab Capabilities - Video 2

    NASA Technical Reports Server (NTRS)

    Delgado, Francisco J.; Noyes, Matthew

    2016-01-01

    Our Hybrid Reality and Advanced Operations Lab is developing incredibly realistic and immersive systems that could be used to provide training, support engineering analysis, and augment data collection for various human performance metrics at NASA. To get a better understanding of what Hybrid Reality is, let's go through the two most commonly known types of immersive realities: Virtual Reality and Augmented Reality. Virtual Reality creates immersive scenes that are completely made up of digital information. This technology has been used to train astronauts at NASA, during teleoperation of remote assets (arms, rovers, robots, etc.) and in other activities. One challenge with Virtual Reality is that if you are using it for real-time applications (like landing an airplane), the information used to create the virtual scenes can be old (i.e. visualized long after physical objects have moved in the scene) and not accurate enough to land the airplane safely. This is where Augmented Reality comes in. Augmented Reality takes real-time environment information (from a camera or a see-through window) and places digitally created information into the scene so that it matches the video/glass information. Augmented Reality enhances real environment information collected with a live sensor or viewport (e.g. camera, window, etc.) with the information-rich visualization provided by Virtual Reality. Hybrid Reality takes Augmented Reality even further, by creating a higher level of immersion where interactivity can take place. Hybrid Reality takes Virtual Reality objects and a trackable, physical representation of those objects, places them in the same coordinate system, and allows people to interact with both objects' representations (virtual and physical) simultaneously. After a short period of adjustment, the individuals begin to interact with all the objects in the scene as if they were real-life objects. 
The ability to physically touch and interact with digitally created objects that have the same shape, size, and location as their physical counterparts in the virtual reality environment can be a game changer when it comes to training, planning, engineering analysis, science, entertainment, etc. Our project is developing such capabilities for various types of environments. The video accompanying this abstract is a representation of an ISS Hybrid Reality experience. In the video you can see various Hybrid Reality elements that provide immersion beyond standard Virtual Reality or Augmented Reality.

  18. Program For Generating Interactive Displays

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    Sun/Unix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. Plus viewed as productivity tool for application developers and application end users, who benefit from resultant consistent and well-designed user interface sheltering them from intricacies of computer. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC and PS/2 compute

  19. Another Program For Generating Interactive Graphics

    NASA Technical Reports Server (NTRS)

    Costenbader, Jay; Moleski, Walt; Szczur, Martha; Howell, David; Engelberg, Norm; Li, Tin P.; Misra, Dharitri; Miller, Philip; Neve, Leif; Wolf, Karl

    1991-01-01

    VAX/Ultrix version of Transportable Applications Environment Plus (TAE+) computer program provides integrated, portable software environment for developing and running interactive window, text, and graphical-object-based application software systems. Enables programmer or nonprogrammer to construct easily custom software interface between user and application program and to move resulting interface program and its application program to different computers. When used throughout company for wide range of applications, makes both application program and computer seem transparent, with noticeable improvements in learning curve. Available in form suitable for following six different groups of computers: DEC VAX station and other VMS VAX computers, Macintosh II computers running AUX, Apollo Domain Series 3000, DEC VAX and reduced-instruction-set-computer workstations running Ultrix, Sun 3- and 4-series workstations running Sun OS and IBM RT/PC's and PS/2 computers running AIX, and HP 9000 S

  20. Hyper-X: Flight Validation of Hypersonic Airbreathing Technology

    NASA Technical Reports Server (NTRS)

    Rausch, Vincent L.; McClinton, Charles R.; Crawford, J. Larry

    1997-01-01

    This paper provides an overview of NASA's focused hypersonic technology program, i.e. the Hyper-X program. This program is designed to move hypersonic, air breathing vehicle technology from the laboratory environment to the flight environment, the last stage preceding prototype development. This paper presents some history leading to the flight test program, research objectives, approach, schedule and status. Substantial experimental data base and concept validation have been completed. The program is concentrating on Mach 7 vehicle development, verification and validation in preparation for wind tunnel testing in 1998 and flight testing in 1999. It is also concentrating on finalization of the Mach 5 and 10 vehicle designs. Detailed evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a data base for validation of design methods once flight test data are available.

  1. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion

    PubMed Central

    Calabro, Finnegan J.; Vaina, Lucia Maria

    2016-01-01

    Background Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). Material/Methods 16 right-handed healthy observers (ages 18–28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. Results Statistical analyses of performance on the test-experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. Conclusions These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion. PMID:27231114

  2. Scale Changes Provide an Alternative Cue For the Discrimination of Heading, But Not Object Motion.

    PubMed

    Calabro, Finnegan J; Vaina, Lucia Maria

    2016-05-27

    BACKGROUND Understanding the dynamics of our surrounding environments is a task usually attributed to the detection of motion based on changes in luminance across space. Yet a number of other cues, both dynamic and static, have been shown to provide useful information about how we are moving and how objects around us move. One such cue, based on changes in spatial frequency, or scale, over time has been shown to be useful in conveying motion in depth even in the absence of a coherent, motion-defined flow field (optic flow). MATERIAL AND METHODS 16 right-handed healthy observers (ages 18-28) participated in the behavioral experiments described in this study. Using analytical behavioral methods we investigate the functional specificity of this cue by measuring the ability of observers to perform tasks of heading (direction of self-motion) and 3D trajectory discrimination on the basis of scale changes and optic flow. RESULTS Statistical analyses of performance on the test-experiments in comparison to the control experiments suggest that while scale changes may be involved in the detection of heading, they are not correctly integrated with translational motion and, thus, do not provide a correct discrimination of 3D object trajectories. CONCLUSIONS These results have important implications for the type of visually guided navigation that can be done by an observer blind to optic flow. Scale change is an important alternative cue for self-motion.

  3. Common world model for unmanned systems: Phase 2

    NASA Astrophysics Data System (ADS)

    Dean, Robert M. S.; Oh, Jean; Vinokurov, Jerry

    2014-06-01

    The Robotics Collaborative Technology Alliance (RCTA) seeks to provide adaptive robot capabilities which move beyond traditional metric algorithms to include cognitive capabilities. Key to this effort is the Common World Model, which moves beyond the state-of-the-art by representing the world using semantic and symbolic as well as metric information. It joins these layers of information to define objects in the world. These objects may be reasoned upon jointly using traditional geometric, symbolic cognitive algorithms and new computational nodes formed by the combination of these disciplines to address Symbol Grounding and Uncertainty. The Common World Model must understand how these objects relate to each other. It includes the concept of Self-Information about the robot. By encoding current capability, component status, task execution state, and their histories we track information which enables the robot to reason and adapt its performance using Meta-Cognition and Machine Learning principles. The world model also includes models of how entities in the environment behave which enable prediction of future world states. To manage complexity, we have adopted a phased implementation approach. Phase 1, published in these proceedings in 2013 [1], presented the approach for linking metric with symbolic information and interfaces for traditional planners and cognitive reasoning. Here we discuss the design of "Phase 2" of this world model, which extends the Phase 1 design API, data structures, and reviews the use of the Common World Model as part of a semantic navigation use case.

  4. Try This: Moving Toys

    ERIC Educational Resources Information Center

    Preston, Christine

    2018-01-01

    If you think physics is only for older children, think again. Much of the playtime of young children is filled with exploring--and wondering about and informally investigating--the way objects, especially toys, move. How forces affect objects, including: change in position, motion, and shape are fundamental to the big ideas in physics. This…

  5. Let It Roll

    ERIC Educational Resources Information Center

    Trundle, Kathy Cabe; Smith, Mandy McCormick

    2011-01-01

    Some of children's earliest explorations focus on movement of their own bodies. Quickly, children learn to further explore movement by using objects like a ball or car. They recognize that a ball moves differently than a pushed block. As they grow, children enjoy their experiences with motion and movement, including making objects move, changing…

  6. An elementary research on wireless transmission of holographic 3D moving pictures

    NASA Astrophysics Data System (ADS)

    Takano, Kunihiko; Sato, Koki; Endo, Takaya; Asano, Hiroaki; Fukuzawa, Atsuo; Asai, Kikuo

    2009-05-01

    In this paper, a process for transmitting a sequence of holograms describing 3D moving objects over a wireless communication network is presented. The sequence of holograms is transformed into bit-stream data and then transmitted over wireless LAN and Bluetooth. It is shown that, by applying this technique, holographic data of a 3D moving object can be transmitted in high quality, and a relatively good reconstruction of the holographic images is achieved.

  7. Perceived shifts of flashed stimuli by visible and invisible object motion.

    PubMed

    Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke

    2003-01-01

    Perceived positions of flashed stimuli can be altered by motion signals in the visual field (position capture; Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.

  8. Coupling of Head and Body Movement with Motion of the Audible Environment

    ERIC Educational Resources Information Center

    Stoffregen, Thomas A.; Villard, Sebastien; Kim, ChungGon; Ito, Kiyohide; Bardy, Benoit G.

    2009-01-01

    The authors asked whether standing posture could be controlled relative to audible oscillation of the environment. Blindfolded sighted adults were exposed to acoustic flow in a moving room, and were asked to move so as to maintain a constant distance between their head and the room. Acoustic flow had direct (source) and indirect (reflected)…

  9. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  10. Exploiting Satellite Focal Plane Geometry for Automatic Extraction of Traffic Flow from Single Optical Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Krauß, T.

    2014-11-01

    The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of some millimetres between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite imagery of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped to different positions in different spectral bands, the change of spectral properties also has to be taken into account. In the case where the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach of weighted integration to obtain largely identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on these methods, images from different sensors are processed and the results are assessed for detection quality (how many moving objects are detected and how many are missed) and for accuracy (how accurate the derived speed and size of the objects are). Finally the results are discussed and an outlook for possible improvements towards operational processing is presented.
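    The core detection step this record outlines can be sketched as differencing two co-registered, temporally offset band images and applying a robust threshold; this is a minimal illustration under assumed inputs, not the authors' processing chain.

```python
import numpy as np

def detect_moving_pixels(band_a, band_b, k=3.0):
    """Flag pixels that changed between two co-registered bands acquired a
    fraction of a second apart. Stationary scene content largely cancels in
    the difference image; a robust (MAD-based) threshold flags movers."""
    diff = band_a.astype(float) - band_b.astype(float)
    med = np.median(diff)
    sigma = 1.4826 * np.median(np.abs(diff - med))  # robust noise estimate
    return np.abs(diff - med) > k * sigma
```

    In practice the bands would first need radiometric matching (the weighted integration mentioned above) so that only genuinely moving objects survive the difference.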

  11. Feasibility of a Sensory-Adapted Dental Environment for Children With Autism

    PubMed Central

    Stein Duker, Leah I.; Williams, Marian E.; Lane, Christianne Joy; Dawson, Michael E.; Borreson, Ann E.; Polido, José C.

    2015-01-01

    OBJECTIVE. To provide an example of an occupational therapy feasibility study and evaluate the implementation of a randomized controlled pilot and feasibility trial examining the impact of a sensory-adapted dental environment (SADE) to enhance oral care for children with autism spectrum disorder (ASD). METHOD. Twenty-two children with ASD and 22 typically developing children, ages 6–12 yr, attended a dental clinic in an urban hospital. Participants completed two dental cleanings, 3–4 mo apart, one in a regular environment and one in a SADE. Feasibility outcome measures were recruitment, retention, accrual, dropout, and protocol adherence. Intervention outcome measures were physiological stress, behavioral distress, pain, and cost. RESULTS. We successfully recruited and retained participants. Parents expressed satisfaction with research study participation. Dentists stated that the intervention could be incorporated in normal practice. Intervention outcome measures favored the SADE condition. CONCLUSION. Preliminary positive benefit of SADE in children with ASD warrants moving forward with a large-scale clinical trial. PMID:25871593

  12. Using the Global Environment Facility for developing Integrated Conservation and Development (ICAD) models -- Papua New Guinea's Biodiversity Conservation Management Programme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kula, G.; Jefferies, B.

    1995-03-01

    The unprecedented level of support that has been pledged to strengthen Government of Papua New Guinea (GoPNG) biodiversity conservation initiatives has re-identified an important fact: technical and infrastructure support must be complemented by programs that provide realistic opportunities for developing national capacity. Indications are that the next five years will present a range of challenging opportunities for the department to move from the intensive period of planning, which has been the focus of attention during the first phase of the National Forestry and Conservation Action Programme (NFCAP), into a sustained period of policy and project application. This paper examines processes under which strengthening programs contribute to national development objectives and complement accomplishment of the Department of Environment and Conservation Strategic Plan. An overview of the Global Environment Facility-Integrated Conservation and Development (ICAD) Project and the coordination efforts being made for biodiversity conservation projects in Papua New Guinea are addressed.

  13. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have yielded higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection performance from a field trial conducted in August 2015.
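    To put the quoted panoramic figures in perspective, the raw pixel throughput a real-time detector must sustain can be computed directly; the 3 bytes per pixel below is an assumed RGB bit depth, not a figure from the paper.

```python
def raw_throughput_bytes(width, height, fps, bytes_per_pixel=3):
    """Uncompressed video data rate a real-time pipeline must sustain."""
    return width * height * fps * bytes_per_pixel

# 9000 x 2400 pixels at 30 fps, assuming 3 bytes/pixel (RGB):
rate = raw_throughput_bytes(9000, 2400, 30)
print(rate / 1e9)  # roughly 1.94 GB/s of raw pixel data
```

    At nearly 2 GB/s of raw data, per-pixel analysis between consecutive frames is only feasible with careful system-level design, which motivates the hardware focus of the paper.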

  14. Wind tunnel tests of the dynamic characteristics of the fluidic rudder

    NASA Technical Reports Server (NTRS)

    Belsterling, C. A.

    1976-01-01

    This report covers the fourth phase of a continuing program to develop the means to stabilize and control aircraft without moving parts or a separate source of power. Previous phases have demonstrated the feasibility of (1) generating adequate control forces on a standard airfoil, (2) controlling those forces with a fluidic amplifier and (3) cascading non-vented fluidic amplifiers operating on ram air supply pressure. The foremost objectives of the fourth phase, covered under Part I of this report, were to demonstrate a complete force-control system in a wind tunnel environment and to measure its static and dynamic control characteristics. Secondary objectives, covered under Part II, were to evaluate alternate configurations for lift control. The results demonstrate an overall response time of 150 msec, confirming this technology as a viable means for implementing low-cost, reliable flight control systems.

  15. Alternatives to the Moving Average

    Treesearch

    Paul C. van Deusen

    2001-01-01

    There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
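    A minimal sketch of the default estimator described above, assuming an equally weighted mean of the five most recent annual panel values (the equal weighting is an assumption for illustration):

```python
def moving_average(annual_values, window=5):
    """Equally weighted mean of the most recent `window` annual estimates."""
    if len(annual_values) < window:
        raise ValueError("need at least `window` annual values")
    return sum(annual_values[-window:]) / window
```

    Each new annual panel shifts the window forward by one year, so the estimate lags a trend but smooths year-to-year sampling noise, which is the trade-off the record discusses.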

  16. A design of optical measurement laboratory for space-based illumination condition emulation

    NASA Astrophysics Data System (ADS)

    Xu, Rong; Zhao, Fei; Yang, Xin

    2015-10-01

    Space Objects Identification (SOI) and related technology have aroused wide attention from spacefaring nations due to the increasingly severe space environment. Multiple ground-based assets have been employed to acquire statistical survey data, detect faint debris, and acquire photometric and spectroscopic data. Great efforts have been made to characterize different space objects using the statistical data acquired by telescopes. Furthermore, detailed laboratory data are needed to optimize the characterization of orbital debris and satellites via material composition and potential rotation axes, which calls for a high-precision and flexible optical measurement system. A typical method of taking optical measurements of a space object (or model) is to move the light source and sensors through every possible orientation around it while keeping the target still. However, moving equipment to accurate orientations in the air is difficult, especially for large precision instruments sensitive to vibrations. Here, a rotation structure of "3+1" axes, with a three-axis turntable manipulating the attitude of the target and the sensor revolving around a single axis, is utilized to emulate every possible illumination condition in space, which also avoids the inconvenience of moving large apparatus. Firstly, the source-target-sensor orientation of a real satellite was analyzed, with vectors and coordinate systems built to illustrate their spatial relationship. By bending the Reference Coordinate Frame to the Phase Angle plane, the sensor need only revolve around a single axis, while the other three degrees of freedom (DOF) are associated with the Euler angles of the satellite. Then, according to practical engineering requirements, an integrated rotation system with a four-axis structure is brought forward. Schematic diagrams of the three-axis turntable and other equipment give an overview of the future laboratory layout. 
Finally, proposals on evironment arrangements, light source precautions and sensor selections are provided. Comparing to current methods, this design shows better effects on device simplication, automatic control and high-precision measurement.

  17. Optimal Path Determination for Flying Vehicle to Search an Object

    NASA Astrophysics Data System (ADS)

    Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.

    2018-01-01

    In this paper, a method to determine the optimal path for a flying vehicle searching for an object is proposed. The background of the paper is the control of an air vehicle searching for an object. Optimal path determination is one of the most popular problems in optimization. The paper describes a control-design model for a flying vehicle searching for an object, focusing on the optimal search path. An optimal control model is used to steer the flying vehicle so that it moves along an optimal path; if the vehicle moves along an optimal path, the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design; here it is chosen so that the air vehicle reaches the object as quickly as possible. The reference axes of the flying vehicle use the N-E-D (North-East-Down) coordinate system. The results of this paper are theorems, proved analytically, stating that the cost functional makes the control optimal and makes the vehicle move along an optimal path. The paper also shows that the cost functional used is convex; convexity of the cost functional guarantees the existence of an optimal control. Some simulations are presented to show an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
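    The minimum-time structure described above can be illustrated with a toy case: for a vehicle with bounded speed and cost functional J = ∫ dt, Pontryagin's Minimum Principle selects maximum speed along a straight line to a stationary target. A minimal sketch (my own illustration, not the paper's N-E-D vehicle model):

```python
import math

def time_optimal_path(start, target, v_max):
    """Minimum-time path for a vehicle with bounded speed (toy case).

    With cost functional J = integral 1 dt and |v| <= v_max, Pontryagin's
    Minimum Principle selects the maximum speed and a constant heading,
    i.e. a straight line to the target.  Returns (heading_rad, t_min).
    """
    dx, dy = target[0] - start[0], target[1] - start[1]
    return math.atan2(dy, dx), math.hypot(dx, dy) / v_max

heading, t_min = time_optimal_path((0.0, 0.0), (3.0, 4.0), v_max=5.0)
# the 3-4-5 triangle gives a 5 m straight-line path, so t_min = 1 s at 5 m/s
```

The actual paper works with a richer dynamic model; this sketch only shows why convexity of the cost makes the existence of the minimizer unproblematic in the simplest setting.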

  18. Exploring Dance Movement Data Using Sequence Alignment Methods

    PubMed Central

    Chavoshi, Seyed Hossein; De Baets, Bernard; Neutens, Tijs; De Tré, Guy; Van de Weghe, Nico

    2015-01-01

    Despite the abundance of research on knowledge discovery from moving object databases, only a limited number of studies have examined the interaction between moving point objects in space over time. This paper describes a novel approach for measuring similarity in the interaction between moving objects. The proposed approach consists of three steps. First, we transform movement data into sequences of successive qualitative relations based on the Qualitative Trajectory Calculus (QTC). Second, sequence alignment methods are applied to measure the similarity between movement sequences. Finally, movement sequences are grouped based on similarity by means of an agglomerative hierarchical clustering method. The applicability of this approach is tested using movement data from samba and tango dancers. PMID:26181435
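    The three-step pipeline above (QTC encoding, sequence alignment, clustering) can be sketched in miniature. Here the QTC coding is reduced to a single distance-change symbol per time step and the alignment to a plain Levenshtein edit distance; this is a deliberate simplification for illustration, not the authors' full calculus:

```python
def qtc_symbol(d_prev, d_cur, eps=1e-9):
    # one symbol per step: approaching '-', receding '+', stable '0'
    if d_cur < d_prev - eps:
        return '-'
    if d_cur > d_prev + eps:
        return '+'
    return '0'

def encode(distances):
    """Turn a series of inter-dancer distances into a relation sequence."""
    return ''.join(qtc_symbol(a, b) for a, b in zip(distances, distances[1:]))

def levenshtein(s, t):
    # classic dynamic-programming edit distance between two sequences
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (a != b)))
        prev = cur
    return prev[-1]

def similarity(s, t):
    """Alignment-based similarity in [0, 1] between two relation sequences."""
    if not s and not t:
        return 1.0
    return 1.0 - levenshtein(s, t) / max(len(s), len(t))

pair_a = encode([3.0, 2.5, 2.0, 2.0, 2.6])   # approach, approach, hold, retreat
pair_b = encode([4.0, 3.1, 2.4, 2.4, 2.4])   # approach, approach, hold, hold
```

A similarity matrix built from `similarity` over all pairs of movement sequences is what an agglomerative clustering step would then consume.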

  19. UAS stealth: target pursuit at constant distance using a bio-inspired motion camouflage guidance law.

    PubMed

    Strydom, Reuben; Srinivasan, Mandyam V

    2017-09-21

    The aim of this study is to derive a guidance law by which an unmanned aerial system (UAS) can pursue a moving target at a constant distance while concealing its own motion. We derive a closed-form solution for the trajectory of the UAS by imposing two key constraints: (1) the shadower moves in such a way as to be perceived as a stationary object by the shadowee, and (2) the distance between the shadower and shadowee is kept constant. Additionally, the theory presented in this paper considers constraints on the maximum achievable speed and acceleration of the shadower. Our theory is tested through MATLAB simulations, which validate the camouflage strategy for both 2D and 3D conditions. Furthermore, experiments using a realistic vision-based implementation are conducted in a virtual environment, where the results demonstrate that even with noisy state information it is possible to remain well camouflaged using the constant-distance motion camouflage technique.
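    The first constraint (appearing stationary to the shadowee) pins the pursuer to the line joining the shadowee and a fixed focal point; the second fixes its distance along that line. A geometric sketch under those two constraints only (illustrative; the paper's closed-form solution additionally handles the speed and acceleration limits):

```python
import math

def camouflage_position(target, focal, d):
    """Place the shadower on the line joining the shadowee (target) to a
    fixed focal point, at distance d from the shadowee, so the shadower
    stays in the same visual direction as the stationary focal point."""
    vx, vy = focal[0] - target[0], focal[1] - target[1]
    norm = math.hypot(vx, vy)
    return (target[0] + d * vx / norm, target[1] + d * vy / norm)

# as the shadowee moves, the shadower tracks it at constant range 2.0
FOCAL = (10.0, 0.0)
path = [(0.0, 0.0), (0.5, 0.4), (1.0, 0.9)]
pursuit = [camouflage_position(t, FOCAL, 2.0) for t in path]
```

Each pursuit point is exactly 2.0 units from the corresponding target point, and always lies in the direction of `FOCAL` as seen from the target.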

  20. Evidence against a speed limit in multiple-object tracking.

    PubMed

    Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T

    2008-08-01

    Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.

  1. Individual differences in the perception of biological motion and fragmented figures are not correlated

    PubMed Central

    Jung, Eunice L.; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J.; Blake, Randolph

    2013-01-01

    We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question. PMID:24198799

  2. Individual differences in the perception of biological motion and fragmented figures are not correlated.

    PubMed

    Jung, Eunice L; Zadbood, Asieh; Lee, Sang-Hun; Tomarken, Andrew J; Blake, Randolph

    2013-01-01

    We live in a cluttered, dynamic visual environment that poses a challenge for the visual system: for objects, including those that move about, to be perceived, information specifying those objects must be integrated over space and over time. Does a single, omnibus mechanism perform this grouping operation, or does grouping depend on separate processes specialized for different feature aspects of the object? To address this question, we tested a large group of healthy young adults on their abilities to perceive static fragmented figures embedded in noise and to perceive dynamic point-light biological motion figures embedded in dynamic noise. There were indeed substantial individual differences in performance on both tasks, but none of the statistical tests we applied to this data set uncovered a significant correlation between those performance measures. These results suggest that the two tasks, despite their superficial similarity, require different segmentation and grouping processes that are largely unrelated to one another. Whether those processes are embodied in distinct neural mechanisms remains an open question.

  3. Monitoring Aircraft Motion at Airports by LIDAR

    NASA Astrophysics Data System (ADS)

    Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.

    2016-06-01

    Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data about the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data rich in both geometry and time, from which the aircraft body can be extracted; motion parameters can then be estimated from consecutive point clouds. Acquiring accurate aircraft trajectory data is essential to improving aviation safety at airports. This paper reports on the initial experience obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
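    As a toy version of the motion-estimation step, the velocity of an extracted aircraft body can be approximated from the displacement of the segmented points' centroid between consecutive scans. This is a deliberately simplified sketch; a real pipeline must first segment the aircraft from ground and clutter returns:

```python
def centroid(points):
    n = len(points)
    return tuple(sum(axis) / n for axis in zip(*points))

def estimate_velocity(cloud_t0, cloud_t1, dt):
    """Velocity of a segmented body from two consecutive scans, taken as
    the displacement of the point-cloud centroid over the scan interval."""
    c0, c1 = centroid(cloud_t0), centroid(cloud_t1)
    return tuple((b - a) / dt for a, b in zip(c0, c1))

scan_a = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]   # aircraft points, scan k
scan_b = [(5.0, 0.0), (7.0, 0.0), (6.0, 3.0)]   # same points, scan k+1
v = estimate_velocity(scan_a, scan_b, dt=0.5)   # 5 m moved in 0.5 s
```

A centroid estimator is only unbiased if both scans sample the body similarly; sequential-scanning effects on a fast mover would need explicit modeling.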

  4. Simultaneous acquisition of trajectory and fluorescence lifetime of moving single particles

    NASA Astrophysics Data System (ADS)

    Wu, Qianqian; Qi, Jing; Lin, Danying; Yan, Wei; Hu, Rui; Peng, Xiao; Qu, Junle

    2017-02-01

    Fluorescence lifetime imaging (FLIM) has been a powerful tool in the life sciences because it can reveal the interactions between an excited fluorescent molecule and its environment. Combining it with two-photon excitation (TPE) and time-correlated single photon counting (TCSPC) provides optical sectioning, high time resolution and high detection efficiency. In previous work, we introduced a two-dimensional acousto-optic deflector (AOD) into TCSPC-based FLIM to achieve fast and flexible FLIM. In this work, we combined the AOD-FLIM system with a single particle tracking (SPT) setup and algorithm to develop an SPT-FLIM system. Using the system, we acquired the trajectory and fluorescence lifetime of a moving particle simultaneously and reconstructed a lifetime-marked pseudocolored trajectory, which may reflect the dynamic interaction between the moving particle and its local environment along its motion trail. The results indicate the potential of the technique for studying the interaction between specific moving biological macromolecules and the ambient micro-environment in live cells.

  5. The Straight-Down Belief.

    ERIC Educational Resources Information Center

    McCloskey, Michael; And Others

    Through everyday experience people acquire knowledge about how moving objects behave. For example, if a rock is thrown up into the air, it will fall back to earth. Research has shown that people's ideas about why moving objects behave as they do are often quite inconsistent with the principles of classical mechanics. In fact, many people hold a…

  6. Origins of Newton's First Law

    ERIC Educational Resources Information Center

    Hecht, Eugene

    2015-01-01

    Anyone who has taught introductory physics should know that roughly a third of the students initially believe that any object at rest will remain at rest, whereas any moving body not propelled by applied forces will promptly come to rest. Likewise, about half of those uninitiated students believe that any object moving at a constant speed must be…

  7. Infants' use of category knowledge and object attributes when segregating objects at 8.5 months of age.

    PubMed

    Needham, Amy; Cantlon, Jessica F; Ormsbee Holley, Susan M

    2006-12-01

    The current research investigates infants' perception of a novel object from a category that is familiar to young infants: key rings. We ask whether experiences obtained outside the lab would allow young infants to parse the visible portions of a partly occluded key ring display into one single unit, presumably as a result of having categorized it as a key ring. This categorization was marked by infants' perception of the keys and ring as a single unit that should move together, despite their attribute differences. We showed infants a novel key ring display in which the keys and ring moved together as one rigid unit (Move-together event) or the ring moved but the keys remained stationary throughout the event (Move-apart event). Our results showed that 8.5-month-old infants perceived the keys and ring as connected despite their attribute differences, and that their perception of object unity was eliminated as the distinctive attributes of the key ring were removed. When all of the distinctive attributes of the key ring were removed, the 8.5-month-old infants perceived the display as two separate units, which is how younger infants (7-month-olds) perceived the key ring display with all its distinctive attributes unaltered. These results suggest that on the basis of extensive experience with an object category, infants come to identify novel members of that category and expect them to possess the attributes typical of that category.

  8. Velocity measurement by vibro-acoustic Doppler.

    PubMed

    Nabavizadeh, Alireza; Urban, Matthew W; Kinnick, Randall R; Fatemi, Mostafa

    2012-04-01

    We describe the theoretical principles of a new Doppler method, which uses the acoustic response of a moving object to a highly localized dynamic radiation force of the ultrasound field to calculate the velocity of the moving object from the Doppler frequency shift. This method, named vibro-acoustic Doppler (VAD), employs two ultrasound beams separated by a slight frequency difference, Δf, transmitting in an X-focal configuration. Both ultrasound beams experience a frequency shift because of the moving object, and their interaction at the joint focal zone produces an acoustic frequency shift around the low-frequency (Δf) acoustic emission signal. The acoustic emission field resulting from the vibration of the moving object is detected and used to calculate its velocity. We report the formula that describes the relation between the Doppler frequency shift of the emitted acoustic field and the velocity of the moving object. To verify the theory, we used a string phantom. We also tested our method by measuring fluid velocity in a tube. The results show that the error calculated for both string and fluid velocities is less than 9.1%. Our theory shows that, in the worst case, the error is 0.54% for a 25° angle variation with the VAD method, compared with an error of -82.6% for a 25° angle variation with a conventional continuous-wave Doppler method. An advantage of this method is that, unlike conventional Doppler, it is not sensitive to the angle between the ultrasound beams and the direction of motion.
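    The angle sensitivity quoted for the conventional baseline comes from the standard continuous-wave Doppler relation v = f_d·c / (2·f0·cos θ), in which an error in the assumed beam-to-flow angle θ propagates directly into the velocity estimate. A sketch of that baseline relation (the VAD formula itself is given in the paper; the parameter values here are illustrative, not the authors' experimental settings):

```python
import math

C_TISSUE = 1540.0  # nominal speed of sound in soft tissue, m/s

def cw_doppler_velocity(f_shift, f0, theta_deg, c=C_TISSUE):
    """Conventional CW Doppler estimate: v = f_d * c / (2 * f0 * cos(theta)).
    An error in the assumed beam-to-motion angle theta propagates directly
    into the velocity estimate -- the sensitivity VAD is designed to avoid."""
    return f_shift * c / (2.0 * f0 * math.cos(math.radians(theta_deg)))

# round trip at f0 = 3 MHz: a 0.2 m/s scatterer insonified at 0 degrees
f0, v_true = 3.0e6, 0.2
f_d = 2.0 * f0 * v_true / C_TISSUE          # forward Doppler relation
v_est = cw_doppler_velocity(f_d, f0, 0.0)   # recovers 0.2 m/s
```

Re-running the inversion with a wrong angle (e.g. assuming 25° when the true angle is 0°) shows how the cos θ term inflates the estimate, which is the effect the paper quantifies.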

  9. Vigilance on the move: video game-based measurement of sustained attention.

    PubMed

    Szalma, J L; Schmidt, T N; Teo, G W L; Hancock, P A

    2014-01-01

    Vigilance represents the capacity to sustain attention to any environmental source of information over prolonged periods on watch. Most stimuli used in vigilance research over the previous six decades have been relatively simple and often purport to represent important aspects of detection and discrimination tasks in real-world settings. Such displays are most frequently composed of single stimulus presentations in discrete trials against a uniform, often uncluttered background. The present experiment establishes a dynamic, first-person perspective vigilance task in motion using a video-game environment. 'Vigilance on the move' is thus a new paradigm for the evaluation of sustained attention in operational environments in which individuals move as they monitor their environment. We conclude that the stress of vigilance extends to the new paradigm, but whether the performance decrement emerges depends upon specific task parameters. The development of the task, the issues to be resolved and the pattern of performance, perceived workload and stress associated with performing such dynamic vigilance are reported.

  10. Detection and tracking of human targets in indoor and urban environments using through-the-wall radar sensors

    NASA Astrophysics Data System (ADS)

    Radzicki, Vincent R.; Boutte, David; Taylor, Paul; Lee, Hua

    2017-05-01

    Radar-based detection of human targets behind walls or in dense urban environments is an important technical challenge with many practical applications in security, defense, and disaster recovery. Radar reflections from a human can be orders of magnitude weaker than those from objects encountered in urban settings such as walls, cars, or possibly rubble after a disaster. Furthermore, these objects can act as secondary reflectors and produce multipath returns from a person. To mitigate these issues, processing of radar return data needs to be optimized for recognizing human motion features such as walking, running, or breathing. This paper presents a theoretical analysis of the modulation effects human motion has on the radar waveform and of how high levels of multipath can distort these motion effects. From this analysis, an algorithm is designed and optimized for tracking human motion in heavily cluttered environments. The tracking results are used as the fundamental detection/classification tool to discriminate human targets from others by identifying human motion traits such as predictable walking patterns and periodicity in breathing rates. The theoretical formulations are tested against simulation and measured data collected using a low-power, portable see-through-the-wall radar system that could be practically deployed in real-world scenarios. Lastly, the performance of the algorithm is evaluated in a series of experiments in which both a single person and multiple people move in an indoor, cluttered environment.
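    One of the motion traits mentioned, periodicity in breathing, can be pulled out of the slow-time radar return with a spectral peak search. A bare-bones sketch using a direct DFT on a synthetic 0.3 Hz "breathing" signal (illustrative only; a fielded system would operate on the demodulated phase of the range bin containing the target):

```python
import math

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the strongest non-DC DFT bin of a
    slow-time signal -- here standing in for the breathing periodicity."""
    n = len(signal)
    best_k, best_mag = 1, -1.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 10.0  # slow-time sampling rate, Hz
chest = [math.sin(2 * math.pi * 0.3 * i / fs) for i in range(100)]
breaths_hz = dominant_frequency(chest, fs)   # 0.3 Hz, i.e. 18 breaths/min
```

Real returns would add clutter and harmonics from limb motion, which is why the paper's algorithm combines this kind of periodicity cue with tracking rather than relying on a single spectral peak.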

  11. 3D Graphics Through the Internet: A "Shoot-Out"

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    3D graphics through the Internet needs to move beyond the current lowest common denominator of pre-computed movies, which consume bandwidth and are non-interactive. Panelists will demonstrate and compare 3D graphical tools for accessing, analyzing, and collaborating on information through the Internet and World-wide web. The "shoot-out" will illustrate which tools are likely to be the best for the various types of information, including dynamic scientific data, 3-D objects, and virtual environments. The goal of the panel is to encourage more effective use of the Internet by encouraging suppliers and users of information to adopt the next generation of graphical tools.

  12. The emotional effects of violations of causality, or How to make a square amusing

    PubMed Central

    Bressanelli, Daniela; Parovel, Giulia

    2012-01-01

    In Michotte's launching paradigm a square moves up to and makes contact with another square, which then moves off more slowly. In the triggering effect, the second square moves much faster than the first, eliciting an amusing impression. We generated 13 experimental displays in which there was always incongruity between cause and effect. We hypothesized that the comic impression would be stronger when objects are perceived as living agents and weaker when objects are perceived as mechanically non-animated. General findings support our hypothesis. PMID:23145274

  13. VizieR Online Data Catalog: Catalog of Suspected Nearby Young Stars (Riedel+, 2017)

    NASA Astrophysics Data System (ADS)

    Riedel, A. R.; Blunt, S. C.; Lambrides, E. L.; Rice, E. L.; Cruz, K. L.; Faherty, J. K.

    2018-04-01

    LocAting Constituent mEmbers In Nearby Groups (LACEwING) is a frequentist observation space kinematic moving group identification code. Using the spatial and kinematic information available about a target object (α, δ, Dist, μα, μδ, and γ), it determines the probability that the object is a member of each of the known nearby young moving groups (NYMGs). As with other moving group identification codes, LACEwING is capable of estimating memberships for stars with incomplete kinematic and spatial information. (2 data files).
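    A stripped-down version of the kinematic part of such a membership test compares a star's UVW space velocity with a group's mean velocity and dispersion via a chi-square-like distance. This is my own simplification, not LACEwING's calibrated frequentist procedure, and the β Pictoris numbers below are approximate literature values used only as an example:

```python
def kinematic_score(uvw, group_mean, group_sigma):
    """Chi-square-like distance of a star's UVW space velocity (km/s) from
    a moving group's mean velocity, scaled by the group's dispersion.
    Smaller scores indicate more compatible kinematics."""
    return sum(((x - m) / s) ** 2
               for x, m, s in zip(uvw, group_mean, group_sigma))

# approximate literature values for the beta Pictoris moving group (km/s)
BPMG_MEAN = (-10.9, -16.0, -9.0)
BPMG_SIGMA = (1.5, 1.5, 1.5)

member_like = kinematic_score((-11.0, -15.8, -9.2), BPMG_MEAN, BPMG_SIGMA)
field_like = kinematic_score((0.0, 0.0, 0.0), BPMG_MEAN, BPMG_SIGMA)
```

LACEwING additionally folds in spatial position and handles stars with incomplete kinematics, which a plain velocity distance cannot do.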

  14. An Automatic Technique for Finding Faint Moving Objects in Wide Field CCD Images

    NASA Astrophysics Data System (ADS)

    Hainaut, O. R.; Meech, K. J.

    1996-09-01

    The traditional method used to find moving objects in astronomical images is to blink pairs or series of frames after registering them to align the background objects. While this technique is extremely efficient in terms of the low signal-to-noise ratio that human sight can detect, it proved to be extremely time-, brain- and eyesight-consuming. The wide-field images provided by the large CCD mosaic recently built at IfA cover a field of view of 20 to 30' over 8192 x 8192 pixels. Blinking such images is an enormous task, comparable to that of blinking large photographic plates. However, as the data are available digitally (each image occupying 260 Mb of disk space), we are developing a set of computer codes to perform the moving-object identification in sets of frames. This poster will describe the techniques we use in order to reach a detection efficiency as good as that of a human blinker; the main steps are to find all the objects in each frame (for which we rely on SExtractor; Bertin & Arnouts 1996, A&AS 117, 393), then to identify all the background objects, and finally to search the non-background objects for sources moving in a coherent fashion. We will also describe the results of this method applied to actual data from the 8k CCD mosaic. {This work is being supported, in part, by NSF grant AST 92-21318.}
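    The described pipeline (per-frame source extraction, background rejection, then a search for coherently moving sources) can be caricatured for three equally spaced exposures: background objects match across frames, and a real mover's three positions must be collinear and evenly spaced. A minimal sketch, with SExtractor's detection step replaced by ready-made source lists:

```python
def matched(p, catalog, tol=1.5):
    # a source "matches" a catalog if some entry lies within tol pixels
    return any(abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol
               for q in catalog)

def moving_candidates(f1, f2, f3, tol=1.5):
    """Sources unmatched across three equally spaced frames whose positions
    are collinear and evenly spaced, i.e. consistent with uniform motion."""
    s1 = [p for p in f1 if not matched(p, f2) and not matched(p, f3)]
    s2 = [p for p in f2 if not matched(p, f1) and not matched(p, f3)]
    s3 = [p for p in f3 if not matched(p, f1) and not matched(p, f2)]
    hits = []
    for a in s1:
        for b in s2:
            for c in s3:
                # for uniform motion the middle position bisects a--c
                if (abs(2 * b[0] - a[0] - c[0]) <= tol
                        and abs(2 * b[1] - a[1] - c[1]) <= tol):
                    hits.append((a, b, c))
    return hits

background = [(50.0, 50.0), (80.0, 20.0)]               # stars and galaxies
frames = [background + [m] for m in
          [(10.0, 10.0), (12.0, 11.0), (14.0, 12.0)]]   # a slow mover
hits = moving_candidates(*frames)
```

Real detection must also cope with registration errors, variable seeing and chance alignments of noise detections, which is where most of the tuning effort goes.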

  15. Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking

    ERIC Educational Resources Information Center

    Oksama, Lauri; Hyona, Jukka

    2008-01-01

    Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…

  16. Acoustical-Levitation Chamber for Metallurgy

    NASA Technical Reports Server (NTRS)

    Barmatz, M. B.; Trinh, E.; Wang, T. G.; Elleman, D. D.; Jacobi, N.

    1983-01-01

    Sample moved to different positions for heating and quenching. Acoustical levitation chamber selectively excited in fundamental and second-harmonic longitudinal modes to hold sample at one of three stable positions: A, B, or C. Levitated object quickly moved from one of these positions to another by changing modes. Object rapidly quenched at A or C after heating in furnace region at B.

  17. Another Way of Tracking Moving Objects Using Short Video Clips

    ERIC Educational Resources Information Center

    Vera, Francisco; Romanque, Cristian

    2009-01-01

    Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…

  18. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, such as those found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The data are acquired using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration based on a disparity space parameterisation and the single-cluster PHD filter.

  19. Occupational injuries and sick leaves in household moving works.

    PubMed

    Hwan Park, Myoung; Jeong, Byung Yong

    2017-09-01

    This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving processes in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type of accident and agency of accidents. The most critical form in the level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked as a 'High' level of risk management were in the forms of slips, being struck by objects and musculoskeletal disorders in the 'manual materials handling' process. Also, falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck' and driving accidents in the process of 'transport' were ranked 'High'. The findings of this study can be used to develop more effective accident prevention policy reflecting different circumstances and conditions to reduce occupational accidents in household moving works.

  20. Measuring attention using flash-lag effect.

    PubMed

    Shioiri, Satoshi; Yamamoto, Ken; Oshida, Hiroki; Matsubara, Kazuya; Yaguchi, Hirohisa

    2010-08-13

    We investigated the effect of attention on the flash-lag effect (FLE) in order to determine whether the FLE can be used to estimate the effect of visual attention. The FLE is the phenomenon whereby a flash aligned with a moving object is perceived to lag behind the moving object, and several studies have shown that attention reduces its magnitude. We measured the FLE as a function of the number or speed of moving objects. The results showed that the effect of cueing, which we attributed to the effect of attention, on the FLE increased monotonically with the number or speed of the objects. This suggests that the amount of attention deployed can be estimated by measuring the FLE, on the assumption that more attention is required to track a larger number of objects or faster objects. On the basis of this presumption, we attempted to measure the spatial spread of visual attention by FLE measurements. The estimated spatial spreads were similar to those obtained by other experimental methods.

  1. Heterodyne laser Doppler distance sensor with phase coding measuring stationary as well as laterally and axially moving objects

    NASA Astrophysics Data System (ADS)

    Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.

    2010-02-01

    Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors are well suited to this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase-coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine the out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Thanks to the heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution, since its position uncertainty is in principle independent of the lateral object velocity, in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS, enabling precise and dynamic position and shape measurements even of fast-moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.

  2. Electromagnetic Environment Due To A Pulsed Moving Conductor

    DTIC Science & Technology

    1999-06-01

    Kohlberg, Ira (Kohlberg Associates, Inc., 11308 South Shore Road, Reston, VA 20190). [Only cover-sheet fragments of this report are indexed; the recoverable closing text reads: "... in this analysis but can readily be computed using the techniques developed in this study." References: I. Kohlberg, A. Zielinski, and C. Le.]

  3. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  4. Virtual reality: towards a novel treatment environment for ankylosing spondylitis.

    PubMed

    Li, Shijuan; Kay, Stephen; Hardicker, Nicholas R

    2007-01-01

    The objective of this paper is to outline the project that eventually seeks to visualize clinical knowledge found within the record; the immediate task being to create a model that can be deployed for therapeutic purposes. How therapies for a certain type of chronically ill patient can benefit from Virtual Reality (VR) tools is investigated. Ankylosing Spondylitis (AS) is selected as a test condition. VR is expected to provide a novel treatment environment for AS sufferers, in which they can relax, manage their pain and take part in the routine exercise more effectively and efficiently by using the VR tools. An integral part of this model's construction will be to elicit evaluative detail from the literature and the patients' perspective. The purpose is to understand the inevitable challenges facing this proposed intervention if the design prototype is to successfully move from the research domain and become an integral part of established therapeutic practice.

  5. 2016 CSSE L3 Milestone: Deliver In Situ to XTD End Users

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M.; Nouanesengsy, Boonthanome; Fasel, Patricia Kroll

    This report summarizes the activities in FY16 toward satisfying the CSSE 2016 L3 milestone to deliver in situ to XTD end users of EAP codes. The milestone was accomplished, with ongoing work to ensure the capability is maintained and developed. Two XTD end users used the in situ capability in Rage. A production ParaView capability was created in the HPC and desktop environments. Two new capabilities were added to ParaView in support of an EAP in situ workflow. We also worked with various support groups at the lab to deploy a production ParaView in the LANL environment for both desktop and HPC systems. In addition, for this milestone, we moved two VTK-based filters from research objects into the production ParaView code to support a variety of standard visualization pipelines for our EAP codes.

  6. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  7. Ubiquitous computing in the military environment

    NASA Astrophysics Data System (ADS)

    Scholtz, Jean

    2001-08-01

    Increasingly, people work and live on the move. To support this mobile lifestyle, especially as our work becomes more intensely information-based, companies are producing various portable and embedded information devices. The late Mark Weiser coined the term 'ubiquitous computing' to describe an environment where computers have disappeared and are integrated into physical objects. Much industry research today is concerned with ubiquitous computing in the work and home environments. A ubiquitous computing environment would facilitate mobility by allowing information users to easily access and use information anytime, anywhere. As war fighters are inherently mobile, the question is what effect a ubiquitous computing environment would have on current military operations and doctrine. And, if ubiquitous computing is viewed as beneficial for the military, what research would be necessary to achieve a military ubiquitous computing environment? What is a vision for the use of mobile information access in a battle space? Are there different requirements for civilian and military users of this technology? What are those differences? Are there opportunities for research that will support both worlds? What type of research has been supported by the military and what areas need to be investigated? Although we don't yet have all the answers to these questions, this paper discusses the issues and presents the work we are doing to address them.

  8. Influence of moving visual environment on sit-to-stand kinematics in children and adults.

    PubMed

    Slaboda, Jill C; Barton, Joseph E; Keshner, Emily A

    2009-08-01

    The effect of visual field motion on the sit-to-stand kinematics of adults and children was investigated. Children (8 to 12 years of age) and adults (21 to 49 years of age) were seated in a virtual environment that rotated in the pitch and roll directions. Participants stood up (1) concurrent with the onset of visual motion, (2) after an immersion period in the moving visual environment, or (3) without visual input. Angular velocities of the head with respect to the trunk, and of the trunk with respect to the environment, were calculated, as were the head and trunk centers of mass. Both adults and children reduced head and trunk angular velocity after immersion in the moving visual environment. Unlike adults, children demonstrated significant differences in displacement of the head center of mass during the immersion and concurrent trials when compared to trials without visual input. Results suggest a time-dependent effect of vision on sit-to-stand kinematics in adults, whereas children are influenced by the immediate presence or absence of vision.

  9. How soft is that pillow? The perceptual localization of the hand and the haptic assessment of contact rigidity.

    PubMed

    Pressman, Assaf; Karniel, Amir; Mussa-Ivaldi, Ferdinando A

    2011-04-27

    A new haptic illusion is described, in which the location of the mobile object affects the perception of its rigidity. There is theoretical and experimental support for the notion that limb position sense results from the brain combining ongoing sensory information with expectations arising from prior experience. How does this probabilistic state information affect one's tactile perception of the environment mechanics? In a simple estimation process, human subjects were asked to report the relative rigidity of two simulated virtual objects. One of the objects remained fixed in space and had various coefficients of stiffness. The other virtual object had constant stiffness but moved with respect to the subjects. Earlier work suggested that the perception of an object's rigidity is consistent with a process of regression between the contact force and the perceived amount of penetration inside the object's boundary. The amount of penetration perceived by the subject was affected by varying the position of the object. This, in turn, had a predictable effect on the perceived rigidity of the contact. Subjects' reports on the relative rigidity of the object are best accounted for by a probabilistic model in which the perceived boundary of the object is estimated based on its current location and on past observations. Therefore, the perception of contact rigidity is accounted for by a stochastic process of state estimation underlying proprioceptive localization of the hand.
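The regression account of perceived rigidity described in this abstract can be sketched numerically: rigidity is modeled as the slope relating contact force to perceived penetration, so mislocalizing the object's boundary shifts the apparent penetration and hence the perceived stiffness. The function and values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def perceived_stiffness(forces, penetrations):
    """Rigidity as the zero-intercept least-squares slope of contact
    force against perceived penetration depth; a minimal sketch of the
    regression account described in the abstract (illustrative only)."""
    f = np.asarray(forces, dtype=float)
    x = np.asarray(penetrations, dtype=float)
    return float((x @ f) / (x @ x))   # slope through the origin

# A virtual wall of stiffness 300 N/m. Mislocalizing the wall boundary
# by 2 mm inflates the perceived penetration and lowers perceived rigidity.
x = np.linspace(0.001, 0.01, 10)               # true penetration (m)
f = 300.0 * x                                  # contact force (N)
k_true = perceived_stiffness(f, x)             # recovers 300 N/m
k_shifted = perceived_stiffness(f, x + 0.002)  # boundary shifted 2 mm
```

With the shifted boundary the same forces appear to act over deeper penetrations, so the estimated slope, and thus the perceived stiffness, drops, matching the illusion the abstract reports.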

  10. A Hexapod Robot to Demonstrate Mesh Walking in a Microgravity Environment

    NASA Technical Reports Server (NTRS)

    Foor, David C.

    2005-01-01

    The JPL Micro-Robot Explorer (MRE) Spiderbot is a robot that takes advantage of its small size to perform precision tasks suitable for space applications. The Spiderbot is a legged robot that can traverse harsh terrain otherwise inaccessible to wheeled robots. A team of Spiderbots can network and collaborate to successfully complete a set of tasks. The Spiderbot is designed and developed to demonstrate hexapods that can walk on flat surfaces, crawl on meshes, and assemble simple structures. The robot has six legs, each consisting of two spring-compliant joints and a gripping actuator. A hard-coded set of gaits allows the robot to move smoothly along the mesh in a zero-gravity environment. The primary objective of this project is to create a Spiderbot that traverses a flexible, deployable mesh, for use in space repair. Verification of this task will take place aboard a zero-gravity test flight. The secondary objective is to use feedback from the joints to allow the robot to test each arm for a successful grip of the mesh. The end result of this research lends itself to a fault-tolerant robot suitable for a wide variety of space applications.

  11. What makes a movement a gesture?

    PubMed

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. The CAVE (TM) automatic virtual environment: Characteristics and applications

    NASA Technical Reports Server (NTRS)

    Kenyon, Robert V.

    1995-01-01

    Virtual reality may best be defined as the wide-field presentation of computer-generated, multi-sensory information that tracks a user in real time. In addition to the more well-known modes of virtual reality -- head-mounted displays and boom-mounted displays -- the Electronic Visualization Laboratory at the University of Illinois at Chicago recently introduced a third mode: a room constructed from large screens on which the graphics are projected onto three walls and the floor. The CAVE is a multi-person, room-sized, high-resolution, 3D video and audio environment. Graphics are rear projected in stereo onto three walls and the floor, and viewed with stereo glasses. As a viewer wearing a location sensor moves within its display boundaries, the correct perspective and stereo projections of the environment are updated, and the image moves with and surrounds the viewer. The other viewers in the CAVE are like passengers in a bus, along for the ride. 'CAVE,' the name selected for the virtual reality theater, is both a recursive acronym (Cave Automatic Virtual Environment) and a reference to 'The Simile of the Cave' found in Plato's 'Republic,' in which the philosopher explores the ideas of perception, reality, and illusion. Plato used the analogy of a person facing the back of a cave alive with shadows that are his/her only basis for ideas of what real objects are. Rather than having evolved from video games or flight simulation, the CAVE has its motivation rooted in scientific visualization and the SIGGRAPH 92 Showcase effort. The CAVE was designed to be a useful tool for scientific visualization. The Showcase event was an experiment; the Showcase chair and committee advocated an environment for computational scientists to interactively present their research at a major professional conference in a one-to-many format on high-end workstations attached to large projection screens. The CAVE was developed as a 'virtual reality theater' with scientific content and projection that met the criteria of Showcase.

  13. Vacuum force

    NASA Astrophysics Data System (ADS)

    Han, Yongquan

    2015-03-01

    To study the vacuum force, we must first clarify what a vacuum is: a space containing neither air nor radiation. No absolute vacuum exists in space; a vacuum is always relative, and therefore the vacuum force is relative as well. When two spaces are compared and one is a relative vacuum, a vacuum force must exist, and its direction points toward the vacuum region. Any object rotates and radiates. Rotation bends radiation centripetally, producing gravity, which is relative; the non-gravitational counterpart is the vacuum force. Gravity is centripetal: it is a tendency of the attracted object toward centripetal motion, or a state of already performing such motion. Because any object moves, gravity makes objects follow curved paths. That is, within the radiation range of a gravitating object, curved motion must occur; gravity exists in the non-vacuum region and makes objects in that region move along curves (for example, the Earth moving around the Sun), or finally fall to the attracting object and remain relatively static with it (for example, objects on the Earth move but cannot reach the first cosmic speed).

  14. Dynamic polarization vision in mantis shrimps

    PubMed Central

    Daly, Ilse M.; How, Martin J.; Partridge, Julian C.; Temple, Shelby E.; Marshall, N. Justin; Cronin, Thomas W.; Roberts, Nicholas W.

    2016-01-01

    Gaze stabilization is an almost ubiquitous animal behaviour, one that is required to see the world clearly and without blur. Stomatopods, however, only fix their eyes on scenes or objects of interest occasionally. Almost uniquely among animals, they explore their visual environment with a series of pitch, yaw and torsional (roll) rotations of their eyes, where each eye may also move largely independently of the other. In this work, we demonstrate that the torsional rotations are used to actively enhance their ability to see the polarization of light. Both Gonodactylus smithii and Odontodactylus scyllarus rotate their eyes to align particular photoreceptors relative to the angle of polarization of a linearly polarized visual stimulus, thereby maximizing the polarization contrast between an object of interest and its background. This is the first documented example of any animal displaying dynamic polarization vision, in which the polarization information is actively maximized through rotational eye movements. PMID:27401817

  15. Spatiotemporal motion boundary detection and motion boundary velocity estimation for tracking moving objects with a moving camera: a level sets PDEs approach with concurrent camera motion compensation.

    PubMed

    Feghali, Rosario; Mitiche, Amar

    2004-11-01

    The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.

  16. Fan filters, the 3-D Radon transform, and image sequence analysis.

    PubMed

    Marzetta, T L

    1994-01-01

    This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
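The slowness constraint in this abstract (a moving object's plane-wave slowness s satisfies s · v = 1 for object velocity v) implies that every supporting slowness lies at distance at least 1/|v| from the origin, which is exactly what lets an ideal fan filter reject all objects slower than a cutoff speed regardless of heading. A small NumPy sketch of that geometry, with illustrative parameters rather than the paper's construction:

```python
import numpy as np

def supporting_slownesses(velocity, n=181):
    """Sample the line s . v = 1 in 2-D slowness space: the set of
    plane-wave slownesses carried by an object moving with the given
    velocity (illustrative parameterization of the abstract's model)."""
    v = np.asarray(velocity, dtype=float)
    speed2 = v @ v
    s0 = v / speed2                        # closest point to origin, |s0| = 1/|v|
    t = np.tan(np.linspace(-0.4 * np.pi, 0.4 * np.pi, n))   # offsets along the line
    perp = np.array([-v[1], v[0]]) / np.sqrt(speed2)        # unit vector along the line
    return s0 + np.outer(t, perp)          # every row satisfies s . v = 1

def ideal_fan_response(s, cutoff_speed):
    """Ideal fan filter: pass a plane wave only if |s| <= 1/cutoff_speed,
    i.e. reject everything slower than the cutoff speed."""
    return np.linalg.norm(s, axis=-1) <= 1.0 / cutoff_speed

cutoff = 10.0                              # cutoff speed (e.g. pixels/frame)
slow = supporting_slownesses([3.0, 4.0])   # speed 5 < cutoff
fast = supporting_slownesses([9.0, 12.0])  # speed 15 > cutoff
slow_passed = ideal_fan_response(slow, cutoff)   # all False: fully rejected
fast_passed = ideal_fan_response(fast, cutoff)   # partly True: attenuated only
```

This reproduces the behaviour the abstract describes: the slow object's entire plane-wave support falls outside the pass region, while the fast object keeps the near-origin part of its support and is only partially attenuated.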

  17. Robust mobility in human-populated environments

    NASA Astrophysics Data System (ADS)

    Gonzalez, Juan Pablo; Phillips, Mike; Neuman, Brad; Likhachev, Max

    2012-06-01

    Creating robots that can help humans in a variety of tasks requires robust mobility and the ability to safely navigate among moving obstacles. This paper presents an overview of recent research in the Robotics Collaborative Technology Alliance (RCTA) that addresses many of the core requirements for robust mobility in human-populated environments. Safe Interval Path Planning (SIPP) allows for very fast planning in dynamic environments when planning time-minimal trajectories. Generalized Safe Interval Path Planning extends this concept to trajectories that minimize arbitrary cost functions. Finally, the generalized PPCP algorithm is used to generate plans that reason about the uncertainty in the predicted trajectories of moving obstacles and try to actively disambiguate the intentions of humans whenever necessary. We show how these approaches consider moving obstacles and temporal constraints and produce high-fidelity paths. Experiments in simulated environments show the performance of the algorithms under different controlled conditions, and experiments on physical mobile robots interacting with humans show how the algorithms perform under the uncertainties of the real world.
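The safe-interval bookkeeping at the heart of SIPP can be illustrated in a few lines: given the time intervals during which predicted obstacle trajectories occupy a grid cell, the cell's safe intervals are the complementary gaps in time. The function and numbers below are an illustrative sketch, not the RCTA implementation:

```python
def safe_intervals(occupied, horizon):
    """Compute the safe intervals of a single grid cell from the
    half-open time intervals [start, end) during which moving obstacles
    occupy it; the core bookkeeping step behind SIPP (illustrative)."""
    intervals, t = [], 0
    for start, end in sorted(occupied):
        if start > t:                  # a gap before this occupation
            intervals.append((t, start))
        t = max(t, end)                # skip past the occupied span
    if t < horizon:                    # trailing gap up to the horizon
        intervals.append((t, horizon))
    return intervals

# A cell blocked during [3, 5) and [8, 9) over a 12-step horizon:
free = safe_intervals([(3, 5), (8, 9)], 12)   # [(0, 3), (5, 8), (9, 12)]
```

A SIPP planner then searches over (cell, safe-interval) pairs instead of (cell, timestep) pairs, which is what makes it fast in dynamic environments.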

  18. Changes in Objectively-Determined Walkability and Physical Activity in Adults: A Quasi-Longitudinal Residential Relocation Study.

    PubMed

    McCormack, Gavin R; McLaren, Lindsay; Salvo, Grazia; Blackstaffe, Anita

    2017-05-22

    Causal evidence for the built environment's role in supporting physical activity is needed to inform land use and transportation policies. This quasi-longitudinal residential relocation study compared within-person changes in self-reported transportation walking, transportation cycling, and overall physical activity during the past 12 months among adults who did and did not move to a different neighbourhood. In 2014, a random sample of adults from 12 neighbourhoods (Calgary, AB, Canada) with varying urban form and socioeconomic status provided complete self-administered questionnaire data (n = 915). Participants, some of whom moved neighbourhood during the past 12 months (n = 95), reported their perceived change in transportation walking and cycling, and overall physical activity during that period. The questionnaire also captured residential self-selection, and sociodemographic and health characteristics. Walk Scores® were linked to each participant's current and previous neighbourhood, and three groups were identified: walkability "improvers" (n = 48), "decliners" (n = 47), and "maintainers" (n = 820). Perceived change in physical activity was compared between the three groups using propensity score covariate-adjusted Firth logistic regression (odds ratios: OR). Compared with walkability maintainers, walkability decliners (OR 4.37) and improvers (OR 4.14) were more likely (p < 0.05) to report an increase in their transportation walking since moving neighbourhood, while walkability decliners were also more likely (OR 3.17) to report decreasing their transportation walking since moving. Walkability improvers were more likely than maintainers to increase their transportation cycling since moving neighbourhood (OR 4.22). Temporal changes in neighbourhood walkability resulting from residential relocation appear to be associated with reported temporal changes in transportation walking and cycling in adults.

  19. How a visual surveillance system hypothesizes how you behave.

    PubMed

    Micheloni, C; Piciarelli, C; Foresti, G L

    2006-08-01

    In the last few years, the installation of a large number of cameras has led to a need for increased capabilities in video surveillance systems. It has, indeed, become more and more necessary for human operators to be helped in understanding ongoing activities in real environments. Nowadays, the technology and the research in the machine vision and artificial intelligence fields allow one to expect a new generation of completely autonomous systems able to recognize the behaviors of entities such as pedestrians, vehicles, and so forth. Hence, whereas the sensing aspect of these systems has received the most attention so far, research is now focused mainly on more newsworthy problems concerning understanding. In this article, we present a novel method for hypothesizing the evolution of behavior. For such purposes, the system is required to extract useful information by means of low-level techniques for detecting and maintaining track of moving objects. The further estimation of performed trajectories, together with object classification, enables one to compute the probability distribution of the normal activities (e.g., trajectories). Such a distribution is defined by means of a novel clustering technique. The resulting clusters are used to estimate the evolution of objects' behaviors and to speculate about any intention to act dangerously. The provided solution for hypothesizing behaviors occurring in real environments was tested in the context of an outdoor parking lot.

  20. Illusory object motion in the centre of a radial pattern: The Pursuit-Pursuing illusion.

    PubMed

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed.

  1. Interpretation of the function of the striate cortex

    NASA Astrophysics Data System (ADS)

    Garner, Bernardette M.; Paplinski, Andrew P.

    2000-04-01

    Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements of the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision, as is seen in many experiments. It is proposed that the neurons in the striate cortex allow the continuous angle changes needed to compensate for changes in orientation of the head and eyes and for the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for the translation, some rotation and scaling of objects, and provides a continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.

  2. Exhausting Attentional Tracking Resources with a Single Fast-Moving Object

    ERIC Educational Resources Information Center

    Holcombe, Alex O.; Chen, Wei-Ying

    2012-01-01

    Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…

  3. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in the world coordinate system are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
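The position/velocity estimation step above uses an extended Kalman filter in world coordinates. As a simplified, hypothetical illustration of the same predict/update cycle, here is a linear constant-velocity filter for a 1-D track (all matrices and noise values are assumptions for the sketch, not the paper's parameters):

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update cycle of a linear constant-velocity Kalman
    filter for a 1-D tracked position; a simplified stand-in for the
    extended Kalman filter used in the paper."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: (position, velocity)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise (assumed value)
    R = np.array([[r]])                     # measurement noise (assumed value)
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving at roughly 2 units/frame from noisy readings.
x, P = np.array([0.0, 0.0]), 10.0 * np.eye(2)
for z in [2.1, 3.9, 6.0, 8.1, 9.9]:
    x, P = kalman_cv_step(x, P, np.array([z]))
```

After five noisy position measurements the state converges to roughly position 10 and velocity 2, the underlying motion of the synthetic track.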

  4. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.

    PubMed

    Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
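The AOI gap tolerance (AGT) concept in this abstract amounts to padding a dynamic area of interest before testing whether a fixation falls inside it, so that eye-tracker visual-angle error does not drop fixations that land just outside the object. A rectangular-AOI sketch with illustrative names and numbers, not the paper's code:

```python
from dataclasses import dataclass

@dataclass
class RectAOI:
    """Rectangular area of interest around a moving on-screen object.
    The gap tolerance (agt) pads the rectangle to absorb eye-tracker
    visual-angle error; fields and values here are illustrative."""
    x: float   # left edge
    y: float   # top edge
    w: float   # width
    h: float   # height

    def contains(self, px, py, agt=0.0):
        """True if fixation (px, py) lies within the AGT-padded AOI."""
        return (self.x - agt <= px <= self.x + self.w + agt and
                self.y - agt <= py <= self.y + self.h + agt)

# An aircraft blip occupying a 20x10 px box at (100, 100). A fixation
# 4 px outside the box is captured once the AGT exceeds the offset.
aoi = RectAOI(100, 100, 20, 10)
hit_tight = aoi.contains(96, 105, agt=0.0)   # misses without tolerance
hit_padded = aoi.contains(96, 105, agt=5.0)  # captured with a 5-px AGT
```

Larger AGT values capture more fixations but make overlapping AOIs ambiguous, which is why the abstract describes searching for a near-optimal AGT.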

  5. Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects

    PubMed Central

    Mandal, Saptarshi

    2016-01-01

    Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance. PMID:27725830

  6. A solution for two-dimensional mazes with use of chaotic dynamics in a recurrent neural network model.

    PubMed

    Suemitsu, Yoshikazu; Nara, Shigetoshi

    2004-09-01

    Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at t to t + 1 by a simply defined motion function calculated from firing patterns of the neural network model at each time step t. We have embedded in our neural network model several prototype attractors that correspond to simple motions of the object oriented toward several directions in two-dimensional space. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.

  7. Gamma-Ray imaging for nuclear security and safety: Towards 3-D gamma-ray vision

    NASA Astrophysics Data System (ADS)

    Vetter, Kai; Barnowksi, Ross; Haefner, Andrew; Joshi, Tenzing H. Y.; Pavlovsky, Ryan; Quiter, Brian J.

    2018-01-01

    The development of portable gamma-ray imaging instruments in combination with the recent advances in sensor and related computer vision technologies enables unprecedented capabilities in the detection, localization, and mapping of radiological and nuclear materials in complex environments relevant for nuclear security and safety. Though multi-modal imaging has been established in medicine and biomedical imaging for some time, the potential of multi-modal data fusion for radiological localization and mapping problems in complex indoor and outdoor environments remains to be explored in detail. In contrast to the well-defined settings in medical or biological imaging, with their small fields of view and well-constrained radiation fields, in many radiological search and mapping scenarios the radiation fields are not constrained and objects and sources are not necessarily known prior to the measurement. The ability to fuse radiological with contextual or scene data in three dimensions, analogous to the fusion of radiological and functional imaging with anatomical imaging in medicine, provides new capabilities enhancing image clarity, context, quantitative estimates, and visualization of the data products. We have developed new means to register and fuse gamma-ray imaging with contextual data from portable or moving platforms. These developments enhance detection and mapping capabilities as well as provide unprecedented visualization of complex radiation fields, moving us one step closer to the realization of gamma-ray vision in three dimensions.

  8. Identification of a self-paced hitting task in freely moving rats based on adaptive spike detection from multi-unit M1 cortical signals

    PubMed Central

    Hammad, Sofyan H. H.; Farina, Dario; Kamavuako, Ernest N.; Jensen, Winnie

    2013-01-01

    Invasive brain–computer interfaces (BCIs) may prove to be a useful rehabilitation tool for severely disabled patients. Although some systems have been shown to work well in restricted laboratory settings, their usefulness must be tested in less controlled environments. Our objective was to investigate whether a specific motor task could reliably be detected from multi-unit intra-cortical signals from freely moving animals. Four rats were trained to hit a retractable paddle (defined as a “hit”). Intra-cortical signals were obtained from electrodes placed in the primary motor cortex. First, the signal-to-noise ratio was increased by wavelet denoising. Action potentials were then detected using an adaptive threshold, counted in three consecutive time intervals, and the counts were used as features to classify either a “hit” or a “no-hit” (defined as an interval between two “hits”). We found that a “hit” could be detected with an accuracy of 75 ± 6% when wavelet denoising was applied, whereas the accuracy dropped to 62 ± 5% without prior denoising. We compared our approach with the common daily practice in BCI that consists of using a fixed, manually selected threshold for spike detection without denoising. The results showed the feasibility of detecting a motor task in a less restricted environment than commonly applied within invasive BCI research. PMID:24298254
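
The detection pipeline described above (adaptive threshold, then spike counts in three consecutive intervals as features) can be sketched as follows. The median-based noise estimate and all parameter values are common defaults assumed here, not taken from the paper, and wavelet denoising is omitted:

```python
import numpy as np

def adaptive_threshold(x, k=4.0):
    # Robust noise estimate from the median absolute value (a common
    # choice in spike detection; the paper's exact rule is not given here).
    sigma = np.median(np.abs(x)) / 0.6745
    return k * sigma

def detect_spikes(x, thr):
    # Indices where the signal crosses the threshold upward.
    above = x > thr
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def bin_counts(spike_idx, n_samples, n_bins=3):
    # Spike counts in three consecutive intervals -> feature vector
    # for the hit / no-hit classifier.
    edges = np.linspace(0, n_samples, n_bins + 1)
    return np.histogram(spike_idx, bins=edges)[0]

# Synthetic single-channel recording with four injected "spikes".
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 3000)
x[[500, 1500, 1600, 2500]] += 20.0
thr = adaptive_threshold(x)
idx = detect_spikes(x, thr)
features = bin_counts(idx, len(x))
```

The resulting three-element count vector would then be fed to a classifier trained on labelled "hit" and "no-hit" intervals.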

  9. A Mixed-Methods Evaluation of the "Move It Move It!" Before-School Incentive-Based Physical Activity Programme

    ERIC Educational Resources Information Center

    Garnett, Bernice R.; Becker, Kelly; Vierling, Danielle; Gleason, Cara; DiCenzo, Danielle; Mongeon, Louise

    2017-01-01

    Objective: Less than half of young people in the USA are meeting the daily physical activity requirements of at least 60 minutes of moderate or vigorous physical activity. A mixed-methods pilot feasibility assessment of "Move it Move it!" was conducted in the Spring of 2014 to assess the impact of a before-school physical activity…

  10. MoveU? Assessing a Social Marketing Campaign to Promote Physical Activity

    ERIC Educational Resources Information Center

    Scarapicchia, Tanya M. F.; Sabiston, Catherine M. F.; Brownrigg, Michelle; Blackburn-Evans, Althea; Cressy, Jill; Robb, Janine; Faulkner, Guy E. J.

    2015-01-01

    Objective: MoveU is a social marketing initiative aimed at increasing moderate-to-vigorous physical activity (MVPA) among undergraduate students. Using the Hierarchy of Effects model (HOEM), this study identified awareness of MoveU and examined associations between awareness, outcome expectations, self-efficacy, intentions, and MVPA. Participants:…

  11. A Modification to the Computer Generated Acquisition Documents System (CGADS) for Microcomputer Use in a Program Office Environment.

    DTIC Science & Technology

    1985-09-01

    FILL. MOVE ALPHA-RESPONSE TO RESPONSE. 221C-RUN-TASKS-EXIT. EXIT. 2220-DISPLAY-TASK-MENU. PERFORM 5000-DETER-MISC-TASK-VALS. MOVE 1 TO ANSWER-FILE-KEY...INDEX-FIELD-2 ELSE MOVE 4 TO ANSWER-FILE-KEY SUBTRACT 200 FROM INDEX-FIELD-2. 5000-DETER-MISC-TASK-VALS. IF AREA-NUMBER a ŕ" MOVE 1 TO TASK-FILE-REC-NUM

  12. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, as a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at a distance from the user where the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation is used, if it exists. Before rendering, objects are conservatively culled against the view frustum using the representation with the largest volume. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.
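
The distance-based switching policy described above can be sketched as a small selection function. The numeric ranges and representation names are illustrative placeholders, since the paper switches at the distance where the user starts to perceive the object's internal depth rather than at a fixed range:

```python
def choose_representation(distance, has_model, interacting,
                          model_range=10.0, billboard_range=50.0):
    """Pick a representation for one object, following the LOD policy
    described in the abstract (thresholds here are placeholders)."""
    if interacting and has_model:
        return "3d-model"                 # interaction always prefers 3D
    if distance <= model_range and has_model:
        return "3d-model"                 # close range: full 3D model
    if distance <= billboard_range:
        return "billboard"                # intermediate range
    return "environment-map"              # far range
```

In a scene-graph traversal, such a function would be evaluated per object per frame, with the chosen node rendered and the others skipped.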

  13. Method and System for Object Recognition Search

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)

    2012-01-01

    A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.

  14. Motion Alters Color Appearance

    PubMed Central

    Hong, Sang-Wook; Kang, Min-Suk

    2016-01-01

    Chromatic induction compellingly demonstrates that chromatic context as well as spectral lights reflected from an object determines its color appearance. Here, we show that when one colored object moves around an identical stationary object, the perceived saturation of the stationary object decreases dramatically whereas the saturation of the moving object increases. These color appearance shifts in the opposite directions suggest that normalization induced by the object’s motion may mediate the shift in color appearance. We ruled out other plausible alternatives such as local adaptation, attention, and transient neural responses that could explain the color shift without assuming interaction between color and motion processing. These results demonstrate that the motion of an object affects both its own color appearance and the color appearance of a nearby object, suggesting a tight coupling between color and motion processing. PMID:27824098

  15. A Comparison of Reimbursement Recommendations by European HTA Agencies: Is There Opportunity for Further Alignment?

    PubMed Central

    Allen, Nicola; Liberti, Lawrence; Walker, Stuart R.; Salek, Sam

    2017-01-01

    Introduction: In Europe and beyond, the rising costs of healthcare and limited healthcare resources have resulted in the implementation of health technology assessment (HTA) to inform health policy and reimbursement decision-making. European legislation has provided a harmonized route for the regulatory process with the European Medicines Agency, but reimbursement decision-making remains the responsibility of each country. There is a recognized need to move toward a more objective and collaborative reimbursement environment for new medicines in Europe. Therefore, the aim of this study was to objectively assess and compare the national reimbursement recommendations of 9 European jurisdictions following European Medicines Agency (EMA) recommendation for centralized marketing authorization. Methods: Using publicly available data and newly developed classification tools, this study appraised 9 European reimbursement systems by assessing HTA processes and the relationship between the regulatory, HTA and decision-making organizations. Each national HTA agency was classified according to two novel taxonomies. The System taxonomy focuses on the position of the HTA agency within the national reimbursement system according to the relationship between the regulator, the HTA-performing agency, and the reimbursement decision-making coverage body. The HTA Process taxonomy distinguishes between the individual HTA agency's approach to economic and therapeutic evaluation and the inclusion of an independent appraisal step. The taxonomic groups were subsequently compared with national HTA recommendations. Results: This study identified European national reimbursement recommendations for 102 new active substances (NASs) approved by the EMA from 2008 to 2012. These reimbursement recommendations were compared using a novel classification tool and identified alignment between the organizational structure of reimbursement systems (System taxonomy) and HTA recommendations. 
However, there was less alignment between the HTA processes and recommendations. Conclusions: In order to move forward to a more harmonized HTA environment within Europe, it is first necessary to understand the variation in HTA practices within Europe. This study has identified alignment between HTA recommendations and the System taxonomy and one of the major implications of this study is that such alignment could support a more collaborative HTA environment in Europe. PMID:28713265

  16. A Comparison of Reimbursement Recommendations by European HTA Agencies: Is There Opportunity for Further Alignment?

    PubMed

    Allen, Nicola; Liberti, Lawrence; Walker, Stuart R; Salek, Sam

    2017-01-01

    Introduction: In Europe and beyond, the rising costs of healthcare and limited healthcare resources have resulted in the implementation of health technology assessment (HTA) to inform health policy and reimbursement decision-making. European legislation has provided a harmonized route for the regulatory process with the European Medicines Agency, but reimbursement decision-making remains the responsibility of each country. There is a recognized need to move toward a more objective and collaborative reimbursement environment for new medicines in Europe. Therefore, the aim of this study was to objectively assess and compare the national reimbursement recommendations of 9 European jurisdictions following European Medicines Agency (EMA) recommendation for centralized marketing authorization. Methods: Using publicly available data and newly developed classification tools, this study appraised 9 European reimbursement systems by assessing HTA processes and the relationship between the regulatory, HTA and decision-making organizations. Each national HTA agency was classified according to two novel taxonomies. The System taxonomy focuses on the position of the HTA agency within the national reimbursement system according to the relationship between the regulator, the HTA-performing agency, and the reimbursement decision-making coverage body. The HTA Process taxonomy distinguishes between the individual HTA agency's approach to economic and therapeutic evaluation and the inclusion of an independent appraisal step. The taxonomic groups were subsequently compared with national HTA recommendations. Results: This study identified European national reimbursement recommendations for 102 new active substances (NASs) approved by the EMA from 2008 to 2012. These reimbursement recommendations were compared using a novel classification tool and identified alignment between the organizational structure of reimbursement systems (System taxonomy) and HTA recommendations. 
However, there was less alignment between the HTA processes and recommendations. Conclusions: In order to move forward to a more harmonized HTA environment within Europe, it is first necessary to understand the variation in HTA practices within Europe. This study has identified alignment between HTA recommendations and the System taxonomy and one of the major implications of this study is that such alignment could support a more collaborative HTA environment in Europe.

  17. Direct imaging and new technologies to search for substellar companions around MGs cool dwarfs

    NASA Astrophysics Data System (ADS)

    Gálvez-Ortiz, M. C.; Clarke, J. R. A.; Pinfield, D. J.; Folkes, S. L.; Jenkins, J. S.; García Pérez, A. E.; Burningham, B.; Day-Jones, A. C.; Jones, H. R. A.

    2011-07-01

    We describe here our project, based on a search for sub-stellar companions (brown dwarfs and exoplanets) around young ultra-cool dwarfs (UCDs) and the characterisation of their properties. We will use current and future technology (high-contrast imaging, high-precision Doppler determinations) from the ground and space (VLT, ELT and JWST) to find companions to young objects. Members of young moving groups (MGs) have clear advantages in this field. We compiled a catalogue of young UCD objects and studied their membership in five known young moving groups: Local Association (Pleiades moving group, 20-150 Myr), Ursa Major group (Sirius supercluster, 300 Myr), Hyades supercluster (600 Myr), IC 2391 supercluster (35 Myr) and Castor moving group (200 Myr). To assess them as members we used different kinematic and spectroscopic criteria.

  18. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which could leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  19. Achieving built-environment and active living goals through Music City Moves.

    PubMed

    Omishakin, Adetokunbo A; Carlat, Jennifer L; Hornsby, Shannon; Buck, Tracy

    2009-12-01

    Nashville, Tennessee, formed Music City Moves (MCM), an interdisciplinary, countywide partnership to implement its vision for the community: a metropolitan region where routine physical activity is a fundamental part of daily life for all residents. Music City Moves' main focus was the pursuit of changes in community planning policies to help shape Nashville's built environment and facilitate walking and bicycling. To complement this focus, MCM developed a suite of health programs to support physical activity in high-risk populations and a countywide promotional campaign designed to increase awareness and get people active through event participation. Nashville made considerable strides in improving policies and regulations related to building and site design to improve the built environment for pedestrians and cyclists, including passage of (1) specific plan zoning; (2) revised subdivision regulations that introduced a "walkable subdivision" option for developers; and (3) a community-character manual that will guide future land-use planning. Programs and promotions have increased awareness and participation, and the Tour de Nash bike/walk event showcases yearly changes in the built environment. Political leadership has been critical to MCM's success. Leadership of the partnership by the planning department facilitated regulatory changes in planning policies. Music City Moves has accelerated Nashville's movement to improve the built environment and encourage active living. The beneficial impact of policy changes will continue to be manifested in coming years; however, ongoing political support and education of stakeholders in the planning process will be necessary to ensure that planning policies are fully implemented.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winnek, D.F.

    A method and apparatus for making X-ray photographs which can be viewed in three dimensions with the use of a lenticular screen. The apparatus includes a linear tomograph having a moving X-ray source on one side of a support on which an object is to be placed so that X-rays can pass through the object to the opposite side of the support. A movable cassette on the opposite side of the support moves in a direction opposite to the direction of travel of the X-ray source as the source moves relative to the support. The cassette has an intensifying screen, a grating mask provided with uniformly spaced slots for passing X-rays, a lenticular member adjacent to the mask, and a photographic emulsion adjacent to the opposite side of the lenticular member. The cassette has a power device for moving the lenticular member and the emulsion relative to the mask a distance equal to the spacing between a pair of adjacent slots in the mask. The X-rays from the source, after passing through an object on the support, pass into the cassette through the slots of the mask and are focused on the photographic emulsion to result in a continuum of X-ray views of the object. When the emulsion is developed and viewed through the lenticular member, the object can be seen in three dimensions.

  1. Moving towards Inclusion? The First-Degree Results of Students with and without Disabilities in Higher Education in the UK: 1998-2005

    ERIC Educational Resources Information Center

    Pumfrey, Peter

    2008-01-01

    Is the currently selective UK higher education (HE) system becoming more inclusive? Between 1998/99 and 2004/05, in relation to talented students with disabilities, has the UK government's HE policy implementation moved HE towards achieving two of the government's key HE objectives for 2010? These objectives are: (a) increasing HE participation…

  2. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene cover only a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output of the algorithm could be compared with the artificially added stabilization errors.
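
The core step, estimating the global stabilization error from the tracked point list and flagging divergent points as moving objects, can be sketched as follows. The median-based estimate and the outlier threshold are assumptions for illustration; the original's contour search and matching details are not reproduced:

```python
import numpy as np

def estimate_stabilization_error(p_prev, p_curr, outlier_thresh=3.0):
    """Estimate the global image shift from matched high-contrast points
    and flag points that diverge from it (assumed moving objects)."""
    d = p_curr - p_prev                       # per-point displacement
    shift = np.median(d, axis=0)              # robust global estimate
    residual = np.linalg.norm(d - shift, axis=1)
    return shift, residual > outlier_thresh

# Ten tracked points: eight on solid ground, two on a moving object.
rng = np.random.default_rng(0)
p_prev = rng.uniform(0.0, 100.0, size=(10, 2))
p_curr = p_prev + np.array([2.0, -1.0])       # stabilization error
p_curr[8:] += np.array([10.0, 0.0])           # extra motion of the object
shift, moving = estimate_stabilization_error(p_prev, p_curr)
```

The median keeps the estimate robust as long as moving objects carry only a minority of the tracked points, matching the abstract's assumption.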

  3. Command Wire Sensor Measurements

    DTIC Science & Technology

    2012-09-01

    coupled with the extremely harsh terrain has meant that few of these techniques have proved robust enough when moved from the laboratory to the field...to image stationary objects and does not accurately image moving targets. Moving targets can be seriously distorted and displaced from their true...battlefield and for imaging of fixed targets. Moving targets can be detected with a SAR if they have a Doppler frequency shift greater than the

  4. The Effects of Exposure to Better Neighborhoods on Children: New Evidence from the Moving to Opportunity Experiment.

    PubMed

    Chetty, Raj; Hendren, Nathaniel; Katz, Lawrence F

    2016-04-01

    The Moving to Opportunity (MTO) experiment offered randomly selected families housing vouchers to move from high-poverty housing projects to lower-poverty neighborhoods. We analyze MTO's impacts on children's long-term outcomes using tax data. We find that moving to a lower-poverty neighborhood when young (before age 13) increases college attendance and earnings and reduces single parenthood rates. Moving as an adolescent has slightly negative impacts, perhaps because of disruption effects. The decline in the gains from moving with the age when children move suggests that the duration of exposure to better environments during childhood is an important determinant of children’s long-term outcomes.

  5. Multiple operating system rotation environment moving target defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Nathaniel; Thompson, Michael

    Systems and methods for providing a multiple operating system rotation environment ("MORE") moving target defense ("MTD") computing system are described. The MORE-MTD system provides enhanced computer system security through a rotation of multiple operating systems. The MORE-MTD system increases attacker uncertainty, increases the cost of attacking the system, reduces the likelihood of an attacker locating a vulnerability, and reduces the exposure time of any located vulnerability. The MORE-MTD environment is effectuated by rotation of the operating systems at a given interval. The rotating operating systems create a consistently changing attack surface for remote attackers.
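
The rotation at a given interval can be sketched as a simple round-robin schedule; the image names and the timing model below are illustrative, not from the patent:

```python
import itertools

def rotation_schedule(os_images, interval_s, total_s):
    """Round-robin operating-system rotation: one switch per interval,
    so the attack surface presented to a remote attacker keeps changing."""
    cycle = itertools.cycle(os_images)
    return [(t, next(cycle)) for t in range(0, total_s, interval_s)]

# Which OS is live during each 60-second interval of a 5-minute window.
schedule = rotation_schedule(["linux-a", "linux-b", "bsd"], 60, 300)
```

A shorter interval shrinks the exposure time of any vulnerability in a single OS, at the cost of more frequent hand-offs.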

  6. Congruity Effects in Time and Space: Behavioral and ERP Measures

    ERIC Educational Resources Information Center

    Teuscher, Ursina; McQuire, Marguerite; Collins, Jennifer; Coulson, Seana

    2008-01-01

    Two experiments investigated whether motion metaphors for time affected the perception of spatial motion. Participants read sentences either about literal motion through space or metaphorical motion through time written from either the ego-moving or object-moving perspective. Each sentence was followed by a cartoon clip. Smiley-moving clips showed…

  7. Context-aware pattern discovery for moving object trajectories

    NASA Astrophysics Data System (ADS)

    Sharif, Mohammad; Asghar Alesheikh, Ali; Kaffash Charandabi, Neda

    2018-05-01

    The movement of point objects is highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns while accounting for contextual information helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering moving-object patterns is analyzing the similarities of their trajectories. This article therefore contextualizes the similarity measure of trajectories by considering not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. Then, the results of similarity searches are utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets obtained from commercial airplanes and the weather information during the flights. The results demonstrated the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information, because it can enhance or restrict movements.
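
A plain DTW implementation of the kind the article employs might look as follows. This is the textbook recurrence, not the article's exact multi-dimensional variant; stacking position and context variables into one feature row is an assumption:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping between two trajectories; each row is one
    observation (e.g. position, possibly with context variables)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # accumulated-cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A trajectory and a time-warped copy of it align with zero cost.
a = np.array([[0.0], [1.0], [2.0]])
b = np.array([[0.0], [1.0], [1.0], [2.0]])
d_ab = dtw_distance(a, b)
```

Because DTW aligns samples non-linearly in time, two flights through the same corridor at different speeds still score as similar, which is what makes it suitable for trajectory similarity search.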

  8. Tracking moving targets behind a scattering medium via speckle correlation.

    PubMed

    Guo, Chengfei; Liu, Jietao; Wu, Tengfei; Zhu, Lei; Shao, Xiaopeng

    2018-02-01

    Tracking moving targets behind a scattering medium is a challenge, and it has many important applications in various fields. Owing to multiple scattering, instead of the object image, only a random speckle pattern can be received on the camera when light passes through highly scattering layers. Significantly, an important feature of speckle patterns has been found: the target information can be derived from the speckle correlation. In this work, inspired by notions used in computer vision and deformation detection, we demonstrate through specific simulations and experiments a simple object-tracking method in which, by using the speckle correlation, the movement of a hidden object can be tracked in the lateral and axial directions. In addition, the rotation state of the moving target can also be recognized by utilizing the autocorrelation of the speckle pattern. This work will be beneficial for biomedical applications in the fields of quantitative analysis of the working mechanisms of a micro-object and the acquisition of dynamical information of micro-object motion.
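
The speckle-correlation idea for lateral tracking can be sketched with an FFT-based cross-correlation. This is a toy version using a synthetic circularly shifted frame; real speckle data would need windowing, normalisation, and the paper's experimental processing chain:

```python
import numpy as np

def speckle_shift(ref, cur):
    """Estimate the lateral shift between two speckle frames from the
    peak of their circular cross-correlation (computed via FFT)."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(cur)
    corr = np.fft.ifft2(F).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak coordinates into signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Stand-in "speckle" frames: a random field and a circularly shifted copy,
# mimicking the lateral translation of a hidden target.
rng = np.random.default_rng(2)
ref = rng.normal(size=(64, 64))
cur = np.roll(ref, (5, -3), axis=(0, 1))
shift = speckle_shift(ref, cur)
```

Because the speckle pattern translates with the hidden object (within the memory effect), the correlation peak's offset directly reads out the lateral motion.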

  9. Sustained multifocal attentional enhancement of stimulus processing in early visual areas predicts tracking performance.

    PubMed

    Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K

    2013-03-20

    Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.

  10. Evaluation of an intelligent wheelchair system for older adults with cognitive impairments

    PubMed Central

    2013-01-01

    Background Older adults are the most prevalent wheelchair users in Canada. Yet, cognitive impairments may prevent an older adult from being allowed to use a powered wheelchair due to safety and usability concerns. To address this issue, an add-on Intelligent Wheelchair System (IWS) was developed to help older adults with cognitive impairments drive a powered wheelchair safely and effectively. When attached to a powered wheelchair, the IWS adds a vision-based anti-collision feature that prevents the wheelchair from hitting obstacles and a navigation assistance feature that plays audio prompts to help users manoeuvre around obstacles. Methods A two stage evaluation was conducted to test the efficacy of the IWS. Stage One: Environment of Use – the IWS’s anti-collision and navigation features were evaluated against objects found in a long-term care facility. Six different collision scenarios (wall, walker, cane, no object, moving and stationary person) and three different navigation scenarios (object on left, object on right, and no object) were performed. Signal detection theory was used to categorize the response of the system in each scenario. Stage Two: User Trials – single-subject research design was used to evaluate the impact of the IWS on older adults with cognitive impairment. Participants were asked to drive a powered wheelchair through a structured obstacle course in two phases: 1) with the IWS and 2) without the IWS. Measurements of safety and usability were taken and compared between the two phases. Visual analysis and phase averages were used to analyze the single-subject data. Results Stage One: The IWS performed correctly for all environmental anti-collision and navigation scenarios. Stage Two: Two participants completed the trials. The IWS was able to limit the number of collisions that occurred with a powered wheelchair and lower the perceived workload for driving a powered wheelchair. 
However, the objective performance (time to complete course) of users navigating their environment did not improve with the IWS. Conclusions This study shows the efficacy of the IWS in performing with a potential environment of use, and benefiting members of its desired user population to increase safety and lower perceived demands of powered wheelchair driving. PMID:23924489
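
The signal-detection-theory scoring of each anti-collision scenario can be sketched as follows; the function and argument names are illustrative, since the abstract states only that responses were categorized using signal detection theory:

```python
def sdt_category(object_present, system_stopped):
    """Score one anti-collision trial with signal-detection-theory
    terminology: did the system stop when an obstacle was present?"""
    if object_present:
        return "hit" if system_stopped else "miss"
    return "false alarm" if system_stopped else "correct rejection"
```

Under this scoring, the reported Stage One result (correct performance in every scenario) corresponds to all trials falling into the "hit" or "correct rejection" cells.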

  11. Metals, Health and the Environment – Emergence of Correlations Between Speciation and Effects

    PubMed Central

    Williams, David R.

    2004-01-01

    Over the last half-century, both the identification of the causes of diseases and the use of inorganic compounds to treat such conditions have been considerably enlightened through our emerging capabilities to identify the pivotal chemical species involved. The ‘duty of care’ placed upon scientists to protect the environment from manufactured chemicals, and to limit their effects upon humans, is best realised from a speciation knowledge database. This paper discusses categorising chemicals in terms of their persistence, bioaccumulation, and toxicities, and uses speciation information to optimise desirable effects of chemicals in several applications, such as the manufacture of pulp for paper and the foliar nutrition of crops. At the same time, the chemical-wasting side effects of industrial overdosing are easily avoided if speciation approaches are used. The move towards new environmentally friendly ligand agents is described, as are methods of finding substitute agents (often combinations of two or more chemicals) to replace non-biodegradable EDTA. The geosphere migration of metals through the environment is discussed in terms of speciation. Future objectives discussed include improved means of communicating speciation-based recommendations to decision makers. PMID:18365083

  12. Standards-Based Wireless Sensor Networking Protocols for Spaceflight Applications

    NASA Technical Reports Server (NTRS)

    Barton, Richard J.; Wagner, Raymond S.

    2009-01-01

    Wireless sensor networks (WSNs) have the capacity to revolutionize data gathering in both spaceflight and terrestrial applications. WSNs provide a huge advantage over traditional, wired instrumentation since they do not require wiring trunks to connect sensors to a central hub. This allows for easy sensor installation in hard-to-reach locations, easy expansion of the number of sensors or sensing modalities, and reduction in both system cost and weight. While this technology offers unprecedented flexibility and adaptability, implementing it in practice is not without its difficulties. Any practical WSN deployment must contend with a number of difficulties in its radio frequency (RF) environment. Multi-path reflections can distort signals, limit data rates, and cause signal fades that prevent nodes from having clear access to channels, especially in a closed environment such as a spacecraft. Other RF signal sources, such as wireless internet, voice, and data systems, may contend with the sensor nodes for bandwidth. Finally, RF noise from electrical systems and periodic scattering from moving objects such as crew members will all combine to give an incredibly unpredictable, time-varying communication environment.

  13. Walking through doorways causes forgetting: Further explorations.

    PubMed

    Radvansky, Gabriel A; Krawietz, Sabine A; Tamplin, Andrea K

    2011-08-01

    Previous research using virtual environments has revealed a location-updating effect in which there is a decline in memory when people move from one location to another. Here we assess whether this effect reflects the influence of the experienced context, in terms of the degree of immersion of a person in an environment, as suggested by some work in spatial cognition, or by a shift in context. In Experiment 1, the degree of immersion was reduced by using smaller displays. In comparison, in Experiment 2 an actual, rather than a virtual, environment was used, to maximize immersion. Location-updating effects were observed under both of these conditions. In Experiment 3, the original encoding context was reinstated by having a person return to the original room in which objects were first encoded. However, inconsistent with an encoding specificity account, memory did not improve by reinstating this context. Finally, we did a further analysis of the results of this and previous experiments to assess the differential influence of foregrounding and retrieval interference. Overall, these data are interpreted in terms of the event horizon model of event cognition and memory.

  14. Detection and tracking of drones using advanced acoustic cameras

    NASA Astrophysics Data System (ADS)

    Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas

    2015-10-01

    Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). It is a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
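
    The acoustic imaging and beamforming steps described above rest on delay-and-sum processing: per-microphone propagation delays toward a candidate direction are compensated before summing. The sketch below uses assumed geometry and names (it is our own simplification, not the authors' real-time implementation):

```python
import numpy as np

# Minimal delay-and-sum beamformer: steer a microphone array toward a
# candidate direction by compensating per-microphone delays, then average.
def delay_and_sum(signals, mic_positions, direction, fs, c=343.0):
    """signals: (n_mics, n_samples); mic_positions: (n_mics, 3) in meters;
    direction: unit vector toward the source; fs: sample rate (Hz)."""
    mic_positions = np.asarray(mic_positions, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    delays = mic_positions @ direction / c              # seconds per mic
    shifts = np.round((delays - delays.min()) * fs).astype(int)
    n = signals.shape[1] - shifts.max()                 # common valid length
    aligned = np.stack([s[k:k + n] for s, k in zip(signals, shifts)])
    return aligned.mean(axis=0)                         # coherent average
```

    Evaluating the power of the aligned sum over a grid of candidate directions yields the kind of directional sound-power map that the tracking algorithm can follow.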

  15. A robust approach towards unknown transformation, regional adjacency graphs, multigraph matching, segmentation video frames from unmanned aerial vehicles (UAV)

    NASA Astrophysics Data System (ADS)

    Gohatre, Umakant Bhaskar; Patil, Venkat P.

    2018-04-01

    In computer vision, the real-time detection and tracking of multiple objects is an important research field that has gained much attention in recent years for finding non-stationary entities in image sequences. Object detection precedes tracking: moving objects are first located in the video, and a representation of each object is then tracked. Identifying multiple objects in a video sequence is a challenging task. Image registration has long been used as a basis for detecting moving objects: registration finds correspondences between consecutive frame pairs on the basis of image appearance under rigid and affine transformations. However, image registration is not well suited to handling events such as unknown transformations, which can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between graph sequences is then performed using multigraph matching, and matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations, with significant improvement over existing work on real-time detection of multiple moving objects.
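
    The two graph steps described above can be illustrated schematically. The sketch below (our own simplification, not the paper's algorithm) builds a region adjacency graph from a labeled segmentation and then assigns labels by greedy graph coloring so that adjacent regions never share a label:

```python
import numpy as np

def region_adjacency_graph(labels):
    """labels: 2-D array of per-pixel region ids -> {region: set(neighbors)}."""
    adj = {int(r): set() for r in np.unique(labels)}
    h, w = labels.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):        # right and down neighbors
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and labels[ny, nx] != labels[y, x]:
                    a, b = int(labels[y, x]), int(labels[ny, nx])
                    adj[a].add(b)
                    adj[b].add(a)
    return adj

def greedy_coloring(adj):
    """Assign the smallest label not used by any already-colored neighbor."""
    color = {}
    for node in sorted(adj):
        used = {color[n] for n in adj[node] if n in color}
        color[node] = next(c for c in range(len(adj) + 1) if c not in used)
    return color
```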

  16. Passive acquisition of CLIPS rules

    NASA Technical Reports Server (NTRS)

    Kovarik, Vincent J., Jr.

    1991-01-01

    The automated acquisition of knowledge by machine has not lived up to expectations, and knowledge engineering remains a human-intensive task. Part of the reason for the lack of success is the shift required in the cognitive focus of the expert: the expert must shift his or her focus from the subject domain to that of the representation environment. This cognitive shift introduces opportunities for errors and omissions. Presented here is work that observes the expert interacting with a simulation of the domain. The system logs changes in the simulation objects and the expert's actions in response to those changes. Inductive reasoning is then applied to generalize the observed domain-specific rules into general domain rules.

  17. Illusory object motion in the centre of a radial pattern: The Pursuit–Pursuing illusion

    PubMed Central

    Ito, Hiroyuki

    2012-01-01

    A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed. PMID:23145267

  18. Development and application of virtual reality for man/systems integration

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1991-01-01

    While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still presents an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the primary way a human physically interacts with their environment is with their hands, the system should also monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world.
The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.

  19. Meeting People's Needs in a Fully Interoperable Domotic Environment

    PubMed Central

    Miori, Vittorio; Russo, Dario; Concordia, Cesare

    2012-01-01

    The key idea underlying many Ambient Intelligence (AmI) projects and applications is context awareness, which is based mainly on their capacity to identify users and their locations. The actual computing capacity should remain in the background, in the periphery of our awareness, and should only move to the center if and when necessary. Computing thus becomes ‘invisible’, as it is embedded in the environment and everyday objects. The research project described herein aims to realize an Ambient Intelligence-based environment able to improve users' quality of life by learning their habits and anticipating their needs. This environment is part of an adaptive, context-aware framework designed to make today's incompatible heterogeneous domotic systems fully interoperable, not only for connecting sensors and actuators, but for providing comprehensive connections of devices to users. The solution is a middleware architecture based on open and widely recognized standards capable of abstracting the peculiarities of underlying heterogeneous technologies and enabling them to co-exist and interwork, without however eliminating their differences. At the highest level of this infrastructure, the Ambient Intelligence framework, integrated with the domotic sensors, can enable the system to recognize any unusual or dangerous situations and anticipate health problems or special user needs in a technological living environment, such as a house or a public space. PMID:22969322

  20. Autonomous Collision-Free Navigation of Microvehicles in Complex and Dynamically Changing Environments.

    PubMed

    Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph

    2017-09-26

    Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobots in complex and dynamically changing environments, which is a highly demanding task, remains an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for their autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
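
    The abstract does not detail the artificial intelligence planner; a minimal stand-in for collision-free route generation on an occupancy grid is breadth-first search, which returns a shortest obstacle-avoiding path (grid encoding and names below are our own):

```python
from collections import deque

def bfs_path(grid, start, goal):
    """grid: list of rows, 0 = free, 1 = obstacle; start/goal: (row, col).
    Returns a shortest collision-free path as a list of cells, or None."""
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                       # reconstruct route
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                # no collision-free route exists
```

    Re-running the search whenever an obstacle appears or moves gives the adaptive replanning behavior described in the abstract.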


  2. Speed skills: measuring the visual speed analyzing properties of primate MT neurons.

    PubMed

    Perrone, J A; Thiele, A

    2001-05-01

    Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity, as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to object speed. These findings indicate that MT is not only a key structure in the analysis of direction of motion and depth perception, but also in the analysis of object speed.
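
    The "oriented ridges" arise because a drifting grating's speed is fixed by the ratio of its temporal to spatial frequency, so a single preferred speed corresponds to a line in the (spatial frequency, temporal frequency) plane. A short worked illustration (frequency values are arbitrary examples):

```python
# Speed of a drifting grating: sf in cycles/deg, tf in Hz -> speed in deg/s.
def grating_speed(sf_cpd, tf_hz):
    return tf_hz / sf_cpd

# Different (sf, tf) pairs lying on the same ridge tf = v * sf
# all correspond to one object speed:
pairs = [(0.5, 4.0), (1.0, 8.0), (2.0, 16.0)]
print([grating_speed(sf, tf) for sf, tf in pairs])  # prints [8.0, 8.0, 8.0]
```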

  3. Motion streaks do not influence the perceived position of stationary flashed objects.

    PubMed

    Pavan, Andrea; Bellacosa Marotti, Rosilari

    2012-01-01

    In the present study, we investigated whether motion streaks, produced by fast moving dots (Geisler, 1999), distort the positional map of stationary flashed objects, producing the well-known motion-induced position shift illusion (MIPS). The illusion relies on motion-processing mechanisms that induce local distortions in the positional map of the stimulus, which is derived by shape-processing mechanisms. To measure the MIPS, two horizontally offset Gaussian blobs, placed above and below a central fixation point, were flashed over two fields of dots moving in opposite directions. Subjects judged the position of the top Gaussian blob relative to the bottom one. The results showed that neither fast (motion streaks) nor slow moving dots influenced the perceived spatial position of the stationary flashed objects, suggesting that background motion does not interact with the shape-processing mechanisms involved in MIPS.

  4. Two-dimensional (2D) displacement measurement of moving objects using a new MEMS binocular vision system

    NASA Astrophysics Data System (ADS)

    Di, Si; Lin, Hui; Du, Ruxu

    2011-05-01

    Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
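
    The abstract does not spell out the displacement algorithm used with the two microlens images. A standard stand-in for calibration-free 2-D displacement estimation is shown here: locate the peak of the circular cross-correlation between two frames, computed via the FFT (function and variable names are our own):

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) by which `moved` is shifted
    relative to `ref`, via the peak of their circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Wrap large positive indices around to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

    Converting the pixel shift to physical displacement then only requires the known magnification of the imaging optics, which is one way such a system can avoid full camera calibration.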

  5. Gravity Influences the Visual Representation of Object Tilt in Parietal Cortex

    PubMed Central

    Angelaki, Dora E.

    2014-01-01

    Sensory systems encode the environment in egocentric (e.g., eye, head, or body) reference frames, creating inherently unstable representations that shift and rotate as we move. However, it is widely speculated that the brain transforms these signals into an allocentric, gravity-centered representation of the world that is stable and independent of the observer's spatial pose. Where and how this representation may be achieved is currently unknown. Here we demonstrate that a subpopulation of neurons in the macaque caudal intraparietal area (CIP) visually encodes object tilt in nonegocentric coordinates defined relative to the gravitational vector. Neuronal responses to the tilt of a visually presented planar surface were measured with the monkey in different spatial orientations (upright and rolled left/right ear down) and then compared. This revealed a continuum of representations in which planar tilt was encoded in a gravity-centered reference frame in approximately one-tenth of the comparisons, intermediate reference frames ranging between gravity-centered and egocentric in approximately two-tenths of the comparisons, and in an egocentric reference frame in less than half of the comparisons. Altogether, almost half of the comparisons revealed a shift in the preferred tilt and/or a gain change consistent with encoding object orientation in nonegocentric coordinates. Through neural network modeling, we further show that a purely gravity-centered representation of object tilt can be achieved directly from the population activity of CIP-like units. These results suggest that area CIP may play a key role in creating a stable, allocentric representation of the environment defined relative to an “earth-vertical” direction. PMID:25339732

  6. Position and orientation determination system and method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harpring, Lawrence J.; Farfan, Eduardo B.; Gordon, John R.

    A position determination system and method is provided that may be used for obtaining position and orientation information of a detector in a contaminated room. The system includes a detector, a sensor operably coupled to the detector, and a motor coupled to the sensor to move the sensor around the detector. A CPU controls the operation of the motor to move the sensor around the detector and determines distance and angle data from the sensor to an object. The method includes moving a sensor around the detector and measuring distance and angle data from the sensor to an object at incremental positions around the detector.
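
    Turning each (distance, angle) sample from the sweeping sensor into room coordinates is a polar-to-Cartesian mapping. A minimal sketch under an assumed geometry (detector at the origin, angles in radians; the patent's actual computation is not given):

```python
import math

def scan_to_points(samples):
    """samples: iterable of (distance, angle_radians) pairs measured as the
    motor sweeps the sensor; returns (x, y) points relative to the detector."""
    return [(d * math.cos(a), d * math.sin(a)) for d, a in samples]
```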

  7. Coordination of multiple robot arms

    NASA Technical Reports Server (NTRS)

    Barker, L. K.; Soloway, D.

    1987-01-01

    Kinematic resolved-rate control for one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any predisposed positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.

  8. Early Program Development

    NASA Image and Video Library

    1996-06-20

    Engineers at one of MSFC's vacuum chambers begin testing a microthruster model. The purpose of these tests is to collect sufficient data to enable NASA to develop microthrusters that will move the Space Shuttle, a future space station, or any other space-related vehicle with the least amount of expended energy. When something is sent into outer space, the forces that try to pull it back to Earth (gravity) are very small, so that it only requires a very small force to move very large objects. In space, a force equal to the weight of a paperclip can move an object as large as a car. Microthrusters are used to produce these small forces.
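
    The rough arithmetic behind the paperclip claim can be sketched with Newton's second law (the masses below are our own illustrative figures: a ~1 g paperclip's weight acting on a 1000 kg, car-sized object):

```python
# a = F / m: even a tiny force produces a nonzero acceleration
# on a large mass when gravity and friction are absent.
force_n = 0.001 * 9.8      # weight of a ~1 g paperclip on Earth, in newtons
mass_kg = 1000.0           # car-sized object
accel = force_n / mass_kg  # ~1e-5 m/s^2: small, but it accumulates over time
```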

  9. The influence of visual motion on interceptive actions and perception.

    PubMed

    Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H

    2012-05-01

    Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Visual EKF-SLAM from Heterogeneous Landmarks †

    PubMed Central

    Esparza-Jiménez, Jorge Othón; Devy, Michel; Gordillo, José L.

    2016-01-01

    Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise the use of heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks); from this perspective, a comparison between landmark parametrizations and an evaluation of how the heterogeneity improves the accuracy of the camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology. PMID:27070602
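
    The back-end's estimate-and-update cycle can be illustrated with a toy far simpler than the paper's EKF-SLAM: a linear Kalman filter over a 1-D robot position and a single landmark, observing their relative offset. State layout, noise values, and the observation model below are our own assumptions:

```python
import numpy as np

# State x = [robot_position, landmark_position]; P is its 2x2 covariance.

def predict(x, P, u, q):
    """Robot moves by u with process-noise variance q; the landmark stays put."""
    x = x.copy()
    x[0] += u
    P = P.copy()
    P[0, 0] += q
    return x, P

def update(x, P, z, r):
    """Fuse an observation z of the relative offset (landmark - robot),
    with measurement-noise variance r."""
    H = np.array([[-1.0, 1.0]])          # measurement model: z = m - x_robot
    S = H @ P @ H.T + r                  # innovation covariance
    K = P @ H.T / S                      # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

    A full EKF-SLAM back-end replaces this scalar state with poses and point/line landmark parametrizations, and linearizes the measurement model at each step, but the predict/update structure is the same.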

  11. Optical flow versus retinal flow as sources of information for flight guidance

    NASA Technical Reports Server (NTRS)

    Cutting, James E.

    1991-01-01

    The appropriate description of visual information for flight guidance is considered: optical flow versus retinal flow. Most descriptions in the psychological literature are based on optical flow. However, human eyes move, and this movement complicates the issues at stake, particularly when movement of the observer is involved. The question addressed is whether an observer, whose eyes register only retinal flow, can use information in optical flow. It is suggested that the observer cannot and does not reconstruct the image in optical flow; instead, they use retinal flow. The retinal array is defined as the projection of a three-space onto a point and beyond to a movable, nearly hemispheric sensing device, like the retina. The optical array is defined as the projection of a three-space environment to a point within that space. Flow is defined as global motion as a field of vectors, best placed on a spherical projection surface. Specifically, flow is the mapping of the field of changes in position of corresponding points on objects in three-space onto a point, where that point has moved in position.

  12. Space-based visual attention: a marker of immature selective attention in toddlers?

    PubMed

    Rivière, James; Brisson, Julie

    2014-11-01

    Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.

  13. Fast generation of video holograms of three-dimensional moving objects using a motion compensation-based novel look-up table.

    PubMed

    Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo

    2013-05-06

    A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in the compression of conventional 2-D video data because of its ability to exploit the high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase in the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced down to 86.95% and 86.53%, and 34.99% and 32.30%, respectively, compared to those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.

  14. A landmark effect in the perceived displacement of objects.

    PubMed

    Higgins, J Stephen; Wang, Ranxiao Frances

    2010-01-01

    Perceiving the displacement of an object after a visual distraction is an essential ability for interacting with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable and the second one as moving (the landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen, a moving-pattern mask, or simply disappeared briefly before reappearing one after the other. The first reappearing object was not required to remain visible while the second object reappeared to induce the bias. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism.

  15. Moving in a moving medium: new perspectives on flight

    PubMed Central

    Shepard, Emily L. C.; Portugal, Steven J.

    2016-01-01

    One of the defining features of the aerial environment is its variability; air is almost never still. This has profound consequences for flying animals, affecting their flight stability, speed selection, energy expenditure and choice of flight path. All these factors have important implications for the ecology of flying animals, and the ecosystems they interact with, as well as providing bio-inspiration for the development of unmanned aerial vehicles. In this introduction, we touch on the factors that drive the variability in airflows, the scales of variability and the degree to which given airflows may be predictable. We then summarize how papers in this volume advance our understanding of the sensory, biomechanical, physiological and behavioural responses of animals to air flows. Overall, this provides insight into how flying animals can be so successful in this most fickle of environments. This article is part of the themed issue ‘Moving in a moving medium: new perspectives on flight’. PMID:27528772

  16. Depth-Based Detection of Standing-Pigs in Moving Noise Environments.

    PubMed

    Kim, Jinseong; Chung, Yeonwoo; Choi, Younchang; Sa, Jaewon; Kim, Heegon; Chung, Yongwha; Park, Daihee; Kim, Hakjae

    2017-11-29

    In a surveillance camera environment, the detection of standing-pigs in real-time is an important issue towards the final goal of 24-h tracking of individual pigs. In this study, we focus on depth-based detection of standing-pigs with "moving noises", which appear every night in a commercial pig farm but have not previously been reported. We first apply a spatiotemporal interpolation technique to remove the moving noises occurring in the depth images. Then, we detect the standing-pigs by utilizing the undefined depth values around them. Our experimental results show that this method is effective for detecting standing-pigs at night, in terms of both cost-effectiveness (using a low-cost Kinect depth sensor) and accuracy (i.e., 94.47%), even with severe moving noises occluding up to half of an input depth image. Furthermore, without any time-consuming technique, the proposed method can be executed in real-time.
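    The record above describes a two-stage repair of noisy depth frames: fill an invalid pixel from the same location in adjacent frames first (temporal), then fall back to valid neighbours in the same frame (spatial). The sketch below is a generic illustration of that kind of spatiotemporal hole-filling under assumed conventions (invalid pixels encoded as 0, frames as nested lists); it is not the authors' implementation.

```python
def interpolate_depth(frames, invalid=0):
    """Fill invalid depth pixels using valid values at the same pixel
    in the previous/next frame (temporal), falling back to the mean of
    valid 4-neighbours in the same frame (spatial)."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[row[:] for row in f] for f in frames]
    for t, frame in enumerate(frames):
        for y in range(h):
            for x in range(w):
                if frame[y][x] != invalid:
                    continue
                # temporal candidates: same pixel in adjacent frames
                temporal = [frames[u][y][x] for u in (t - 1, t + 1)
                            if 0 <= u < len(frames) and frames[u][y][x] != invalid]
                if temporal:
                    out[t][y][x] = sum(temporal) / len(temporal)
                    continue
                # spatial fallback: valid 4-neighbours in this frame
                spatial = [frame[ny][nx] for ny, nx in
                           ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                           if 0 <= ny < h and 0 <= nx < w and frame[ny][nx] != invalid]
                if spatial:
                    out[t][y][x] = sum(spatial) / len(spatial)
    return out
```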

  17. Geo-Referenced Dynamic Pushbroom Stereo Mosaics for 3D and Moving Target Extraction - A New Geometric Approach

    DTIC Science & Technology

    2009-12-01

    facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect... Based on the above two

  18. Vibrissal touch sensing in the harbor seal (Phoca vitulina): how do seals judge size?

    PubMed

    Grant, Robyn; Wieskotten, Sven; Wengst, Nina; Prescott, Tony; Dehnhardt, Guido

    2013-06-01

    "Whisker specialists" such as rats, shrews, and seals actively employ their whiskers to explore their environments and extract object properties such as size, shape, and texture. It has been suggested that whiskers could be used to discriminate between different-sized objects in one of two ways: (i) using whisker positions, such as angular position, spread or amplitude, to approximate size; or (ii) counting the number of whiskers that contact an object. This study describes in detail how two adult harbor seals use their whiskers to differentiate between three sizes of disk. The seals judged size very quickly, taking <400 ms. In addition, they oriented their smaller, most rostral, ventral whiskers to the disks so that more whiskers contacted the surface, complying with a maximal-contact sensing strategy. Data from this study support the suggestion that it is the number of whisker contacts that predicts disk size, rather than how the whiskers are positioned (angular position), the degree to which they are moved (amplitude) or how spread out they are (angular spread).

  19. Target-locking acquisition with real-time confocal (TARC) microscopy.

    PubMed

    Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A

    2007-07-09

    We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.
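    The acquisition loop described above (image, locate, re-centre, image again) can be sketched in a few lines. This is only a schematic of the target-locking idea; the function name and coordinate conventions are assumptions, and the real system performs a full 3D structural analysis on each confocal stack before moving the sample.

```python
def target_lock(centroids, field_center):
    """Simulate the target-locking loop: after each 3D stack, move the
    stage by the detected centroid's offset from the centre of the
    field of view, so the object stays centred in the next stack.
    Returns the cumulative stage position after each stack."""
    stage = [0.0, 0.0, 0.0]
    history = []
    for c in centroids:
        # offset of the tracked feature from the centre of the imaging volume
        dx = [ci - fi for ci, fi in zip(c, field_center)]
        stage = [s + d for s, d in zip(stage, dx)]
        history.append(tuple(stage))
    return history
```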

  20. Predictive coding of visual object position ahead of moving objects revealed by time-resolved EEG decoding.

    PubMed

    Hogendoorn, Hinze; Burkitt, Anthony N

    2018-05-01

    Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
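    Time-resolved decoding of this kind fits a separate multivariate classifier at every time sample and tracks how well the stimulus position can be read out over the epoch. The following is a minimal sketch, assuming trials arranged as a (trials × channels × times) array and using a nearest-class-mean classifier as a stand-in for the study's pattern classifier (the paper's exact classifier and cross-validation scheme are not reproduced here).

```python
import numpy as np

def timeresolved_decode(X_train, y_train, X_test, y_test):
    """Decode a label (e.g. stimulus position) independently at each
    time sample. X_* has shape (trials, channels, times); returns
    decoding accuracy as a function of time."""
    n_times = X_train.shape[2]
    classes = np.unique(y_train)
    acc = np.zeros(n_times)
    for t in range(n_times):
        # class-mean "templates" over channels at this time point
        means = np.stack([X_train[y_train == c, :, t].mean(axis=0) for c in classes])
        # assign each test trial to the nearest template
        d = ((X_test[:, :, t][:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        pred = classes[d.argmin(axis=1)]
        acc[t] = (pred == y_test).mean()
    return acc
```

    Plotting the resulting accuracy curve against time, separately for predictable and random trajectories, is what reveals a latency advantage of the kind reported.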

  1. Around Marshall

    NASA Image and Video Library

    1977-04-12

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is an experiment in which the astronaut was required to move a large object weighing 19,000 pounds. It was moved with relative ease once the astronaut became familiar with his environment and his near-weightless condition. Experiments of this nature provided scientists with the information needed regarding the weight and mass allowances astronauts could manage, in preparation for building a permanent space station in the future.

  2. Neutral Buoyancy Test NB-14 Large Space Structure Assembly

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Once the United States' space program had progressed from Earth's orbit into outer space, the prospect of building and maintaining a permanent presence in space was realized. To accomplish this feat, NASA launched a temporary workstation, Skylab, to discover the effects of low gravity and weightlessness on the human body, and also to develop tools and equipment that would be needed in the future to build and maintain a more permanent space station. The structures, techniques, and work schedules had to be carefully designed to fit this unique construction site. The components had to be lightweight for transport into orbit, yet durable. The station also had to be made with removable parts for easy servicing and repairs by astronauts. All of the tools necessary for service and repairs had to be designed for easy manipulation by a suited astronaut. And construction methods had to be efficient due to the limited time the astronauts could remain outside their controlled environment. In light of all the specific needs for this project, an environment on Earth had to be developed that could simulate a low gravity atmosphere. A Neutral Buoyancy Simulator (NBS) was constructed by NASA Marshall Space Flight Center (MSFC) in 1968. Since then, NASA scientists have used this facility to understand how humans work best in low gravity and also to provide information about the different kinds of structures that can be built. Pictured is an experiment in which the astronaut was required to move a large object weighing 19,000 pounds. It was moved with relative ease once the astronaut became familiar with his environment and his near-weightless condition. Experiments of this nature provided scientists with the information needed regarding the weight and mass allowances astronauts could manage, in preparation for building a permanent space station in the future.

  3. Comparison of Oculus Rift and HTC Vive: Feasibility for Virtual Reality-Based Exploration, Navigation, Exergaming, and Rehabilitation.

    PubMed

    Borrego, Adrián; Latorre, Jorge; Alcañiz, Mariano; Llorens, Roberto

    2018-06-01

    The latest generation of head-mounted displays (HMDs) provides built-in head tracking, which enables estimating position in a room-size setting. This feature allows users to explore, navigate, and move within real-size virtual environments, such as kitchens, supermarket aisles, or streets. Previously, these actions were commonly facilitated by external peripherals and interaction metaphors. The objective of this study was to compare the Oculus Rift and the HTC Vive in terms of the working range of the head tracking and the working area, accuracy, and jitter in a room-size environment, and to determine their feasibility for serious games, rehabilitation, and health-related applications. The position of the HMDs was registered in a 10 × 10 grid covering an area of 25 m² at sitting (1.3 m) and standing (1.7 m) heights. Accuracy and jitter were estimated from positional data. The working range was estimated by moving the HMDs away from the cameras until no data were obtained. The HTC Vive provided a working area (24.87 m²) twice as large as that of the Oculus Rift. Both devices showed excellent and comparable performance at sitting height (accuracy up to 1 cm and jitter <0.35 mm), and the HTC Vive presented worse but still excellent accuracy and jitter at standing height (accuracy up to 1.5 cm and jitter <0.5 mm). The HTC Vive presented a larger working range (7 m) than did the Oculus Rift (4.25 m). Our results support the use of these devices for real navigation, exploration, exergaming, and motor rehabilitation in virtual reality environments.
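    Accuracy and jitter of the kind reported above can be estimated from repeated position samples taken while the headset sits at a known grid point: accuracy as the offset of the mean reported position from the true position, and jitter as the dispersion of samples about their mean. The sketch below uses those assumed definitions; the paper does not give its exact estimators.

```python
from statistics import mean, stdev

def accuracy_and_jitter(samples, true_pos):
    """Estimate tracking accuracy and jitter at one grid point.
    samples: repeated (x, y, z) positions reported by the HMD while it
    is held stationary at the known location true_pos.
    Accuracy = distance from the mean reported position to the true
    position; jitter = standard deviation of the samples' distances
    from their own centroid."""
    centroid = tuple(mean(c) for c in zip(*samples))
    accuracy = sum((c - t) ** 2 for c, t in zip(centroid, true_pos)) ** 0.5
    dists = [sum((s - c) ** 2 for s, c in zip(p, centroid)) ** 0.5 for p in samples]
    jitter = stdev(dists) if len(dists) > 1 else 0.0
    return accuracy, jitter
```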

  4. Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control

    PubMed Central

    Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda

    2017-01-01

    Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch, in environments in which other senses such as vision cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. Both are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations. PMID:28406449

  5. Improved Object Detection Using a Robotic Sensing Antenna with Vibration Damping Control.

    PubMed

    Feliu-Batlle, Vicente; Feliu-Talegon, Daniel; Castillo-Berrio, Claudia Fernanda

    2017-04-13

    Some insects or mammals use antennae or whiskers to detect obstacles or recognize objects by the sense of touch, in environments in which other senses such as vision cannot work. Artificial flexible antennae can be used in robotics to mimic this sense of touch in these recognition tasks. We have designed and built a two-degree-of-freedom (2DOF) flexible antenna sensor device to perform robot navigation tasks. This device is composed of a flexible beam, two servomotors that drive the beam and a load cell sensor that detects the contact of the beam with an object. It is found that the efficiency of such a device strongly depends on the speed and accuracy achieved by the antenna positioning system. Both are severely impaired by the vibrations that appear in the antenna during its movement. However, these antennae are usually moved without taking care of these undesired vibrations. This article proposes a new closed-loop control scheme that cancels vibrations and improves the free movements of the antenna. Moreover, algorithms to estimate the 3D beam position and the instant and point of contact with an object are proposed. Experiments are reported that illustrate the efficiency of these proposed algorithms and the improvements achieved in object detection tasks using a control system that cancels beam vibrations.

  6. Vision-based control for flight relative to dynamic environments

    NASA Astrophysics Data System (ADS)

    Causey, Ryan Scott

    The concept of autonomous systems has been considered an enabling technology for a diverse group of military and civilian applications. The current direction for autonomous systems is increased capability through more advanced systems that are useful for missions requiring autonomous avoidance, navigation, tracking, and docking. To facilitate this level of mission capability, passive sensors, such as cameras, and complex software are added to the vehicle. By incorporating an on-board camera, visual information can be processed to interpret the surroundings. This information allows decision making with increased situational awareness, without the cost of a sensor signature, which is critical in military applications. The concepts presented in this dissertation address the issues inherent in vision-based state estimation of moving objects for a monocular camera configuration. The process consists of several stages involving image processing, such as detection, estimation, and modeling. The detection algorithm segments the motion field through a least-squares approach and classifies motions not obeying the dominant trend as independently moving objects. An approach to state estimation of moving targets is derived using a homography approach. The algorithm requires knowledge of the camera motion, a reference motion, and additional feature-point geometry for both the target and reference objects. The target state estimates are then observed over time to model the dynamics using a probabilistic technique. The effects of uncertainty on state estimation due to camera calibration are considered through a bounded deterministic approach. The system framework focuses on an aircraft platform, for which the system dynamics are derived to relate vehicle states to image-plane quantities. Control designs using standard guidance and navigation schemes are then applied to the tracking and homing problems using the derived state estimation.
Four simulations are implemented in MATLAB that build on the image concepts presented in this dissertation. The first two simulations deal with feature-point computations and the effects of uncertainty. The third simulation demonstrates open-loop estimation of a target ground vehicle in pursuit, whereas the fourth implements a homing control design for Autonomous Aerial Refueling (AAR) using target estimates as feedback.
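    Central to the homography-based estimation described in this record is the planar projective map relating image points across views. The helper below only illustrates that homography relation (applying a 3×3 matrix H to a point in homogeneous coordinates); it is not the dissertation's estimation algorithm, and the function name is an assumption.

```python
def apply_homography(H, pt):
    """Map an image point through a 3x3 planar homography H (row-major
    nested lists), as used when relating two views of a planar
    reference object. The homogeneous result is divided through by its
    third coordinate to return to pixel coordinates."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```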

  7. Familiar trajectories facilitate the interpretation of physical forces when intercepting a moving target.

    PubMed

    Mijatović, Antonija; La Scaleia, Barbara; Mercuri, Nicola; Lacquaniti, Francesco; Zago, Myrka

    2014-12-01

    Familiarity with the visual environment affects our expectations about the objects in a scene, aiding in recognition and interaction. Here we tested whether the familiarity with the specific trajectory followed by a moving target facilitates the interpretation of the effects of underlying physical forces. Participants intercepted a target sliding down either an inclined plane or a tautochrone. Gravity accelerated the target by the same amount in both cases, but the inclined plane represented a familiar trajectory whereas the tautochrone was unfamiliar to the participants. In separate sessions, the gravity field was consistent with either natural gravity or artificial reversed gravity. Target motion was occluded from view over the last segment. We found that the responses in the session with unnatural forces were systematically delayed relative to those with natural forces, but only for the inclined plane. The time shift is consistent with a bias for natural gravity, in so far as it reflects an a priori expectation that a target not affected by natural forces will arrive later than one accelerated downwards by gravity. Instead, we did not find any significant time shift with unnatural forces in the case of the tautochrone. We argue that interception of a moving target relies on the integration of the high-level cue of trajectory familiarity with low-level cues related to target kinematics.

  8. Moving object detection via low-rank total variation regularization

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Chen, Qian; Shao, Na

    2016-09-01

    Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the l1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described by a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the l1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the l1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that the proposed method works effectively on a large range of complex scenarios.
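    For context, the baseline this record argues against is the classic RPCA decomposition, which the same inexact ALM machinery solves by alternating singular-value thresholding for the low-rank background with entrywise soft thresholding for the sparse foreground. The sketch below implements that standard l1-penalized baseline (not the paper's TV-regularized model); parameter choices follow common defaults rather than anything stated in the record.

```python
import numpy as np

def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
    """Split D into a low-rank part L (background) and a sparse part S
    (moving foreground) by approximately minimizing
        ||L||_* + lam * ||S||_1   subject to   L + S = D
    via inexact ALM: singular-value thresholding for L, entrywise soft
    thresholding for S, then a dual update with growing penalty mu."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    norm_D = np.linalg.norm(D)
    mu = 1.25 / (np.linalg.norm(D, 2) + 1e-12)
    mu_bar, rho = mu * 1e7, 1.5
    Y = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(max_iter):
        # L-step: singular-value thresholding of (D - S + Y/mu)
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entrywise soft thresholding of the residual
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # dual update and penalty growth
        Z = D - L - S
        Y = Y + mu * Z
        mu = min(mu * rho, mu_bar)
        if np.linalg.norm(Z) <= tol * norm_D:
            break
    return L, S
```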

  9. Do active design buildings change health behaviour and workplace perceptions?

    PubMed

    Engelen, L; Dhillon, H M; Chau, J Y; Hespe, D; Bauman, A E

    2016-07-01

    Occupying new, active design office buildings designed for health promotion and connectivity provides an opportunity to evaluate indoor environment effects on healthy behaviour, sedentariness and workplace perceptions. To determine if moving to a health-promoting building changed workplace physical activity, sedentary behaviour, workplace perceptions and productivity. Participants from four locations at the University of Sydney, Australia, relocated into a new active design building. After consent, participants completed an online questionnaire 2 months before moving and 2 months after. Questions related to health behaviours (physical activity and sitting time), musculoskeletal issues, perceptions of the office environment, productivity and engagement. There were 34 participants (60% aged 25-45, 78% female, 84% employed full-time); 21 participants provided complete data. Results showed that after the move participants spent less work time sitting (from 83% to 70%; P < 0.01) and more time standing (from 9% to 21%; P < 0.01), while walking time remained unchanged. Participants reported less low back pain (P < 0.01). Sixty per cent of participants in the new workplace were in an open-plan office, compared to 16% before moving. Participants perceived the new work environment as more stimulating, better lit and ventilated, but noisier and providing less storage. No difference was reported in daily physical activity, number of stairs climbed or productivity. Moving to an active design building appeared to have physical health-promoting effects on workers, but workers' perceptions about the new work environment varied. These results will inform future studies in other new buildings. © The Author 2016. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. Gene-Environment Interplay, Family Relationships, and Child Adjustment

    ERIC Educational Resources Information Center

    Horwitz, Briana N.; Neiderhiser, Jenae M.

    2011-01-01

    This paper reviews behavioral genetic research from the past decade that has moved beyond simply studying the independent influences of genes and environments. The studies considered in this review have instead focused on understanding gene-environment interplay, including genotype-environment correlation (rGE) and genotype × environment…

  11. General principles in motion vision: color blindness of object motion depends on pattern velocity in honeybee and goldfish.

    PubMed

    Stojcev, Maja; Radtke, Nils; D'Amaro, Daniele; Dyer, Adrian G; Neumeyer, Christa

    2011-07-01

    Visual systems can undergo striking adaptations to specific visual environments during evolution, but they can also be very "conservative." This seems to be the case in motion vision, which is surprisingly similar in species as distant as honeybee and goldfish. In both visual systems, motion vision measured with the optomotor response is color blind and mediated by one photoreceptor type only. Here, we ask whether this is also the case if the moving stimulus is restricted to a small part of the visual field, and test what influence velocity may have on chromatic motion perception. Honeybees were trained to discriminate between clockwise- and counterclockwise-rotating sector disks. Six types of disk stimuli differing in green receptor contrast were tested using three different rotational velocities. When green receptor contrast was at a minimum, bees were able to discriminate rotation directions with all colored disks at slow velocities of 6 and 12 Hz contrast frequency, but not at a relatively high velocity of 24 Hz. In the goldfish experiment, the animals were trained to detect a moving red or blue disk presented in a green surround. Discrimination between this stimulus and a homogeneous green background was poor at a high stimulus velocity (7 cm/s) when the M-cone type was not, or only slightly, modulated. However, discrimination improved at slower stimulus velocities (4 and 2 cm/s). These behavioral results indicate that there is potentially an object motion system in both honeybee and goldfish, which is able to incorporate color information at relatively low velocities but is color blind at higher speeds. We thus propose that both honeybees and goldfish have multiple subsystems of object motion, which include achromatic as well as chromatic processing.

  12. Objective Assessment of Activity Limitation in Glaucoma with Smartphone Virtual Reality Goggles: A Pilot Study.

    PubMed

    Goh, Rachel L Z; Kong, Yu Xiang George; McAlinden, Colm; Liu, John; Crowston, Jonathan G; Skalicky, Simon E

    2018-01-01

    To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire - Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243-0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma.

  13. Objective Assessment of Activity Limitation in Glaucoma with Smartphone Virtual Reality Goggles: A Pilot Study

    PubMed Central

    Goh, Rachel L. Z.; McAlinden, Colm; Liu, John; Crowston, Jonathan G.; Skalicky, Simon E.

    2018-01-01

    Purpose To evaluate the use of smartphone-based virtual reality to objectively assess activity limitation in glaucoma. Methods Cross-sectional study of 93 patients (54 mild, 22 moderate, 17 severe glaucoma). Sociodemographics, visual parameters, Glaucoma Activity Limitation-9 and Visual Function Questionnaire – Utility Index (VFQ-UI) were collected. Mean age was 67.4 ± 13.2 years; 52.7% were male; 65.6% were driving. A smartphone placed inside virtual reality goggles was used to administer the Virtual Reality Glaucoma Visual Function Test (VR-GVFT) to participants, consisting of three parts: stationary, moving ball, driving. Rasch analysis and classical validity tests were conducted to assess performance of VR-GVFT. Results Twenty-four of 28 stationary test items showed acceptable fit to the Rasch model (person separation 3.02, targeting 0). Eleven of 12 moving ball test items showed acceptable fit (person separation 3.05, targeting 0). No driving test items showed acceptable fit. Stationary test person scores showed good criterion validity, differentiating between glaucoma severity groups (P = 0.014); modest convergence validity, with mild to moderate correlation with VFQ-UI, better eye (BE) mean deviation, BE pattern deviation, BE central scotoma, worse eye (WE) visual acuity, and contrast sensitivity (CS) in both eyes (R = 0.243–0.381); and suboptimal divergent validity. Multivariate analysis showed that lower WE CS (P = 0.044) and greater age (P = 0.009) were associated with worse stationary test person scores. Conclusions Smartphone-based virtual reality may be a portable objective simulation test of activity limitation related to glaucomatous visual loss. Translational Relevance The use of simulated virtual environments could help better understand the activity limitations that affect patients with glaucoma. PMID:29372112

  14. Active tactile sampling by an insect in a step-climbing paradigm

    PubMed Central

    Krause, André F.; Dürr, Volker

    2012-01-01

    Many insects actively explore their near-range environment with their antennae. Stick insects (Carausius morosus) rhythmically move their antennae during walking and respond to antennal touch by repetitive tactile sampling of the object. Despite its relevance for spatial orientation, neither the spatial sampling patterns nor the kinematics of antennation behavior in insects are understood. Here we investigate unrestrained bilateral sampling movements during climbing of steps. The main objectives are: (1) How does the antennal contact pattern relate to particular object features? (2) How are the antennal joints coordinated during bilateral tactile sampling? We conducted motion capture experiments on freely climbing insects, using steps of different height. Tactile sampling was analyzed at the level of antennal joint angles. Moreover, we analyzed contact patterns on the surfaces of both the obstacle and the antenna itself. Before the first contact, both antennae move in a broad, mostly elliptical exploratory pattern. After touching the obstacle, the pattern switches to a narrower and faster movement, caused by higher cycle frequencies and lower cycle amplitudes in all joints. Contact events were divided into wall- and edge-contacts. Wall contacts occurred mostly with the distal third of the flagellum, which is flexible, whereas edge contacts often occurred proximally, where the flagellum is stiff. The movement of both antennae was found to be coordinated, exhibiting bilateral coupling of functionally analogous joints [e.g., left head-scape (HS) joint with right scape-pedicel (SP) joint] throughout tactile sampling. In comparison, bilateral coupling between homologous joints (e.g., both HS joints) was significantly weaker. Moreover, inter-joint coupling was significantly weaker during the contact episode than before. 
In summary, stick insects show contact-induced changes in frequency, amplitude and inter-joint coordination during tactile sampling of climbed obstacles. PMID:22754513

  15. Three-dimensional local ALE-FEM method for fluid flow in domains containing moving boundaries/objects interfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrington, David Bradley; Monayem, A. K. M.; Mazumder, H.

    2015-03-05

    A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of calculating on domains of complex geometry with moving boundaries or devices that can also have complex geometry, without danger of the mesh becoming unsuitable due to its continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique for performing simulations involving moving boundaries in a three-dimensional domain.

  16. A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between localizing moving objects with alternating magnetic fields and localization with a static magnetic field has rarely been studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing a moving object with an alternating magnetic field is transformed into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
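    The localization step pairs a forward magnetic model with nonlinear least squares. As a rough illustration (not the paper's method: this sketch assumes a simplified scalar point-source model m/r³ with a known moment sampled at several known sensor sites, instead of a single three-component sensor and an alternating field), a hand-rolled Levenberg-Marquardt loop looks like:

```python
import numpy as np

def levenberg_marquardt(residual_fn, p0, n_iter=100, lam=1e-3):
    """Minimal Levenberg-Marquardt loop with a forward-difference Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual_fn(p)
        J = np.empty((r.size, p.size))
        eps = 1e-6
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = eps
            J[:, j] = (residual_fn(p + dp) - r) / eps
        # Damped normal equations: (J^T J + lam*I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        r_new = residual_fn(p + step)
        if r_new @ r_new < r @ r:
            p, lam = p + step, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                    # reject step, increase damping
    return p

# Toy forward model: scalar field strength m / |s - x|^3 at known sensor
# sites (a hypothetical multi-sensor setup chosen for simplicity).
sensors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                    [0, 0, 1], [1, 1, 0], [1, 0, 1]], dtype=float)
true_pos, moment = np.array([0.3, 0.6, 0.4]), 2.0

def field(pos):
    return moment / np.linalg.norm(sensors - pos, axis=1) ** 3

measured = field(true_pos)
estimate = levenberg_marquardt(lambda p: field(p) - measured,
                               p0=[0.5, 0.5, 0.5])
```

    Each iteration solves the damped normal equations, relaxing the damping factor when a step reduces the residual and increasing it otherwise, which interpolates between Gauss-Newton and gradient descent.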

  17. Moving shadows contribute to the corridor illusion in a chimpanzee (Pan troglodytes).

    PubMed

    Imura, Tomoko; Tomonaga, Masaki

    2009-08-01

    Previous studies have reported that backgrounds depicting linear perspective and texture gradients influence relative size discrimination in nonhuman animals (known as the "corridor illusion"), but research has not yet identified the other kinds of depth cues contributing to the corridor illusion. This study examined the effects of linear perspective and shadows on the responses of a chimpanzee (Pan troglodytes) to the corridor illusion. The performance of the chimpanzee was worse when a smaller object was presented at the farther position on a background reflecting a linear perspective, implying that the corridor illusion was replicated in the chimpanzee (Imura, Tomonaga, & Yagi, 2008). The extent of the illusion changed as a function of the position of the shadows cast by the objects only when the shadows were moving in synchrony with the objects. These findings suggest that moving shadows and linear perspective contributed to the corridor illusion in a chimpanzee. Copyright 2009 APA, all rights reserved.

  18. Linkage of additional contents to moving objects and video shots in a generic media framework for interactive television

    NASA Astrophysics Data System (ADS)

    Lopez, Alejandro; Noe, Miquel; Fernandez, Gabriel

    2004-10-01

    The GMF4iTV project (Generic Media Framework for Interactive Television) is an IST European project consisting of an end-to-end broadcasting platform that provides interactivity on heterogeneous multimedia devices such as set-top boxes and PCs according to the Multimedia Home Platform (MHP) standard from DVB. This platform allows content providers to create enhanced audiovisual content that is interactive at the level of moving objects or video shot changes. The end user can then interact with moving objects or individual shots in the video to access additional content associated with them (MHP applications, HTML pages, JPEG, MPEG4 files...). This paper focuses on the issues related to metadata and content transmission, synchronization, signaling and bitrate allocation in the GMF4iTV project.

  19. Eye tracking a self-moved target with complex hand-target dynamics

    PubMed Central

    Landelle, Caroline; Montagnini, Anna; Madelain, Laurent

    2016-01-01

    Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate these complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid (simple) mapping and a spring mapping, as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking initially had similarly low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129

  20. 77 FR 58172 - Proposed Renewal of Existing Information Collection; Records of Preshift and Onshift Inspections...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-19

    ... conditions are identified, thereby ensuring a safe working environment for the slope and shaft sinking... environment at any time. The working environment is typically a confined area in close proximity to moving...

  1. Distributed proximity sensor system having embedded light emitters and detectors

    NASA Technical Reports Server (NTRS)

    Lee, Sukhan (Inventor)

    1990-01-01

    A distributed proximity sensor system is provided with multiple photosensitive devices and light emitters embedded on the surface of a robot hand or other moving member in a geometric pattern. By distributing sensors and emitters capable of detecting distances and angles to points on the surface of an object from known points in the geometric pattern, information is obtained for achieving noncontacting shape and distance perception, i.e., for automatic determination of the object's shape, direction and distance, as well as the orientation of the object relative to the robot hand or other moving member.

  2. The Palomar Transient Factory: High Quality Realtime Data Processing in a Cost-Constrained Environment

    NASA Astrophysics Data System (ADS)

    Surace, J.; Laher, R.; Masci, F.; Grillmair, C.; Helou, G.

    2015-09-01

    The Palomar Transient Factory (PTF) is a synoptic sky survey in operation since 2009. PTF utilizes a 7.1 square degree camera on the Palomar 48-inch Schmidt telescope to survey the sky primarily at a single wavelength (R-band) at a rate of 1000-3000 square degrees a night. The data are used to detect and study transient and moving objects such as gamma ray bursts, supernovae and asteroids, as well as variable phenomena such as quasars and Galactic stars. The data processing system at IPAC handles realtime processing and detection of transients, solar system object processing, high photometric precision processing and light curve generation, and long-term archiving and curation. This was developed under an extremely limited budget profile in an unusually agile development environment. Here we discuss the mechanics of this system and our overall development approach. Although a significant scientific installation in and of itself, PTF also serves as the prototype for our next generation project, the Zwicky Transient Facility (ZTF). Beginning operations in 2017, ZTF will feature a 50 square degree camera which will enable scanning of the entire northern visible sky every night. ZTF in turn will serve as a stepping stone to the Large Synoptic Survey Telescope (LSST), a major NSF facility scheduled to begin operations in the early 2020s.

  3. Direct Manipulation in Virtual Reality

    NASA Technical Reports Server (NTRS)

    Bryson, Steve

    2003-01-01

    Virtual Reality interfaces offer several advantages for scientific visualization such as the ability to perceive three-dimensional data structures in a natural way. The focus of this chapter is direct manipulation, the ability for a user in virtual reality to control objects in the virtual environment in a direct and natural way, much as objects are manipulated in the real world. Direct manipulation provides many advantages for the exploration of complex, multi-dimensional data sets, by allowing the investigator the ability to intuitively explore the data environment. Because direct manipulation is essentially a control interface, it is better suited for the exploration and analysis of a data set than for the publishing or communication of features found in that data set. Thus direct manipulation is most relevant to the analysis of complex data that fills a volume of three-dimensional space, such as a fluid flow data set. Direct manipulation allows the intuitive exploration of that data, which facilitates the discovery of data features that would be difficult to find using more conventional visualization methods. Using a direct manipulation interface in virtual reality, an investigator can, for example, move a data probe about in space, watching the results and getting a sense of how the data varies within its spatial volume.

  4. Security Implications of OPC, OLE, DCOM, and RPC in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2006-01-01

    OPC is a collection of software programming standards and interfaces used in the process control industry. It is intended to provide open connectivity and vendor equipment interoperability. The use of OPC technology simplifies the development of control systems that integrate components from multiple vendors and support multiple control protocols. OPC-compliant products are available from most control system vendors, and are widely used in the process control industry. OPC was originally known as OLE for Process Control; the first standards for OPC were based on underlying services in the Microsoft Windows computing environment. These underlying services (OLE [Object Linking and Embedding], DCOM [Distributed Component Object Model], and RPC [Remote Procedure Call]) have been the source of many severe security vulnerabilities. It is not feasible to automatically apply vendor patches and service packs to mitigate these vulnerabilities in a control systems environment. Control systems using the original OPC data access technology can thus inherit the vulnerabilities associated with these services. Current OPC standardization efforts are moving away from the original focus on Microsoft protocols, with a distinct trend toward web-based protocols that are independent of any particular operating system. However, the installed base of OPC equipment consists mainly of legacy implementations of the OLE for Process Control protocols.

  5. Who's Got the Bridge? - Towards Safe, Robust Autonomous Operations at NASA Langley's Autonomy Incubator

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette; Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Crisp, Vicki K.

    2015-01-01

    NASA aeronautics research has made decades of contributions to aviation. Both aircraft and air traffic management (ATM) systems in use today contain NASA-developed and NASA sponsored technologies that improve safety and efficiency. Recent innovations in robotics and autonomy for automobiles and unmanned systems point to a future with increased personal mobility and access to transportation, including aviation. Automation and autonomous operations will transform the way we move people and goods. Achieving this mobility will require safe, robust, reliable operations for both the vehicle and the airspace and challenges to this inevitable future are being addressed now in government labs, universities, and industry. These challenges are the focus of NASA Langley Research Center's Autonomy Incubator whose R&D portfolio includes mission planning, trajectory and path planning, object detection and avoidance, object classification, sensor fusion, controls, machine learning, computer vision, human-machine teaming, geo-containment, open architecture design and development, as well as the test and evaluation environment that will be critical to prove system reliability and support certification. Safe autonomous operations will be enabled via onboard sensing and perception systems in both data-rich and data-deprived environments. Applied autonomy will enable safety, efficiency and unprecedented mobility as people and goods take to the skies tomorrow just as we do on the road today.

  6. Magnetic levitation system for moving objects

    DOEpatents

    Post, R.F.

    1998-03-03

    Repelling magnetic forces are produced by the interaction of a flux-concentrated magnetic field (produced by permanent magnets or electromagnets) with an inductively loaded closed electric circuit. When one such element moves with respect to the other, a current is induced in the circuit. This current then interacts back on the field to produce a repelling force. These repelling magnetic forces are applied to magnetically levitate a moving object such as a train car. The power required to levitate a train of such cars is drawn from the motional energy of the train itself, and typically represents only a percent or two of the several megawatts of power required to overcome aerodynamic drag at high speeds. 7 figs.

  7. Magnetic levitation system for moving objects

    DOEpatents

    Post, Richard F.

    1998-01-01

    Repelling magnetic forces are produced by the interaction of a flux-concentrated magnetic field (produced by permanent magnets or electromagnets) with an inductively loaded closed electric circuit. When one such element moves with respect to the other, a current is induced in the circuit. This current then interacts back on the field to produce a repelling force. These repelling magnetic forces are applied to magnetically levitate a moving object such as a train car. The power required to levitate a train of such cars is drawn from the motional energy of the train itself, and typically represents only a percent or two of the several megawatts of power required to overcome aerodynamic drag at high speeds.

  8. Rhetorical Moves in Problem Statement Section of Iranian EFL Postgraduate Students' Theses

    ERIC Educational Resources Information Center

    Nimehchisalem, Vahid; Tarvirdizadeh, Zahra; Paidary, Sara Sayed; Binti Mat Hussin, Nur Izyan Syamimi

    2016-01-01

    The Problem Statement (PS) section of a thesis, usually a subsection of the first chapter, is supposed to justify the objectives of the study. Postgraduate students are often ignorant of the rhetorical moves that they are expected to make in their PS. This descriptive study aimed to explore the rhetorical moves of the PS in Iranian master's (MA)…

  9. Monitoring Moving Queries inside a Safe Region

    PubMed Central

    Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan

    2014-01-01

    With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
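    The linear prediction idea can be made concrete: if the query's position p and velocity v are estimated from periodic monitoring, the time at which it leaves a circular safe region (center c, radius R; the circular shape is an illustrative assumption, not necessarily the paper's region geometry) solves |p + t·v − c| = R, a quadratic in t. A minimal 2-D sketch:

```python
import math

def safe_region_exit_time(pos, vel, center, radius):
    """Smallest t >= 0 at which a linearly moving query leaves a circular
    safe region, i.e. the positive root of |pos + t*vel - center| = radius."""
    px, py = pos[0] - center[0], pos[1] - center[1]
    vx, vy = vel
    a = vx * vx + vy * vy
    if a == 0.0:
        return None                      # stationary query never leaves
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    # The query starts inside the region (c < 0), so the discriminant is
    # positive and exactly one root is positive.
    disc = b * b - 4.0 * a * c
    return (-b + math.sqrt(disc)) / (2.0 * a)
```

    The client can schedule its next server contact just before this predicted exit time instead of re-evaluating the query at every position update.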

  10. Psychophysiological Studies in Extreme Environments

    NASA Technical Reports Server (NTRS)

    Toscano, William B.

    2011-01-01

    This paper reviews the results from two studies that employed the methodology of multiple converging indicators (physiological measures, subjective self-reports and performance metrics) to examine individual differences in the ability of humans to adapt and function in high stress environments. The first study was a joint collaboration between researchers at the US Army Research Laboratory (ARL) and NASA Ames Research Center. Twenty-four men and women active duty soldiers volunteered as participants. Field tests were conducted in the Command and Control Vehicle (C2V), an enclosed armored vehicle, designed to support both stationary and on-the-move operations. This vehicle contains four computer workstations where crew members are expected to perform command decisions in the field under combat conditions. The study objectives were: 1) to determine the incidence of motion sickness in the C2V relative to interior seat orientation/position, and parked, moving and short-haul test conditions; and 2) to determine the impact of the above conditions on cognitive performance, mood, and physiology. Data collected during field tests included heart rate, respiration rate, skin temperature, and skin conductance, self-reports of mood and symptoms, and cognitive performance metrics that included seven subtests in the DELTA performance test battery. Results showed that during 4-hour operational tests over varied terrain, motion sickness symptoms increased; performance degraded by at least 5 percent; and physiological response profiles of individuals were categorized based on good and poor cognitive performance. No differences were observed relative to seating orientation or position.

  11. Nearly automatic motion capture system for tracking octopus arm movements in 3D space.

    PubMed

    Zelman, Ido; Galun, Meirav; Akselrod-Ballin, Ayelet; Yekutieli, Yoram; Hochner, Binyamin; Flash, Tamar

    2009-08-30

    Tracking animal movements in 3D space is an essential part of many biomechanical studies. The most popular technique for human motion capture uses markers placed on the skin which are tracked by a dedicated system. However, this technique may be inadequate for tracking animal movements, especially when it is impossible to attach markers to the animal's body either because of its size or shape or because of the environment in which the animal performs its movements. Attaching markers to an animal's body may also alter its behavior. Here we present a nearly automatic markerless motion capture system that overcomes these problems and successfully tracks octopus arm movements in 3D space. The system is based on three successive tracking and processing stages. The first stage uses a recently presented segmentation algorithm to detect the movement in a pair of video sequences recorded by two calibrated cameras. In the second stage, the results of the first stage are processed to produce 2D skeletal representations of the moving arm. Finally, the 2D skeletons are used to reconstruct the octopus arm movement as a sequence of 3D curves varying in time. Motion tracking, segmentation and reconstruction are especially difficult problems in the case of octopus arm movements because of the deformable, non-rigid structure of the octopus arm and the underwater environment in which it moves. Our successful results suggest that the motion-tracking system presented here may be used for tracking other elongated objects.

  12. Health and well-being of movers in rural and urban areas--a grid-based analysis of northern Finland birth cohort 1966.

    PubMed

    Lankila, Tiina; Näyhä, Simo; Rautio, Arja; Koiranen, Markku; Rusanen, Jarmo; Taanila, Anja

    2013-01-01

    We examined the association of health and well-being with moving using a detailed geographical scale. 7845 men and women born in northern Finland in 1966 were surveyed by postal questionnaire in 1997 and linked to 1 km² geographical grids based on each subject's home address in 1997-2000. Population density was used to classify each grid as rural (1-100 inhabitants/km²) or urban (>100 inhabitants/km²) type. Moving was treated as a three-class response variate (not moved; moved to different type of grid; moved to similar type of grid). Moving was regressed on five explanatory factors (life satisfaction, self-reported health, lifetime morbidity, activity-limiting illness and use of health services), adjusting for factors potentially associated with health and moving (gender, marital status, having children, housing tenure, education, employment status and previous move). The results were expressed as odds ratios (OR) and their 95% confidence intervals (CI). Moves from rural to urban grids were associated with dissatisfaction with current life (adjusted OR 2.01; 95% CI 1.26-3.22) and having somatic (OR 1.66; 1.07-2.59) or psychiatric (OR 2.37; 1.21-4.63) morbidities, the corresponding ORs for moves from rural to other rural grids being 1.71 (0.98-2.98), 1.63 (0.95-2.78) and 2.09 (0.93-4.70), respectively. Among urban dwellers, only the frequent use of health services (≥ 21 times/year) was associated with moving, the adjusted ORs being 1.65 (1.05-2.57) for moves from urban to rural grids and 1.30 (1.03-1.64) for urban to other urban grids.
We conclude that dissatisfaction with life and history of diseases and injuries, especially psychiatric morbidity, may increase the propensity to move from rural to urbanised environments, while availability of health services may contribute to moves within urban areas and also to moves from urban areas to the countryside, where high-level health services enable a good quality of life for those attracted by the pastoral environment. Copyright © 2012 Elsevier Ltd. All rights reserved.
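    The study's ORs are adjusted regression estimates, but the underlying quantity can be illustrated with an unadjusted 2×2 table: OR = ad/bc, with a Wald 95% CI of exp(ln OR ± 1.96·SE) where SE = √(1/a + 1/b + 1/c + 1/d). A minimal sketch (the counts below are made up for illustration, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio for a 2x2 table with a Wald 95% CI.
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 20 of 100 movers vs 10 of 100 non-movers dissatisfied.
estimate, lower, upper = odds_ratio_ci(20, 80, 10, 90)
```

    A CI that excludes 1, as for the rural-to-urban ORs quoted above, indicates an association unlikely to be due to chance alone at the 5% level.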

  13. SU-G-BRA-14: Dose in a Rigidly Moving Phantom with Jaw and MLC Compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, E; Lucas, D

    Purpose: To validate dose calculation for a rigidly moving object with jaw motion and MLC shifts to compensate for the motion in a TomoTherapy™ treatment delivery. Methods: An off-line version of the TomoTherapy dose calculator was extended to perform dose calculations for rigidly moving objects. A variety of motion traces were added to treatment delivery plans, along with corresponding jaw compensation and MLC shift compensation profiles. Jaw compensation profiles were calculated by shifting the jaws such that the center of the treatment beam moved by an amount equal to the motion in the longitudinal direction. Similarly, MLC compensation profiles were calculated by shifting the MLC leaves by an amount that most closely matched the motion in the transverse direction. The same jaw and MLC compensation profiles were used during simulated treatment deliveries on a TomoTherapy system, and film measurements were obtained in a rigidly moving phantom. Results: The off-line TomoTherapy dose calculator accurately predicted dose profiles for a rigidly moving phantom along with jaw motion and MLC shifts to compensate for the motion. Calculations matched film measurements to within 2%/1 mm. Jaw and MLC compensation substantially reduced the discrepancy between the delivered dose distribution and the calculated dose with no motion. For axial motion, the compensated dose matched the no-motion dose within 2%/1mm. For transverse motion, the dose matched within 2%/3mm (approximately half the width of an MLC leaf). Conclusion: The off-line TomoTherapy dose calculator accurately computes dose delivered to a rigidly moving object, and accurately models the impact of moving the jaws and shifting the MLC leaf patterns to compensate for the motion. Jaw tracking and MLC leaf shifting can effectively compensate for the dosimetric impact of motion during a TomoTherapy treatment delivery.

  14. Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow

    PubMed Central

    Katsuyama, Narumi; Usui, Nobuo; Taira, Masato

    2016-01-01

    A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999

  15. Diffusion of Radionuclides in Concrete and Soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mattigod, Shas V.; Wellman, Dawn M.; Bovaird, Chase C.

    2012-04-25

    One of the methods being considered for safely disposing of Category 3 low-level radioactive wastes is to encase the waste in concrete. Such concrete encasement would contain and isolate the waste packages from the hydrologic environment and would act as an intrusion barrier. Any failure of concrete encasement may result in water intrusion and consequent mobilization of radionuclides from the waste packages. The mobilized radionuclides may escape from the encased concrete by mass flow and/or diffusion and move into the surrounding subsurface environment. Therefore, it is necessary to assess the performance of the concrete encasement structure and the ability of the surrounding soil to retard radionuclide migration. The objective of our study was to measure the diffusivity of Re, Tc and I in concrete containment and the surrounding vadose zone soil. Effects of carbonation, presence of metallic iron, and fracturing of concrete and the varying moisture contents in soil on the diffusivities of Tc and I were evaluated.

  16. Using animation quality metric to improve efficiency of global illumination computation for dynamic environments

    NASA Astrophysics Data System (ADS)

    Myszkowski, Karol; Tawara, Takehiro; Seidel, Hans-Peter

    2002-06-01

    In this paper, we consider applications of perception-based video quality metrics to improve the performance of global lighting computations for dynamic environments. For this purpose we extend the Visible Difference Predictor (VDP) developed by Daly to handle computer animations. We incorporate into the VDP the spatio-velocity CSF model developed by Kelly. The CSF model requires data on the velocity of moving patterns across the image plane. We use the 3D image warping technique to compensate for the camera motion, and we conservatively assume that the motion of animated objects (usually strong attractors of the visual attention) is fully compensated by the smooth pursuit eye motion. Our global illumination solution is based on stochastic photon tracing and takes advantage of temporal coherence of lighting distribution, by processing photons both in the spatial and temporal domains. The VDP is used to keep noise inherent in stochastic methods below the sensitivity level of the human observer. As a result a perceptually-consistent quality across all animation frames is obtained.

  17. Moving Another Big Desk.

    ERIC Educational Resources Information Center

    Fawcett, Gay

    1996-01-01

    New ways of thinking about leadership require that leaders move their big desks and establish environments that encourage trust and open communication. Educational leaders must trust their colleagues to make wise choices. When teachers are treated democratically as leaders, classrooms will also become democratic learning organizations. (SM)

  18. Moving Object Detection in Heterogeneous Conditions in Embedded Systems.

    PubMed

    Garbo, Alessandro; Quer, Stefano

    2017-07-01

    This paper presents a system for moving object exposure, focusing on pedestrian detection, in external, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information coming from subsequent video frames, making small computational efforts in each single frame. Its main characterizing feature is to combine several well-known movement detection and tracking techniques, and to orchestrate them in a smart way to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements, and detect and correct false positives. Accuracy and reliability mainly depend on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases communicate information and collaborate with each other, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, eventually forming an intelligent urban grid. As a matter of fact, the major contribution of the paper is the presentation of a tool for real-time applications in embedded devices with finite computational (time and memory) resources. We present experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application has similar tracking accuracy but much higher frame-per-second rates.
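    The "dynamically adjusted thresholds" idea can be illustrated with the simplest possible building block: a frame difference whose threshold adapts to the statistics of the current difference image. This is a toy sketch under that assumption, not the authors' multi-technique pipeline:

```python
import numpy as np

def detect_motion(prev, curr, k=3.0):
    """Mark pixels whose change exceeds an adaptive threshold.

    The threshold mean + k*std is recomputed from each difference image,
    so it adapts to global illumination changes and sensor noise levels.
    """
    diff = np.abs(curr.astype(float) - prev.astype(float))
    threshold = diff.mean() + k * diff.std()
    return diff > threshold

# Synthetic example: a 5x5 bright block appears between two dark frames.
prev = np.zeros((32, 32), dtype=np.uint8)
curr = prev.copy()
curr[:5, :5] = 200
mask = detect_motion(prev, curr)
```

    A real detector along the paper's lines would compute such statistics per region of interest and feed the mask into tracking and false-positive correction stages.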

  19. Multimodal control of sensors on multiple simulated unmanned vehicles.

    PubMed

    Baber, C; Morin, C; Parekh, M; Cahillane, M; Houghton, R J

    2011-09-01

    The use of multimodal (speech plus manual) control of the sensors on combinations of one, two, three or five simulated unmanned vehicles (UVs) is explored. Novice controllers of simulated UVs complete a series of target checking tasks. Two experiments compare speech and gamepad control for one, two, three or five UVs in a simulated environment. Increasing the number of UVs has an impact on subjective rating of workload (measured by the NASA-Task Load Index), particularly when moving from one to three UVs. Objective measures of performance showed that the participants tended to issue fewer commands as the number of vehicles increased (when using the gamepad control), but, while performance with a single UV was superior to that with multiple UVs, there was little difference across two, three or five UVs. Participants with low spatial ability (measured by the Object Perspectives Test) showed an increase in time to respond to warnings when controlling five UVs. Combining speech with gamepad control of sensors on UVs leads to superior performance on a secondary (respond-to-warnings) task (implying a reduction in demand) and use of fewer commands on primary (move-sensors and classify-target) tasks (implying more efficient operation). STATEMENT OF RELEVANCE: Benefits of multimodal control for unmanned vehicles are demonstrated. When controlling sensors on multiple UVs, participants with low spatial orientation scores have difficulty. The findings of these studies have implications for the selection of UV operators and suggest that future UV workstations could benefit from multimodal control.

  20. Moving Object Detection in Heterogeneous Conditions in Embedded Systems

    PubMed Central

    Garbo, Alessandro

    2017-01-01

    This paper presents a system for moving object detection, focusing on pedestrian detection, in outdoor, unfriendly, and heterogeneous environments. The system manipulates and accurately merges information from subsequent video frames, expending little computational effort on each individual frame. Its main characterizing feature is that it combines several well-known movement detection and tracking techniques and orchestrates them to obtain good results in diversified scenarios. It uses dynamically adjusted thresholds to characterize different regions of interest, and it also adopts techniques to efficiently track movements and to detect and correct false positives. Accuracy and reliability depend mainly on the overall recipe, i.e., on how the software system is designed and implemented, on how the different algorithmic phases exchange information and collaborate, and on how concurrency is organized. The application is specifically designed to work with inexpensive hardware devices, such as off-the-shelf video cameras and small embedded computational units, which could eventually form an intelligent urban grid. The major contribution of the paper is thus a tool for real-time applications on embedded devices with finite computational (time and memory) resources. We report experimental results on several video sequences (both home-made and publicly available), showing the robustness and accuracy of the overall detection strategy. Comparisons with state-of-the-art strategies show that our application achieves similar tracking accuracy at much higher frame-per-second rates. PMID:28671582
