A comparison of moving object detection methods for real-time moving object detection
NASA Astrophysics Data System (ADS)
Roshan, Aditya; Zhang, Yun
2014-06-01
Moving object detection has a wide variety of applications, from traffic monitoring, site monitoring, automatic theft identification and face detection to military surveillance. Many methods have been developed for moving object detection, but it is difficult to find one that works in all situations and with different types of videos. The purpose of this paper is to evaluate existing moving object detection methods that can be implemented in software on a desktop or laptop for real-time object detection. Several moving object detection methods are noted in the literature, but few of them are suitable for real-time moving object detection, and most of those that are real-time capable are further limited by the number of objects and the scene complexity. This paper evaluates the four most commonly used moving object detection methods: the background subtraction technique, the Gaussian mixture model, and wavelet-based and optical-flow-based methods. The work evaluates these four moving object detection methods using two different sets of cameras and two different scenes. The methods have been implemented in MATLAB and the results are compared based on completeness of detected objects, noise, sensitivity to lighting changes, processing time, etc. After comparison, it is observed that the optical-flow-based method took the least processing time and successfully detected the boundaries of moving objects, which also implies that it can be implemented for real-time moving object detection.
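The three families of methods compared above (frame/background differencing, Gaussian mixture modelling, and dense optical flow) are all available in OpenCV. The following is a minimal sketch of how they could be run side by side on the same video; it is not the authors' MATLAB implementation, and the file name and thresholds are placeholders.

```python
# Sketch: run three moving-object detection approaches on one video (OpenCV).
# "traffic.avi" and all thresholds are illustrative assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")        # placeholder video path
mog2 = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Simple frame differencing against the previous frame.
    diff = cv2.absdiff(gray, prev_gray)
    _, diff_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # 2) Gaussian mixture model (GMM) background subtraction.
    gmm_mask = mog2.apply(frame)

    # 3) Dense optical flow; moving pixels have a large flow magnitude.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mask = (np.linalg.norm(flow, axis=2) > 1.0).astype(np.uint8) * 255

    prev_gray = gray
```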
Jung, HaRim; Song, MoonBae; Youn, Hee Yong; Kim, Ung Mo
2015-01-01
A content-matched (CM) range monitoring query over moving objects continually retrieves the moving objects (i) whose non-spatial attribute values are matched to given non-spatial query values; and (ii) that are currently located within a given spatial query range. In this paper, we propose a new query indexing structure, called the group-aware query region tree (GQR-tree) for efficient evaluation of CM range monitoring queries. The primary role of the GQR-tree is to help the server leverage the computational capabilities of moving objects in order to improve the system performance in terms of the wireless communication cost and server workload. Through a series of comprehensive simulations, we verify the superiority of the GQR-tree method over the existing methods. PMID:26393613
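A content-matched range monitoring query combines an attribute predicate with a spatial containment test. The GQR-tree index itself is not reproduced here; the sketch below only illustrates, under assumed object and query structures, the brute-force evaluation that the index is meant to accelerate.

```python
# Sketch: naive evaluation of a content-matched (CM) range monitoring query.
# The GQR-tree from the paper is not reproduced; this is the check it speeds up.
from dataclasses import dataclass

@dataclass
class MovingObject:
    oid: int
    x: float
    y: float
    attrs: dict            # non-spatial attributes, e.g. {"type": "taxi"}

@dataclass
class CMRangeQuery:
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    required: dict          # non-spatial query values, e.g. {"type": "taxi"}

def evaluate(query, objects):
    result = []
    for o in objects:
        matched = all(o.attrs.get(k) == v for k, v in query.required.items())
        inside = query.xmin <= o.x <= query.xmax and query.ymin <= o.y <= query.ymax
        if matched and inside:
            result.append(o.oid)
    return result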
Monitoring Moving Queries inside a Safe Region
Al-Khalidi, Haidar; Taniar, David; Alamri, Sultan
2014-01-01
With mobile moving range queries, there is a need to recalculate the relevant surrounding objects of interest whenever the query moves. Therefore, monitoring the moving query is very costly. The safe region is one method that has been proposed to minimise the communication and computation cost of continuously monitoring a moving range query. Inside the safe region the set of objects of interest to the query do not change; thus there is no need to update the query while it is inside its safe region. However, when the query leaves its safe region the mobile device has to reevaluate the query, necessitating communication with the server. Knowing when and where the mobile device will leave a safe region is widely known as a difficult problem. To solve this problem, we propose a novel method to monitor the position of the query over time using a linear function based on the direction of the query obtained by periodic monitoring of its position. Periodic monitoring ensures that the query is aware of its location all the time. This method reduces the costs associated with communications in client-server architecture. Computational results show that our method is successful in handling moving query patterns. PMID:24696652
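The key computation in this scheme is predicting when the query will leave its safe region from its periodically monitored position and direction. The exact linear function used by the authors is not given in the abstract; the sketch below assumes a circular safe region and straight-line motion and solves for the exit time.

```python
# Sketch: predicted time at which a moving query exits a circular safe region,
# assuming straight-line motion estimated from periodic position samples.
import math

def exit_time(pos, vel, center, radius):
    """Return t >= 0 at which ||pos + t*vel - center|| == radius, or None."""
    px, py = pos[0] - center[0], pos[1] - center[1]
    vx, vy = vel
    a = vx * vx + vy * vy
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py - radius * radius
    if a == 0.0:                              # query is not moving
        return None
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b + math.sqrt(disc)) / (2.0 * a)    # larger root: forward exit
    return t if t >= 0.0 else None

# Example: query at (1, 0) moving 2 units/s along +x inside a radius-5 region.
print(exit_time((1.0, 0.0), (2.0, 0.0), (0.0, 0.0), 5.0))  # 2.0
```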
Come together, right now: dynamic overwriting of an object's history through common fate.
Luria, Roy; Vogel, Edward K
2014-08-01
The objects around us constantly move and interact, and the perceptual system needs to monitor these interactions online and to update each object's status accordingly. Gestalt grouping principles, such as proximity and common fate, play a fundamental role in how we perceive and group these objects. Here, we investigated situations in which the initial representation of an object as a separate item was updated by a subsequent Gestalt grouping cue (i.e., proximity or common fate). We used a version of the color change detection paradigm, in which the objects started to move separately, then met and stayed stationary, or moved separately, met, and then continued to move together. We monitored the object representations online using the contralateral delay activity (CDA; an ERP component indicative of the number of maintained objects), during their movement, and after the objects disappeared and became working memory representations. The results demonstrated that the objects' representations (as indicated by the CDA amplitude) persisted as being separate, even after a Gestalt proximity cue (when the objects "met" and remained stationary in the same position). Only a strong common fate Gestalt cue (when the objects not only met but also moved together) was able to override the objects' initial separate status, creating an integrated representation. These results challenge the view that Gestalt principles cause reflexive grouping. Instead, the object's initial representation plays an important role that can override even powerful grouping cues.
Svoboda, Jan; Lobellová, Veronika; Popelíková, Anna; Ahuja, Nikhil; Kelemen, Eduard; Stuchlík, Aleš
2017-03-01
Although animals often learn and monitor the spatial properties of relevant moving objects such as conspecifics and predators to properly organize their own spatial behavior, the underlying brain substrate has received little attention and hence remains elusive. Because the anterior cingulate cortex (ACC) participates in conflict monitoring and effort-based decision making, and ACC neurons respond to objects in the environment, it may also play a role in monitoring moving cues and exerting the appropriate spatial response. We used a robot avoidance task in which a rat had to maintain a distance of at least 25 cm from a small programmable robot to avoid a foot shock. In successive sessions, we trained ten Long Evans male rats to avoid a fast-moving robot (4 cm/s), a stationary robot, and a slow-moving robot (1 cm/s). In each condition, the ACC was transiently inactivated by bilateral injections of muscimol in the penultimate session and a control saline injection was given in the last session. Compared to the corresponding saline session, ACC-inactivated rats received more shocks when tested in the fast-moving condition, but not in the stationary or slow robot conditions. Furthermore, ACC-inactivated rats less frequently responded to an approaching robot with appropriate escape responses, although their response to shock stimuli remained preserved. Since we observed no effect on slow or stationary robot avoidance, we conclude that the ACC may exert cognitive effort for monitoring the dynamically updated position of an object, a role complementary to the dorsal hippocampus. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Krapukhina, Nina; Senchenko, Roman; Kamenov, Nikolay
2017-12-01
Road safety and driving in dense traffic flows pose challenges in receiving information about surrounding moving objects, some of which may be in the vehicle's blind spot. This work suggests an approach to virtual monitoring of the objects in the current road scene via a system with a multitude of cooperating smart vehicles exchanging information. It also describes the intelligent agent model and provides methods and algorithms for identifying and evaluating various characteristics of moving objects in the video flow. The authors also suggest ways of integrating the information from the technical vision system into the model, with further expansion of virtual monitoring to the system's objects. Implementation of this approach can help to expand the virtual field of view of a technical vision system.
The monocular visual imaging technology model applied in the airport surface surveillance
NASA Astrophysics Data System (ADS)
Qin, Zhe; Wang, Jian; Huang, Chao
2013-08-01
At present, civil aviation airports use surface surveillance radar monitoring and positioning systems to monitor aircraft, vehicles and other moving objects. Surface surveillance radars can cover most of the airport scene, but because of the geometry of terminals, covered bridges and other buildings, surface surveillance radar systems inevitably have some small blind spots. This paper presents a monocular vision imaging technology model for airport surface surveillance, achieving perception of the location of moving objects in the scene, such as aircraft, vehicles and personnel. This new model provides an important complement to airport surface surveillance and differs from traditional surface surveillance radar techniques. Such a technique not only provides a clear view of object activity for air traffic control (ATC), but also provides image recognition and positioning of moving targets in the area, thereby improving the efficiency of airport operations and helping to avoid conflicts between aircraft and vehicles. This paper first introduces the monocular visual imaging technology model applied to airport surface surveillance and then analyses the measurement accuracy of the model. The monocular visual imaging technology model is simple, low cost and highly efficient. It is an advanced monitoring technique that can cover the blind-spot areas of surface surveillance radar monitoring and positioning systems.
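Monocular surveillance of the airport surface can localize objects on the ground plane because the camera and the apron geometry are fixed. The abstract does not give the exact camera model used, so the sketch below assumes a simple ground-plane homography calibrated from four surveyed reference points; all coordinates are illustrative.

```python
# Sketch: mapping an image pixel to airport-surface coordinates via a ground-plane
# homography. The four reference points are assumed to be surveyed on the apron.
import cv2
import numpy as np

# Pixel coordinates of four ground markings and their surveyed positions (metres).
img_pts = np.float32([[120, 700], [1800, 690], [1600, 420], [300, 430]])
map_pts = np.float32([[0, 0], [180, 0], [180, 90], [0, 90]])

H = cv2.getPerspectiveTransform(img_pts, map_pts)

def pixel_to_surface(u, v):
    p = np.float32([[[u, v]]])
    return cv2.perspectiveTransform(p, H)[0, 0]   # (x, y) on the surface

print(pixel_to_surface(960, 560))
```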
Implementation of a pilot continuous monitoring system : Iowa Falls Arch Bridge.
DOT National Transportation Integrated Search
2015-06-01
The goal of this work was to move structural health monitoring (SHM) one step closer to being ready for mainstream use by : the Iowa Department of Transportation (DOT) Office of Bridges and Structures. To meet this goal, the objective of this project...
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity to focus its attention on the faces of detected pedestrians, collecting snapshot frames of face images by segmenting and tracking them over time at different resolutions. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects' coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. Like the client camera, this sensor is calibrated, and the position of the object detected in the image plane reference system is translated into coordinates referred to the same area map. In the map's common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects' tracks and to perform face detection and tracking. The work's novelties and strengths reside in the cooperative multi-sensor approach, in the high-resolution long-distance tracking and in the automatic collection of biometric data, such as a face clip of a person, for recognition purposes.
In this commentary we present the findings from an international consortium on fish toxicogenomics sponsored by the UK Natural Environment Research Council (NERC) with an objective of moving omic technologies into chemical risk assessment and environmental monitoring. Objectiv...
Method and apparatus for non-contact charge measurement
NASA Technical Reports Server (NTRS)
Wang, Taylor G. (Inventor); Lin, Kuan-Chan (Inventor); Hightower, James C. (Inventor)
1994-01-01
A method and apparatus for the accurate non-contact detection and measurement of static electric charge on an object using a reciprocating sensing probe that moves relative to the object. A monitor measures the signal generated as a result of this cyclical movement so as to detect the electrostatic charge on the object.
Integration across Time Determines Path Deviation Discrimination for Moving Objects
Whitaker, David; Levi, Dennis M.; Kennedy, Graeme J.
2008-01-01
Background: Human vision is vital in determining our interaction with the outside world. In this study we characterize our ability to judge changes in the direction of motion of objects, a common task which can allow us either to intercept moving objects, or else avoid them if they pose a threat. Methodology/Principal Findings: Observers were presented with objects which moved across a computer monitor on a linear path until the midline, at which point they changed their direction of motion, and observers were required to judge the direction of change. In keeping with the variety of objects we encounter in the real world, we varied characteristics of the moving stimuli such as velocity, extent of motion path and the object size. Furthermore, we compared performance for moving objects with the ability of observers to detect a deviation in a line which formed the static trace of the motion path, since it has been suggested that a form of static memory trace may form the basis for these types of judgment. The static line judgments were well described by a ‘scale invariant’ model in which any two stimuli which possess the same two-dimensional geometry (length/width) result in the same level of performance. Performance for the moving objects was entirely different. Irrespective of the path length, object size or velocity of motion, path deviation thresholds depended simply upon the duration of the motion path in seconds. Conclusions/Significance: Human vision has long been known to integrate information across space in order to solve spatial tasks such as judgment of orientation or position. Here we demonstrate an intriguing mechanism which integrates direction information across time in order to optimize the judgment of path deviation for moving objects. PMID:18414653
Object tracking via background subtraction for monitoring illegal activity in crossroad
NASA Astrophysics Data System (ADS)
Ghimire, Deepak; Jeong, Sunghwan; Park, Sang Hyun; Lee, Joonwhoan
2016-07-01
In the field of intelligent transportation systems, a great number of vision-based techniques have been proposed to prevent pedestrians from being hit by vehicles. This paper presents a system that can perform pedestrian and vehicle detection and monitor illegal activity at zebra crossings. At a zebra crossing, depending on the traffic light status, a driver or pedestrian should be warned early if they make any illegal moves, so that a collision can be fully avoided. In this research, we first detect the status of the pedestrian traffic light and monitor the crossroad for vehicle and pedestrian movements. Background subtraction based object detection and tracking is performed to detect pedestrians and vehicles at the crossroad. Shadow removal, blob segmentation, trajectory analysis, etc. are used to improve the object detection and classification performance. We demonstrate the experiment on several video sequences recorded at different times and in different environments, such as daytime and nighttime, and sunny and rainy conditions. Our experimental results show that such a simple and efficient technique can be used successfully as a traffic surveillance system to prevent accidents at zebra crossings.
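One way to flag an illegal move in the crosswalk is to combine the tracked blob centroids with the traffic-light status and the zebra-crossing polygon. The sketch below assumes the polygon, the light state and the track centroids are already available; it is only an illustration of the rule, not the authors' full pipeline.

```python
# Sketch: flag pedestrians inside the zebra-crossing polygon while their light is red.
# The polygon coordinates and the track/light inputs are illustrative assumptions.
import cv2
import numpy as np

crossing_poly = np.array([[400, 300], [900, 300], [950, 500], [350, 500]],
                         np.int32).reshape(-1, 1, 2)

def illegal_pedestrian_moves(tracks, pedestrian_light):
    """tracks: {track_id: (cx, cy)}; pedestrian_light: 'red' or 'green'."""
    violations = []
    if pedestrian_light != "red":
        return violations
    for tid, (cx, cy) in tracks.items():
        inside = cv2.pointPolygonTest(crossing_poly, (float(cx), float(cy)), False) >= 0
        if inside:
            violations.append(tid)
    return violations

print(illegal_pedestrian_moves({1: (600, 400), 2: (100, 100)}, "red"))  # [1]
```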
Research on moving object detection based on frog's eyes
NASA Astrophysics Data System (ADS)
Fu, Hongwei; Li, Dongguang; Zhang, Xinyuan
2008-12-01
On the basis of the object information processing mechanism of frog's eyes, this paper discusses a bionic detection technology suitable for object information processing based on frog vision. First, a bionic detection theory imitating frog vision is established; it is a parallel processing mechanism that includes the pick-up and pretreatment of object information, parallel separation of the digital image, parallel processing, and information synthesis. A computer vision detection system is described that detects moving objects of a specific color and shape; experiments indicate that detection results can be obtained even against a cluttered background. A moving object detection electronic model imitating biological vision based on frog's eyes is established. In this system the analog video signal is first digitized, then the digital signal is separated in parallel by an FPGA. In the parallel processing, the video information can be captured, processed and displayed at the same time, and information fusion is performed through the DSP HPI ports in order to transmit the data processed by the DSP. This system can watch a larger visual field and obtain higher image resolution than ordinary monitoring systems. In summary, simulation experiments on edge detection of moving objects with the Canny algorithm based on this system indicate that the system can detect the edges of moving objects in real time; the feasibility of the bionic model was fully demonstrated in the engineering system, laying a solid foundation for future study of detection technology imitating biological vision.
Infrared Thermography Sensor for Temperature and Speed Measurement of Moving Material.
Usamentiaga, Rubén; García, Daniel Fernando
2017-05-18
Infrared thermography offers significant advantages in monitoring the temperature of objects over time, but crucial aspects need to be addressed. Movements between the infrared camera and the inspected material seriously affect the accuracy of the calculated temperature. These movements can be the consequence of solid objects that are moved, molten metal poured, material on a conveyor belt, or just vibrations. This work proposes a solution for monitoring the temperature of material in these scenarios. In this work both real movements and vibrations are treated equally, proposing a unified solution for both problems. The three key steps of the proposed procedure are image rectification, motion estimation and motion compensation. Image rectification calculates a front-parallel projection of the image that simplifies the estimation and compensation of the movement. Motion estimation describes the movement using a mathematical model, and estimates the coefficients using robust methods adapted to infrared images. Motion is finally compensated for in order to produce the correct temperature time history of the monitored material regardless of the movement. The result is a robust sensor for temperature of moving material that can also be used to measure the speed of the material. Different experiments are carried out to validate the proposed method in laboratory and real environments. Results show excellent performance.
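The abstract names three steps: image rectification, motion estimation and motion compensation. A minimal sketch of that chain, not the authors' robust estimator, could look as follows with OpenCV: rectify with a pre-calibrated homography, estimate a translational shift by phase correlation, and warp the frame back. The homography, the grayscale thermal input and the purely translational motion model are all assumptions.

```python
# Sketch of the rectification / motion-estimation / compensation chain for
# thermographic frames. H_rect is assumed to come from a prior calibration and
# frames are assumed to be single-channel; only translation is compensated here,
# unlike the paper's robust model fit.
import cv2
import numpy as np

def compensate(prev_rect, frame, H_rect, size):
    rect = cv2.warpPerspective(frame, H_rect, size)          # 1) rectification
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(prev_rect),  # 2) motion estimation
                                     np.float32(rect))
    T = np.float32([[1, 0, -dx], [0, 1, -dy]])
    stabilized = cv2.warpAffine(rect, T, size)               # 3) motion compensation
    return rect, stabilized
```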
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are monitored by these cameras and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that considers privacy protection. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies the requirements. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image with the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes objects that are unrecognized or invisible. We also introduce a so-called "special viewer" in order to decrypt and display the original objects. This special viewer can be used by a limited number of users when necessary, for example for crime investigation. The special viewer allows us to choose the objects to be decoded and displayed. Moreover, in the proposed system, real-time processing can be performed, since no future frame is needed to generate a bitstream.
Peripheral Visual Cues Contribute to the Perception of Object Movement During Self-Movement
Rogers, Cassandra; Warren, Paul A.
2017-01-01
Safe movement through the environment requires us to monitor our surroundings for moving objects or people. However, identification of moving objects in the scene is complicated by self-movement, which adds motion across the retina. To identify world-relative object movement, the brain thus has to ‘compensate for’ or ‘parse out’ the components of retinal motion that are due to self-movement. We have previously demonstrated that retinal cues arising from central vision contribute to solving this problem. Here, we investigate the contribution of peripheral vision, commonly thought to provide strong cues to self-movement. Stationary participants viewed a large field of view display, with radial flow patterns presented in the periphery, and judged the trajectory of a centrally presented probe. Across two experiments, we demonstrate and quantify the contribution of peripheral optic flow to flow parsing during forward and backward movement. PMID:29201335
Howard, Christina J; Rollings, Victoria; Hardie, Amy
2017-06-01
In tasks where people monitor moving objects, such as the multiple object tracking task (MOT), observers attempt to keep track of targets as they move amongst distracters. The literature is mixed as to whether observers make use of motion information to facilitate performance. We sought to address this by two means: first, by superimposing arrows on objects which varied in their informativeness about motion direction, and second, by asking observers to attend to motion direction. Using a position monitoring task, we calculated mean error magnitudes as a measure of the precision with which target positions are represented. We also calculated perceptual lags versus extrapolated reports, which are the times at which the positions of targets best match position reports. We find that the presence of motion information in the form of superimposed arrows made no difference to position report precision or to perceptual lag. However, when we explicitly instructed observers to attend to motion, we saw facilitatory effects on position reports and, in some cases, reports that best matched extrapolated rather than lagging positions for small set sizes. The results indicate that attention to changing positions does not automatically recruit attention to motion, showing a dissociation between sustained attention to changing positions and attention to motion. Copyright © 2017 Elsevier Ltd. All rights reserved.
CCD high-speed videography system with new concepts and techniques
NASA Astrophysics Data System (ADS)
Zheng, Zengrong; Zhao, Wenyi; Wu, Zhiqiang
1997-05-01
A novel CCD high-speed videography system with new concepts and techniques has recently been developed by Zhejiang University. The system can send a series of short flash pulses to the moving object. All of the parameters, such as the number of flashes, flash durations, flash intervals, flash intensities and flash colors, can be controlled by the computer according to need. A series of images of the moving object, frozen by the flash pulses and carrying information about the moving object, is recorded by a CCD video camera, and the resulting images are sent to a computer to be frozen, recognized and processed with special hardware and software. The obtained parameters can be displayed, output as remote control signals or written to CD. The highest videography frequency is 30,000 images per second. The shortest image freezing time is several microseconds. The system has been applied to a wide range of fields, including energy, chemistry, medicine, biological engineering, aerodynamics, explosions, multi-phase flow, mechanics, vibration, athletic training, weapon development and national defense engineering. It can also be used on production lines to carry out online, real-time monitoring and control.
NASA Astrophysics Data System (ADS)
Kuschmierz, R.; Czarske, J.; Fischer, A.
2014-08-01
Optical measurement techniques offer great opportunities in diverse applications, such as lathe monitoring and microfluidics. Doppler-based interferometric techniques enable simultaneous measurement of the lateral velocity and axial distance of a moving object. However, there is a complementarity between the unambiguous axial measurement range and the uncertainty of the distance. Therefore, we present an extended sensor setup, which provides an unambiguous axial measurement range of 1 mm while achieving uncertainties below 100 nm. Measurements at a calibration system are performed. When using a pinhole for emulating a single scattering particle, the tumbling motion of the rotating object is resolved with a distance uncertainty of 50 nm. For measurements at the rough surface, the distance uncertainty amounts to 280 nm due to a lower signal-to-noise ratio. Both experimental results are close to the respective Cramér-Rao bound, which is derived analytically for both surface and single particle measurements.
USDA-ARS?s Scientific Manuscript database
The objectives of this study were to characterize wireless sensor nodes that we developed in terms of power consumption and functionality, and compare the performance of mesh and non-mesh wireless sensor networks (WSNs) comprised mainly of infrared thermometer thermocouples located on a center pivot...
Real-time Human Activity Recognition
NASA Astrophysics Data System (ADS)
Albukhary, N.; Mustafah, Y. M.
2017-11-01
The traditional closed-circuit television (CCTV) system requires a human to monitor the CCTV feed 24/7, which is inefficient and costly. Therefore, there is a need for a system that can recognize human activity effectively in real time. This paper concentrates on recognizing simple activities such as walking, running, sitting, standing and landing by using image processing techniques. Firstly, object detection is done by using background subtraction to detect moving objects. Then, object tracking and object classification are constructed so that different persons can be differentiated by using feature detection. Geometrical attributes of the tracked object, namely the centroid and aspect ratio, are used so that simple activities can be detected.
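The classification step described here relies only on simple geometric attributes of the tracked blob. The sketch below illustrates the idea with aspect-ratio and centroid-speed rules; the categories grouped this way, the threshold values and the "lying" label are illustrative assumptions, not values from the paper.

```python
# Sketch: classify a tracked person from blob aspect ratio and centroid speed.
# All thresholds and the exact category mapping are illustrative assumptions.
def classify_activity(width, height, centroid_speed_px_s):
    aspect = height / float(width)
    if aspect > 1.8:                       # tall, thin blob: upright person
        if centroid_speed_px_s > 120:
            return "running"
        if centroid_speed_px_s > 20:
            return "walking"
        return "standing"
    if aspect > 0.9:
        return "sitting"
    return "lying"                         # wide, low blob

print(classify_activity(40, 100, 60))      # walking
```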
Security Event Recognition for Visual Surveillance
NASA Astrophysics Data System (ADS)
Liao, W.; Yang, C.; Yang, M. Ying; Rosenhahn, B.
2017-05-01
With the rapidly increasing deployment of surveillance cameras, reliable methods for automatically analyzing surveillance video and recognizing special events are demanded by different practical applications. This paper proposes a novel, effective framework for security event analysis in surveillance videos. First, a convolutional neural network (CNN) framework is used to detect objects of interest in the given videos. Second, the owners of the objects are recognized and monitored in real time as well. If anyone moves any object, this person is verified as to whether he/she is its owner. If not, the event is further analyzed and distinguished between two different scenes: moving the object away or stealing it. To validate the proposed approach, a new video dataset consisting of various scenarios is constructed for more complex tasks. For comparison purposes, experiments are also carried out on benchmark databases related to the task of abandoned luggage detection. The experimental results show that the proposed approach outperforms the state-of-the-art methods and is effective in recognizing complex security events.
Real-time moving objects detection and tracking from airborne infrared camera
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2017-10-01
Detecting and tracking moving objects in real time from an airborne infrared (IR) camera offers interesting possibilities in video surveillance, remote sensing and computer vision applications, such as monitoring large areas simultaneously, quickly changing the point of view on the scene and pursuing objects of interest. To fully exploit such potential, versatile solutions are needed, but, in the literature, the majority of them work only under specific conditions regarding the considered scenario, the characteristics of the moving objects or the aircraft movements. In order to overcome these limitations, we propose a novel approach to the problem, based on the use of a cheap inertial navigation system (INS) mounted on the aircraft. To jointly exploit the information contained in the acquired video sequence and the data provided by the INS, a specific detection and tracking algorithm has been developed. It consists of three main stages performed iteratively on each acquired frame: the detection stage, in which a coarse detection map is computed using a local statistic that is both fast to calculate and robust to noise and to self-deletion of the targeted objects; the registration stage, in which the positions of the detected objects are coherently reported in a common reference frame by exploiting the INS data; and the tracking stage, in which steady objects are rejected, moving objects are tracked, and an estimate of their future position is computed to be used in the subsequent iteration. The algorithm has been tested on a large dataset of simulated IR video sequences recreating different environments and different movements of the aircraft. Promising results have been obtained, both in terms of detection and false alarm rate, and in terms of accuracy in the estimation of the position and velocity of the objects. In addition, for each frame, the detection and tracking map was generated by the algorithm before the acquisition of the subsequent frame, proving its capability to work in real time.
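The registration stage reports each image-plane detection in a common ground reference frame using the INS pose. The abstract does not give the exact transformation; the sketch below assumes a flat ground plane, a nadir-pointing camera and an INS pose of (easting, northing, yaw), with a known ground sample distance.

```python
# Sketch: register per-frame detections in a common ground frame using INS data.
# Assumes a nadir-looking camera over flat ground; gsd is the ground sample
# distance (metres per pixel), pose = (easting, northing, yaw) from the INS.
import math

def register_detection(px, py, image_size, pose, gsd):
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    # Camera-frame offset in metres (x right, y forward from the image centre).
    xc, yc = (px - cx) * gsd, (cy - py) * gsd
    e, n, yaw = pose
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    east = e + cos_y * xc - sin_y * yc
    north = n + sin_y * xc + cos_y * yc
    return east, north
```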
Research on moving target defense based on SDN
NASA Astrophysics Data System (ADS)
Chen, Mingyong; Wu, Weimin
2017-08-01
An address mutation strategy is proposed. This strategy provides unpredictable changes of address, replacing the real address during packet forwarding and mutating the path, thus hiding the real address of the host and the real path. On this basis, a moving target defense technology based on spatio-temporal mutation is proposed, which takes advantage of the centralized control architecture of software-defined networking (SDN) and combines sFlow traffic monitoring with moving target defense. The mutation period can be changed in real time according to the network traffic, and the controller changes the destination address while the data packet is transferred between the switches, constructing a moving target that confuses hosts within the network and thereby protects the host and the network.
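The description amounts to periodically replacing the real host address with a virtual one, with the mutation period adapted to the traffic observed by sFlow. The sketch below only illustrates that mutation loop in plain Python; it is not an SDN controller, and the address pool, traffic reading and timing policy are all assumptions.

```python
# Sketch: adaptive address mutation for a moving-target defense.
# The virtual address pool, traffic readings and period bounds are illustrative;
# a real deployment would push these mappings to switches via an SDN controller.
import random

POOL = [f"10.0.{i}.{j}" for i in range(1, 5) for j in range(1, 255)]

def mutation_period(packets_per_s, t_min=5.0, t_max=60.0):
    # Higher observed traffic -> faster mutation (shorter period).
    return max(t_min, t_max / (1.0 + packets_per_s / 100.0))

def mutate(mapping, hosts):
    used = set(mapping.values())
    for h in hosts:
        mapping[h] = random.choice([a for a in POOL if a not in used])
        used.add(mapping[h])
    return mapping

mapping = mutate({}, ["host-A", "host-B"])
print(mapping, mutation_period(250.0))
```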
More About The Video Event Trigger
NASA Technical Reports Server (NTRS)
Williams, Glenn L.
1996-01-01
The report presents additional information about the system described in "Video Event Trigger" (LEW-15076). This digital electronic system processes video-image data to generate a trigger signal when the image shows a significant change, such as motion, appearance, disappearance, or a change in the color, brightness, or dilation of an object. Potential uses include monitoring of hallways, parking lots, and other areas during hours when they are supposed to be unoccupied, looking for fires, tracking airplanes or other moving objects, identification of missing or defective parts on production lines, and video recording of automobile crash tests.
Infrared system for monitoring movement of objects
Valentine, Kenneth H.; Falter, Diedre D.; Falter, Kelly G.
1991-01-01
A system for monitoring moving objects, such as the flight of honeybees and other insects, using a pulsed laser light source. This system has a self-powered, micro-miniaturized transmitting unit powered, in the preferred embodiment, by an array of solar cells. This transmitting unit is attached to the object to be monitored. The solar cells provide current to an energy storage capacitor to produce, for example, five volts for the operation of the transmitter. In the simplest embodiment, the voltage on the capacitor operates a pulse generator to provide a pulsed energizing signal to one or more very small laser diodes. The pulsed light is then received at a receiving base station using substantially standard means which converts the light to an electrical signal for processing in a microprocessor to create the information as to the movement of the object. In the case of a unit for monitoring honeybees and other insects, the transmitting unit weighs less than 50 mg and has a size no larger than 1×3×5 millimeters. Also, the preferred embodiment provides for coding of the light to uniquely identify the particular transmitting unit being monitored. A "wake-up" circuit is provided in the preferred embodiment whereby there is no transmission until the voltage on the capacitor has exceeded a pre-set threshold. Various other uses of the motion-detection system are described.
Infrared system for monitoring movement of objects
Valentine, K.H.; Falter, D.D.; Falter, K.G.
1991-04-30
A system is described for monitoring moving objects, such as the flight of honeybees and other insects, using a pulsed laser light source. This system has a self-powered micro-miniaturized transmitting unit powered, in the preferred embodiment, with an array of solar cells. This transmitting unit is attached to the object to be monitored. These solar cells provide current to a storage energy capacitor to produce, for example, five volts for the operation of the transmitter. In the simplest embodiment, the voltage on the capacitor operates a pulse generator to provide a pulsed energizing signal to one or more very small laser diodes. The pulsed light is then received at a receiving base station using substantially standard means which converts the light to an electrical signal for processing in a microprocessor to create the information as to the movement of the object. In the case of a unit for monitoring honeybees and other insects, the transmitting unit weighs less than 50 mg, and has a size no larger than 1×3×5 millimeters. Also, the preferred embodiment provides for the coding of the light to uniquely identify the particular transmitting unit that is being monitored. A "wake-up" circuit is provided in the preferred embodiment whereby there is no transmission until the voltage on the capacitor has exceeded a pre-set threshold. Various other uses of the motion-detection system are described. 4 figures.
Near real-time analysis of tritium in treated water
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skibo, A.
The Tokyo Electric Power Company (TEPCO) is managing large quantities of treated water at the Fukushima Daiichi Nuclear Power Station. Moving forward, TEPCO will be discharging from the site clean water that meets agreed criteria. As part of agreements with stakeholders, TEPCO is planning to carefully monitor the water prior to discharge to assure compliance. The objective of this proposal is to support implementation of an on-line "real-time" (continuous or semi-continuous) tritium monitor that will reliably measure levels down to the agreed target of 1,500 becquerels per liter (Bq/L).
2009-01-08
CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, the MAXI (Monitor of All-sky X-ray Image) is moved toward the Japanese Experiment Module's Experiment Logistics Module-Exposed Section, or ELM-ES, where it will be installed. The MAXI is part of space shuttle Endeavour's payload on the STS-127 mission. Using X-ray slit cameras with high sensitivity, the MAXI will continuously monitor astronomical X-ray objects over a broad energy band (0.5 to 30 keV). Endeavour is targeted to launch May 15. Photo credit: NASA/Jim Grossmann
NASA Astrophysics Data System (ADS)
Zingoni, Andrea; Diani, Marco; Corsini, Giovanni
2016-10-01
We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real-time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (up to single-pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken in avoiding computationally costly procedures, in order to ensure the real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and objects characteristics. In addition, the detection map was produced frame by frame in real-time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video-surveillance and computer vision. Its reliability and speed permit it to be used also in critical situations, like in search and rescue, defence and disaster monitoring.
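The core of the algorithm is a per-pixel background estimate that is updated slowly, is not biased by transiting objects, and is then subtracted from each frame. The exact estimator is not given in the abstract; the sketch below uses a selective exponential moving average as a stand-in, with illustrative parameters.

```python
# Sketch: per-pixel background estimation and rejection for dim-target detection.
# A selective exponential moving average stands in for the paper's estimator:
# pixels that deviate strongly from the background are reported as detections
# and are excluded from the update, so transits do not bias the estimate.
import numpy as np

def detect_dim_targets(frame, background, alpha=0.01, k=3.0, noise_sigma=2.0):
    frame = frame.astype(np.float32)
    residual = frame - background
    detections = np.abs(residual) > k * noise_sigma       # hotter or colder targets
    update_mask = ~detections                              # background-only pixels
    background[update_mask] += alpha * residual[update_mask]
    return detections, background
```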
Measuring attention using induced motion.
Gogel, W C; Sharkey, T J
1989-01-01
Attention was measured by means of its effect upon induced motion. Perceived horizontal motion was induced in a vertically moving test spot by the physical horizontal motion of inducing objects. All stimuli were in a frontoparallel plane. The induced motion vectored with the physical motion to produce a clockwise or counterclockwise tilt in the apparent path of motion of the test spot. Either a single inducing object or two inducing objects moving in opposite directions were used. Twelve observers were instructed to attend to or to ignore the single inducing object while fixating the test object and, when the two opposing inducing objects were present, to attend to one inducing object while ignoring the other. Tracking of the test spot was visually monitored. The tilt of the path of apparent motion of the test spot was measured by tactile adjustment of a comparison rod. It was found that the measured tilt was substantially larger when the single inducing object was attended rather than ignored. For the two inducing objects, attending to one while ignoring the other clearly increased the effectiveness of the attended inducing object. The results are analyzed in terms of the distinction between voluntary and involuntary attention. The advantages of measuring attention by its effect on induced motion as compared with the use of a precueing procedure, and a hypothesis regarding the role of attention in modifying perceived spatial characteristics are discussed.
Pacific Northwest Aquatic Monitoring Partnership 2017 Annual Report
Puls, Amy L.; Scully, Rebecca A.; Dethloff, Megan M.; Bayer, Jennifer M.; Olson, Sheryn J.; Cimino, Samuel A.
2018-01-01
The Pacific Northwest Aquatic Monitoring Partnership (PNAMP) continued to promote the integration of monitoring resources and development of tools to support monitoring in 2017. Improved coordination and integration of goals, objectives, and activities among Pacific Northwest monitoring programs is essential to improving the quality and consistency of monitoring in the region.PNAMP operates through inter-organizational teams to make progress on a variety of projects identified to support partner needs and PNAMP goals. These teams are largely ad hoc and formed for the specific purpose of achieving the objectives of the identified projects. For each project, the PNAMP Coordination Team identified interested Steering Committee (SC) members and subject matter experts to form the working teams that provide guidance and leadership. In addition, the teams acted as an intermediary between the larger group of interested participants and the SC, thus maintaining the goal of better SC/participant exchange. The PNAMP Coordination Team continued to facilitate dialog among experts to move forward with ongoing and new projects. In addition, the Coordination Team continued their efforts to track in-kind contributions of time from participants at meetings, workshops, and other PNAMP hosted events; in 2017 this estimate amounted to 2,039 hours by 67 organizations.
Ernst, Zachary Raymond; Palmer, John; Boynton, Geoffrey M.
2012-01-01
In object-based attention, it is easier to divide attention between features within a single object than between features across objects. In this study we test the prediction of several capacity models in order to best characterize the cost of dividing attention between objects. Here we studied behavioral performance on a divided attention task in which subjects attended to the motion and luminance of overlapping random dot kinematograms, specifically red upward-moving dots superimposed with green downward-moving dots. Subjects were required to detect brief changes (transients) in the motion or luminance within the same surface or across different surfaces. There were two primary results. First, the dual-task deficit was large when attention was divided across two surfaces and near zero when attention was divided within a surface. This is consistent with limited-capacity processing across surfaces and unlimited-capacity processing within a surface—a pattern predicted by established theories of object-based attention. Second and unexpectedly, there was evidence of crosstalk between features: when cued to monitor transients on one surface, response rates were inflated by the presence of a transient on the other surface. Such crosstalk is a failure of selective attention between surfaces. PMID:23149301
A sudden brightness decrease of the young pre-MS object GM Cep
NASA Astrophysics Data System (ADS)
Munari, U.; Castellani, F.; Giannini, T.; Antoniucci, S.; Lorenzetti, D.
2017-11-01
In the framework of our EXor monitoring programme dubbed EXORCISM (EXOR OptiCal and Infrared Systematic Monitoring - Antoniucci et al. 2013 PPVI, Lorenzetti et al. 2007 ApJ 665, 1182; Lorenzetti et al. 2009 ApJ 693, 1056), we observed a new fading of the optical brightness of the Young Stellar Object (YSO) GM Cep (d=870 pc). This is a well studied variable (Semkov & Peneva 2012 APSS,338,95; Ibryamov et al. 2015 PASA,32,11; Xiao, Kroll, & Henden 2010 AJ, 139, 1527; Sicilia-Aguilar et al. 2008 ApJ,673,382-3) whose light-curve is dominated by recurrent brightness dims, interpreted as non-periodical eclipse events due to orbiting dust structures that move along the line of sight (UXor-type variability - Grinin 1988).
A mobile agent-based moving objects indexing algorithm in location based service
NASA Astrophysics Data System (ADS)
Fang, Zhixiang; Li, Qingquan; Xu, Hong
2006-10-01
This paper extends the advantages of location based services, specifically their ability to manage and index the positions of moving objects. With this objective in mind, a mobile agent-based moving object indexing algorithm is proposed to efficiently process indexing requests and to adapt itself to the limitations of the location based service environment. The prominent feature of this structure is that it views a moving object's behavior as the mobile agent's span; a unique mapping between the geographical position of moving objects and the span points of mobile agents is built to maintain the close relationship between them, and it provides a significant clue for mobile agent-based moving object indexing to track moving objects.
Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes
NASA Astrophysics Data System (ADS)
Denasi, Sandra; Quaglia, Giorgio
1993-08-01
Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps people drive more safely. Car detection is one of the topics addressed by the program. Our contribution proposes to carry out this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.
2009-01-08
CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a crane is moved over the MAXI (Monitor of All-sky X-ray Image). The crane will lift the MAXI onto the Japanese Experiment Module's Experiment Logistics Module-Exposed Section, or ELM-ES, where it will be installed. The MAXI is part of space shuttle Endeavour's payload on the STS-127 mission. Using X-ray slit cameras with high sensitivity, the MAXI will continuously monitor astronomical X-ray objects over a broad energy band (0.5 to 30 keV). Endeavour is targeted to launch May 15. Photo credit: NASA/Jim Grossmann
2009-01-08
CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a crane lifts the MAXI (Monitor of All-sky X-ray Image) to move it onto the Japanese Experiment Module's Experiment Logistics Module-Exposed Section, or ELM-ES, where it will be installed. The MAXI is part of space shuttle Endeavour's payload on the STS-127 mission. Using X-ray slit cameras with high sensitivity, the MAXI will continuously monitor astronomical X-ray objects over a broad energy band (0.5 to 30 keV). Endeavour is targeted to launch May 15. Photo credit: NASA/Jim Grossmann
2009-01-08
CAPE CANAVERAL, Fla. -- In the Space Station Processing Facility at NASA's Kennedy Space Center in Florida, a crane lifts the MAXI (Monitor of All-sky X-ray Image) to move it onto the Japanese Experiment Module's Experiment Logistics Module-Exposed Section, or ELM-ES, where it will be installed. The MAXI is part of space shuttle Endeavour's payload on the STS-127 mission. Using X-ray slit cameras with high sensitivity, the MAXI will continuously monitor astronomical X-ray objects over a broad energy band (0.5 to 30 keV). Endeavour is targeted to launch May 15. Photo credit: NASA/Jim Grossmann
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. However, with the complex scenes of the real world, false detections, missed detections and deficiencies resulting from cavities inside the object body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference and Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which provide spatial compensation, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods, namely GMM, ViBe, frame difference and a method from the literature, the proposed method improves the efficiency and accuracy of detection.
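The combination described above can be sketched as follows: a frame-difference mask and a Gaussian-mixture foreground mask are fused, then morphological closing and filled contours compensate for cavities. The paper's specific image-repair step and the improved frame-difference variant are not reproduced; the fusion rule and parameters below are assumptions.

```python
# Sketch: fuse frame differencing with MOG2 background subtraction and fill cavities.
# Filled contours stand in for the paper's image-repair step; thresholds are illustrative.
import cv2
import numpy as np

mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))

def detect(prev_gray, gray, frame):
    diff = cv2.absdiff(gray, prev_gray)
    _, diff_mask = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    gmm_mask = mog2.apply(frame)
    fused = cv2.bitwise_or(diff_mask, gmm_mask)               # spatial compensation
    fused = cv2.morphologyEx(fused, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(fused, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    filled = np.zeros_like(fused)
    cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)  # fill cavities
    return filled
```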
Robust feedback zoom tracking for digital video surveillance.
Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong
2012-01-01
Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called "trace curve", which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered the best controller in the absence of knowledge of the underlying process, and because of its high-quality performance in motor control, in this paper we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects, which is the key challenge in video surveillance.
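The feedback part of the approach is a PID loop that drives the focus motor toward the in-focus position predicted by the estimated trace curve. The sketch below shows that loop only; the gains and the trace-curve model are placeholders, not values from the paper.

```python
# Sketch: PID feedback driving the focus motor toward the trace-curve estimate.
# Gains, time step and the trace-curve function are illustrative placeholders.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_err) / dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def focus_command(zoom_pos, focus_pos, trace_curve, pid, dt=0.02):
    target = trace_curve(zoom_pos)          # estimated in-focus motor position
    return pid.step(target - focus_pos, dt)

pid = PID(kp=0.8, ki=0.05, kd=0.01)
print(focus_command(1200, 950, lambda z: 0.8 * z, pid))
```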
Privacy Protection by Masking Moving Objects for Security Cameras
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies those requirements. The underlying concept of our proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image, which includes the deteriorated objects that are unrecognizable or invisible. On the other hand, for crime investigation, this system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows us to select the objects to be decoded and displayed. We provide an implementation example, experimental results, and performance evaluations to support our proposed framework.
Liu, Shiqi; Li, Qifeng; Li, Yumei; Lv, Yi; Niu, Jianhua; Xu, Quan; Zhao, Jingru; Chen, Yajun; Wang, Dayong; Bai, Ruimiao
2018-06-01
This case study concerns the meticulous observation of the moving process and track of 2 ingested needles using interval x-ray radiography, aiming to localize the foreign bodies and reduce unnecessary exploration of the digestive tract. An unusual case of a 1-year, 9-month-old female baby with an incarcerated hernia perforation caused by sewing needles with sharp ends is reported herein. The patient had swallowed 2 sewing needles. One needle was excreted uneventfully after 8 days. The other needle stabbed through the wall of the ileocecal junction into the right side of the inguinal hernia sac after 9 days, and the patient underwent successful operative management. Interval x-ray confirmed that one needle-like foreign body moved down over 8 days until it was excreted with the feces, whereas the other pierced into the incarcerated hernia. Preoperative x-ray radiography successfully monitored the moving process and track of the sewing needles. Considering the penetrating and migrating nature of the foreign bodies, once sharp-pointed objects are located they should be removed, as the mortality and risk of related complications may otherwise increase. Interval x-ray radiography represents a meticulous preoperative method for monitoring the moving process and track of needle-like foreign bodies. Interval x-ray with real-time images that accurately detect moving foreign bodies can help to reduce unnecessary exploration of the digestive tract and subsequently prevent possible complications. Based on the findings from the interval x-ray, treatment choices of endoscopic removal or surgical intervention may be attempted.
Monitoring Aircraft Motion at Airports by LIDAR
NASA Astrophysics Data System (ADS)
Toth, C.; Jozkow, G.; Koppanyi, Z.; Young, S.; Grejner-Brzezinska, D.
2016-06-01
Improving sensor performance, combined with better affordability, provides better object space observability, resulting in new applications. Remote sensing systems are primarily concerned with acquiring data on the static components of our environment, such as the topographic surface of the earth, transportation infrastructure, city models, etc. Observing the dynamic component of the object space is still rather rare in the geospatial application field; vehicle extraction and traffic flow monitoring are a few examples of using remote sensing to detect and model moving objects. Deploying a network of inexpensive LiDAR sensors along taxiways and runways can provide geospatial data that is both geometrically and temporally rich, so that the aircraft body can be extracted from the point cloud and, based on consecutive point clouds, motion parameters can be estimated. Acquiring accurate aircraft trajectory data is essential to improve aviation safety at airports. This paper reports on the initial experiences obtained by using a network of four Velodyne VLP-16 sensors to acquire data along a runway segment.
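A minimal sketch of the kind of motion estimate such a sensor network enables: assuming each scan yields a point cloud containing a single aircraft, planar velocity and heading can be approximated from the centroid displacement between consecutive clouds. The ground-removal threshold and the synthetic example are illustrative assumptions, not the processing chain used in the paper.

    import numpy as np

    def estimate_motion(cloud_t0, cloud_t1, dt, ground_z=0.5):
        """Rough planar velocity and heading of an aircraft from two consecutive
        LiDAR point clouds (N x 3 arrays of x, y, z); ground_z is an assumed
        height threshold used to strip ground returns."""
        body0 = cloud_t0[cloud_t0[:, 2] > ground_z]
        body1 = cloud_t1[cloud_t1[:, 2] > ground_z]
        c0 = body0[:, :2].mean(axis=0)          # planar centroid at t0
        c1 = body1[:, :2].mean(axis=0)          # planar centroid at t1
        velocity = (c1 - c0) / dt               # metres per second in x and y
        heading = np.degrees(np.arctan2(velocity[1], velocity[0]))
        return velocity, heading

    # Synthetic clouds shifted by 2 m in x over 0.1 s (about 20 m/s).
    rng = np.random.default_rng(0)
    pts = rng.uniform([0.0, 0.0, 1.0], [10.0, 3.0, 4.0], size=(500, 3))
    print(estimate_motion(pts, pts + [2.0, 0.0, 0.0], dt=0.1))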
Human detection in sensitive security areas through recognition of omega shapes using MACH filters
NASA Astrophysics Data System (ADS)
Rehman, Saad; Riaz, Farhan; Hassan, Ali; Liaquat, Muwahida; Young, Rupert
2015-03-01
Human detection has gained considerable importance in aggravated security scenarios in recent times. An effective security application relies strongly on detailed information regarding the scene under consideration. An accumulation of humans larger than the number of personnel authorized to visit a security-controlled area must be effectively detected, promptly alarmed, and immediately monitored. A framework involving a novel combination of some existing techniques allows an immediate detection of an undesirable crowd in a region under observation. Frame differencing provides a clear visibility of moving objects while highlighting those objects in each frame acquired by a real-time camera. Training of a correlation pattern recognition based filter on desired shapes such as elliptical representations of human faces (variants of an Omega shape) yields correct detections. The inherent ability of correlation pattern recognition filters caters for angular rotations of the target object and renders a decision regarding whether the number of persons in the monitored area exceeds the allowed figure.
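A minimal sketch of the two building blocks named above, frame differencing and correlation-based shape matching. A plain matched template correlated via the FFT stands in for the trained MACH filter; the threshold is illustrative.

    import numpy as np

    def frame_difference(prev_frame, frame, threshold=25):
        """Binary mask of moving pixels from two consecutive grayscale frames."""
        diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
        return (diff > threshold).astype(np.uint8)

    def correlate(scene, template):
        """FFT-based cross-correlation; the peak marks the best template match.
        A plain template stands in for the trained MACH filter of the paper."""
        ft_scene = np.fft.fft2(scene)
        ft_templ = np.fft.fft2(template, s=scene.shape)
        corr = np.real(np.fft.ifft2(ft_scene * np.conj(ft_templ)))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return corr, peak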
Beran, Michael J.; Perdue, Bonnie M.; Futch, Sara E.; Smith, J. David; Evans, Theodore A.; Parrish, Audrey E.
2015-01-01
Three chimpanzees performed a computerized memory task in which auditory feedback about the accuracy of each response was delayed. The delivery of food rewards for correct responses was also delayed and occurred in a separate location from the response. Crucially, if the chimpanzees did not move to the reward-delivery site before food was dispensed, the reward was lost and could not be recovered. Chimpanzees were significantly more likely to move to the dispenser on trials they had completed correctly than on those they had completed incorrectly, and these movements occurred before any external feedback about the outcome of their responses. Thus, chimpanzees moved (or not) on the basis of their confidence in their responses, and these confidence movements aligned closely with objective task performance. These untrained, spontaneous confidence judgments demonstrated that chimpanzees monitored their own states of knowing and not knowing and adjusted their behavior accordingly. PMID:26057831
Real-time detection of moving objects from moving vehicles using dense stereo and optical flow
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2004-01-01
Dynamic scene perception is very important for autonomous vehicles operating around other moving vehicles and humans. Most work on real-time object tracking from moving platforms has used sparse features or assumed flat scene structures. We have recently extended a real-time, dense stereo system to include real-time, dense optical flow, enabling more comprehensive dynamic scene analysis. We describe algorithms to robustly estimate 6-DOF robot egomotion in the presence of moving objects using dense flow and dense stereo. We then use dense stereo and egomotion estimates to identify other moving objects while the robot itself is moving. We present results showing accurate egomotion estimation and detection of moving people and vehicles under general 6-DOF motion of the robot and independently moving objects. The system runs at 18.3 Hz on a 1.4 GHz Pentium M laptop, computing 160x120 disparity maps and optical flow fields, egomotion, and moving object segmentation. We believe this is a significant step toward general unconstrained dynamic scene analysis for mobile robots, as well as for improved position estimation where GPS is unavailable.
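A highly simplified sketch of the residual-motion idea: dense optical flow is computed with OpenCV, a median flow is subtracted as a crude stand-in for the 6-DOF egomotion-predicted flow, and pixels with large residual motion are flagged as independently moving. This is not the authors' stereo-based algorithm.

    import cv2
    import numpy as np

    def residual_motion_mask(prev_gray, gray, threshold=2.0):
        """Flag pixels whose optical flow deviates from the dominant image motion.
        The median flow is a crude stand-in for egomotion-predicted flow."""
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        ego = np.median(flow.reshape(-1, 2), axis=0)     # dominant (ego) motion
        residual = np.linalg.norm(flow - ego, axis=2)    # object-induced motion
        return (residual > threshold).astype(np.uint8)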
Shape-based human detection for threat assessment
NASA Astrophysics Data System (ADS)
Lee, Dah-Jye; Zhan, Pengcheng; Thomas, Aaron; Schoenberger, Robert B.
2004-07-01
Detection of intrusions for early threat assessment requires the capability of distinguishing whether the intrusion is a human, an animal, or another object. Most low-cost security systems use simple electronic motion detection sensors to monitor motion or the location of objects within the perimeter. Although cost effective, these systems suffer from high rates of false alarm, especially when monitoring open environments; any moving object, including an animal, can falsely trigger the security system. Other security systems that utilize video equipment require human interpretation of the scene in order to make real-time threat assessments. A shape-based human detection technique has been developed for accurate early threat assessment in open and remote environments. Potential threats are isolated from the static background scene using differential motion analysis, and contours of the intruding objects are extracted for shape analysis. Contour points are simplified by removing redundant points connecting short and straight line segments and preserving only those with shape significance. Contours are represented in tangent space for comparison with shapes stored in a database. A power cepstrum technique has been developed to search for the best matched contour in the database and to distinguish a human from other objects from different viewing angles and distances.
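A minimal sketch of the contour simplification step described above: points joining short, nearly straight segments are dropped, keeping only points with shape significance. The length and angle thresholds are illustrative assumptions.

    import numpy as np

    def simplify_contour(points, angle_tol_deg=5.0, min_segment=3.0):
        """Drop contour points that connect short, nearly straight segments,
        keeping only points with shape significance (thresholds are illustrative)."""
        pts = np.asarray(points, dtype=float)
        keep = [pts[0]]
        for i in range(1, len(pts) - 1):
            a, b, c = keep[-1], pts[i], pts[i + 1]
            v1, v2 = b - a, c - b
            if np.linalg.norm(v1) < min_segment and np.linalg.norm(v2) < min_segment:
                continue                             # too short to matter
            cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            turn = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            if turn > angle_tol_deg:                 # significant change of direction
                keep.append(b)
        keep.append(pts[-1])
        return np.array(keep)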
Grand, Laszlo; Ftomov, Sergiu; Timofeev, Igor
2012-01-01
Parallel electrophysiological recording and behavioral monitoring of freely moving animals is essential for a better understanding of the neural mechanisms underlying behavior. In this paper we describe a novel wireless recording technique, which is capable of synchronously recording in vivo multichannel electrophysiological (LFP, MUA, EOG, EMG) and activity data (accelerometer, video) from freely moving cats. The method is based on the integration of commercially available components into a simple monitoring system and is complete with accelerometers and the needed signal processing tools. LFP activities of freely moving group-housed cats were recorded from multiple intracortical areas and from the hippocampus. EMG, EOG, accelerometer and video data were acquired simultaneously with LFP activities 24 h a day for 3 months. These recordings confirm the possibility of using our wireless method for 24-h long-term monitoring of neurophysiological and behavioral data of freely moving experimental animals such as cats, ferrets, rabbits and other large animals. PMID:23099345
Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.
Palmer, Stephen E; Langlois, Thomas A
2017-07-01
Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.
Incidents Prediction in Road Junctions Using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Alami Hassani, Aicha; Ouazzani Jamil, Mohammed
2018-05-01
The implementation of an incident detection system (IDS) is an indispensable operation in the analysis of road traffic. However, the IDS can in no case replace the classical monitoring system controlled by the human eye. The aim of this work is to increase the probability of detecting and predicting incidents in camera-monitored areas, given that these areas are monitored by multiple cameras and few supervisors. Our solution is to use Artificial Neural Networks (ANN) to analyze the trajectories of moving objects in captured images. We first propose a modelling of the trajectories and their characteristics, then we develop a learning database of valid and invalid trajectories, and finally we carry out a comparative study to find the artificial neural network architecture that maximizes the recognition rate of valid and invalid trajectories.
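A small sketch of the trajectory-classification idea using a generic multilayer perceptron from scikit-learn. The four trajectory descriptors and the synthetic valid/invalid samples are hypothetical; the paper's actual feature model and network architecture may differ.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical trajectory descriptors (mean speed, heading variance, curvature,
    # stop count) labelled 1 = valid and 0 = invalid; purely synthetic data.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([10.0, 0.2, 0.1, 1.0], 0.5, size=(200, 4)),
                   rng.normal([3.0, 1.5, 0.8, 5.0], 0.5, size=(200, 4))])
    y = np.array([1] * 200 + [0] * 200)

    clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))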
Robust Feedback Zoom Tracking for Digital Video Surveillance
Zou, Tengyue; Tang, Xiaoqi; Song, Bao; Wang, Jin; Chen, Jihong
2012-01-01
Zoom tracking is an important function in video surveillance, particularly in traffic management and security monitoring. It involves keeping an object of interest in focus during the zoom operation. Zoom tracking is typically achieved by moving the zoom and focus motors in lenses following the so-called “trace curve”, which shows the in-focus motor positions versus the zoom motor positions for a specific object distance. The main task of a zoom tracking approach is to accurately estimate the trace curve for the specified object. Because a proportional integral derivative (PID) controller has historically been considered the best controller when the underlying process is not known, and because of its high-quality performance in motor control, in this paper we propose a novel feedback zoom tracking (FZT) approach based on geometric trace curve estimation and a PID feedback controller. The performance of this approach is compared with existing zoom tracking methods in digital video surveillance. The real-time implementation results obtained on an actual digital video platform indicate that the developed FZT approach not only solves the traditional one-to-many mapping problem without pre-training but also improves the robustness for tracking moving or switching objects, which is the key challenge in video surveillance. PMID:22969388
NASA Astrophysics Data System (ADS)
Li, Z.; Che, W.; Frey, H. C.; Lau, A. K. H.
2016-12-01
Portable air monitors are currently being developed and used to enable a move towards exposure monitoring as opposed to fixed-site monitoring. Reliable methods are needed for capturing spatial and temporal variability in exposure concentration in order to obtain credible data from which to develop efficient exposure mitigation measures. However, there are few studies that quantify the validity and repeatability of the collected data. The objective of this study is to present and evaluate a collocated exposure monitoring (CEM) methodology, including the calibration of portable air monitors against stationary reference equipment, side-by-side comparison of portable air monitors, personal or microenvironmental exposure monitoring, and the processing and interpretation of the collected data. The CEM methodology was evaluated based on application to the portable monitors TSI DustTrak II Aerosol Monitor 8530 for fine particulate matter (PM2.5) and TSI Q-Trak model 7575 with probe model 982 for CO, CO2, temperature and relative humidity. Taking a school sampling campaign in Hong Kong in January and June 2015 as an example, the calibrated side-by-side measurements of 1 Hz PM2.5 concentrations showed good consistency between two sets of portable air monitors. Confidence in the side-by-side comparison, in which PM2.5 concentrations agreed within 2 percent most of the time, enabled robust inference regarding differences when the monitors measured classroom and pedestrian microenvironments during school hours. The proposed CEM methodology can be widely applied in sampling campaigns with the objective of simultaneously characterizing pollutant concentrations in two or more locations or microenvironments. The further application of the CEM methodology to transportation exposure will be presented and discussed.
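A minimal sketch of the calibration step of such a methodology: an ordinary least-squares line is fitted between collocated portable and reference readings and then applied to subsequent portable data. The numbers are illustrative only, not data from the campaign.

    import numpy as np

    def calibrate(portable, reference):
        """Ordinary least-squares calibration (slope, intercept) of a portable
        monitor against collocated reference measurements."""
        slope, intercept = np.polyfit(portable, reference, deg=1)
        return slope, intercept

    # Illustrative numbers only (raw and reference PM2.5 in ug/m3).
    portable = np.array([12.0, 25.0, 40.0, 55.0, 80.0])
    reference = np.array([10.5, 22.0, 36.0, 50.0, 74.0])
    s, b = calibrate(portable, reference)
    print(s * portable + b)                # calibrated portable readings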
Going, Going, Gone: Localizing Abrupt Offsets of Moving Objects
ERIC Educational Resources Information Center
Maus, Gerrit W.; Nijhawan, Romi
2009-01-01
When a moving object abruptly disappears, this profoundly influences its localization by the visual system. In Experiment 1, 2 aligned objects moved across the screen, and 1 of them abruptly disappeared. Observers reported seeing the objects misaligned at the time of the offset, with the continuing object leading. Experiment 2 showed that the…
Mining moving object trajectories in location-based services for spatio-temporal database update
NASA Astrophysics Data System (ADS)
Guo, Danhuai; Cui, Weihong
2008-10-01
Advances in wireless transmission and mobile technology applied to LBS (Location-based Services) flood us with large amounts of moving object data. The vast amounts of data gathered from the position sensors of mobile phones, PDAs, or vehicles hide interesting and valuable knowledge and describe the behavior of moving objects. The correlation between the temporal movement patterns of moving objects and the spatio-temporal attributes of geo-features has been ignored, and the value of spatio-temporal trajectory data has not been fully exploited either. Urban expansion and frequent changes in town plans produce a large amount of outdated or imprecise data in the spatial databases of LBS, and these data cannot be updated in a timely and efficient manner by manual processing. In this paper we introduce a data mining approach to movement pattern extraction for moving objects, build a model to describe the relationship between the movement patterns of LBS mobile objects and their environment, and put forward a spatio-temporal database update strategy for LBS databases based on spatio-temporal trajectory mining. Experimental evaluation reveals excellent performance of the proposed model and strategy. Our original contributions include the formulation of a model of the interaction between a trajectory and its environment, the design of a spatio-temporal database update strategy based on moving object data mining, and the experimental application of spatio-temporal database updating by mining moving object trajectories.
Detection and Tracking of Moving Objects with Real-Time Onboard Vision System
NASA Astrophysics Data System (ADS)
Erokhin, D. Y.; Feldman, A. B.; Korepanov, S. E.
2017-05-01
Detection of moving objects in a video sequence received from a moving video sensor is one of the most important problems in computer vision. The main purpose of this work is to develop a set of algorithms which can detect and track moving objects in a real-time computer vision system. This set includes three main parts: an algorithm for estimation and compensation of geometric transformations of images, an algorithm for detection of moving objects, and an algorithm for tracking the detected objects and predicting their position. The results can be applied to create onboard vision systems for aircraft, including small and unmanned aircraft.
A-Track: Detecting Moving Objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2017-04-01
A-Track is a fast, open-source, cross-platform pipeline for detecting moving objects (asteroids and comets) in sequential telescope images in FITS format. The moving objects are detected using a modified line detection algorithm.
Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions
NASA Astrophysics Data System (ADS)
Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard
2017-12-01
Real-time monitoring of engineering structures in case of an emergency or disaster requires the collection of a large amount of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a probable rescue action. One of the more significant evaluation methods for large sets of data, either collected during a specified interval of time or permanently, is time series analysis. In this paper, a search algorithm is presented for those time series elements which deviate from their expected values during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. In the algorithm, the mathematical formulae used provide maximal sensitivity to detect even minimal changes in the object's behavior. The sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations and verification of laboratory survey data carried out showed that the approach provides sufficient sensitivity for automatic real-time analysis of the large amount of data obtained from different and various sensors (total stations, leveling, cameras, radar).
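A minimal sketch of a moving-average outlier search of the kind discussed above: an observation is flagged when it deviates from the trailing moving average by more than a multiple of a robust spread estimate. Window size and threshold are illustrative choices, not the paper's formulae.

    import numpy as np

    def moving_average_outliers(series, window=10, k=3.0):
        """Flag observations deviating from the trailing moving average by more
        than k robust standard deviations (window and k are illustrative)."""
        series = np.asarray(series, dtype=float)
        flags = np.zeros(series.size, dtype=bool)
        for i in range(window, series.size):
            ref = series[i - window:i]
            sigma = 1.4826 * np.median(np.abs(ref - np.median(ref))) + 1e-9
            flags[i] = abs(series[i] - ref.mean()) > k * sigma
        return flags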
Distributed multirobot sensing and tracking: a behavior-based approach
NASA Astrophysics Data System (ADS)
Parker, Lynne E.
1995-09-01
An important issue that arises in the automation of many large-scale surveillance and reconnaissance tasks is that of tracking the movements of (or maintaining passive contact with) objects navigating in a bounded area of interest. Oftentimes in these problems, the area to be monitored will move over time or will not permit fixed sensors, thus requiring a team of mobile sensors--or robots--to monitor the area collectively. In these situations, the robots must not only have mechanisms for determining how to track objects and how to fuse information from neighboring robots, but they must also have distributed control strategies for ensuring that the entire area of interest is continually covered to the greatest extent possible. This paper focuses on the distributed control issue by describing a proposed decentralized control mechanism that allows a team of robots to collectively track and monitor objects in an uncluttered area of interest. The approach is based upon an extension to the ALLIANCE behavior-based architecture that generalizes from the domain of loosely-coupled, independent applications to the domain of strongly cooperative applications, in which the action selection of a robot is dependent upon the actions selected by its teammates. We conclude the paper by describing our ongoing implementation of the proposed approach on a team of four mobile robots.
Moving object detection using dynamic motion modelling from UAV aerial images.
Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid
2014-01-01
Motion analysis based moving object detection from UAV aerial images is still an unsolved issue due to the lack of proper motion estimation. Existing moving object detection approaches for UAV aerial images do not deal with motion-based pixel intensity measurement to detect moving objects robustly. Besides, current research on moving object detection from UAV aerial images mostly depends on either a frame difference or a segmentation approach separately. There are two main purposes for this research: firstly, to develop a new motion model called DMM (dynamic motion model), and secondly, to apply the proposed segmentation approach SUED (segmentation using edge based dilation) using frame difference embedded together with the DMM model. The proposed DMM model provides effective search windows based on the highest pixel intensity to segment only the specific area of the moving object rather than searching the whole area of the frame using SUED. At each stage of the proposed scheme, the experimental fusion of DMM and SUED extracts moving objects faithfully. Experimental results reveal that the proposed DMM and SUED successfully demonstrate the validity of the proposed methodology.
Forward and backward uncertainty propagation: an oxidation ditch modelling example.
Abusam, A; Keesman, K J; van Straten, G
2003-01-01
In the field of water technology, forward uncertainty propagation is frequently used, whereas backward uncertainty propagation is rarely used. In forward uncertainty analysis, one moves from a given (or assumed) parameter subspace towards the corresponding distribution of the output or objective function. In backward uncertainty propagation, however, one moves in the reverse direction, from the distribution function towards the parameter subspace. Backward uncertainty propagation, which is a generalisation of parameter estimation error analysis, gives information essential for designing experimental or monitoring programmes, and for tighter bounding of parameter uncertainty intervals. The procedure for carrying out backward uncertainty propagation is illustrated in this technical note by a working example for an oxidation ditch wastewater treatment plant. The results obtained demonstrate that essential information can be obtained by carrying out backward uncertainty propagation analysis.
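A minimal sketch of forward uncertainty propagation by Monte Carlo sampling, with a toy output function standing in for the oxidation ditch model; parameter means and standard deviations are illustrative assumptions.

    import numpy as np

    def forward_propagation(model, param_means, param_stds, n_samples=10000, seed=0):
        """Monte Carlo forward uncertainty propagation: sample the parameter space
        and return the resulting distribution of the model output."""
        rng = np.random.default_rng(seed)
        samples = rng.normal(param_means, param_stds, size=(n_samples, len(param_means)))
        return np.array([model(p) for p in samples])

    # Toy stand-in for an oxidation ditch output, not the plant model of the note.
    effluent = lambda p: 50.0 / (1.0 + p[0]) + 2.0 * p[1]
    out = forward_propagation(effluent, param_means=[1.5, 3.0], param_stds=[0.2, 0.5])
    print(out.mean(), out.std())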
Anastasopoulou, Panagiota; Tubic, Mirnes; Schmidt, Steffen; Neumann, Rainer; Woll, Alexander; Härtel, Sascha
2014-01-01
The measurement of activity energy expenditure (AEE) via accelerometry is the most commonly used objective method for assessing human daily physical activity and has gained increasing importance in medical, sports and psychological research in recent years. The purpose of this study was to determine which of the following procedures is more accurate for determining the energy cost of the most common everyday life activities: a single regression or an activity-based approach. For this we used a device that utilizes single regression models (GT3X, ActiGraph Manufacturing Technology Inc., FL, USA) and a device using activity-dependent calculation models (move II, movisens GmbH, Karlsruhe, Germany). Nineteen adults (11 male, 8 female; 30.4±9.0 years) wore the activity monitors attached to the waist and a portable indirect calorimeter (IC) as the reference measure for AEE while performing several typical daily activities. The accuracy of the two devices for estimating AEE was assessed as the mean difference between their output and the reference and evaluated using Bland-Altman analysis. The GT3X overestimated the AEE of walking (GT3X minus reference, 1.26 kcal/min), walking fast (1.72 kcal/min), walking up-/downhill (1.45 kcal/min) and walking upstairs (1.92 kcal/min) and underestimated the AEE of jogging (-1.30 kcal/min) and walking upstairs (-2.46 kcal/min). The errors for move II were smaller than those for GT3X for all activities. The move II overestimated the AEE of walking (move II minus reference, 0.21 kcal/min), walking up-/downhill (0.06 kcal/min) and stair walking (upstairs: 0.13 kcal/min; downstairs: 0.29 kcal/min) and underestimated the AEE of walking fast (-0.11 kcal/min) and jogging (-0.93 kcal/min). Our data suggest that the activity monitor using activity-dependent calculation models is more appropriate for predicting AEE in daily life than the activity monitor using a single regression model.
Dokka, Kalpana; DeAngelis, Gregory C.
2015-01-01
Humans and animals are fairly accurate in judging their direction of self-motion (i.e., heading) from optic flow when moving through a stationary environment. However, an object moving independently in the world alters the optic flow field and may bias heading perception if the visual system cannot dissociate object motion from self-motion. We investigated whether adding vestibular self-motion signals to optic flow enhances the accuracy of heading judgments in the presence of a moving object. Macaque monkeys were trained to report their heading (leftward or rightward relative to straight-forward) when self-motion was specified by vestibular, visual, or combined visual-vestibular signals, while viewing a display in which an object moved independently in the (virtual) world. The moving object induced significant biases in perceived heading when self-motion was signaled by either visual or vestibular cues alone. However, this bias was greatly reduced when visual and vestibular cues together signaled self-motion. In addition, multisensory heading discrimination thresholds measured in the presence of a moving object were largely consistent with the predictions of an optimal cue integration strategy. These findings demonstrate that multisensory cues facilitate the perceptual dissociation of self-motion and object motion, consistent with computational work that suggests that an appropriate decoding of multisensory visual-vestibular neurons can estimate heading while discounting the effects of object motion. SIGNIFICANCE STATEMENT Objects that move independently in the world alter the optic flow field and can induce errors in perceiving the direction of self-motion (heading). We show that adding vestibular (inertial) self-motion signals to optic flow almost completely eliminates the errors in perceived heading induced by an independently moving object. Furthermore, this increased accuracy occurs without a substantial loss in the precision. Our results thus demonstrate that vestibular signals play a critical role in dissociating self-motion from object motion. PMID:26446214
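The optimal cue integration prediction referred to above can be written down compactly; the sketch below gives the standard inverse-variance weighting for the combined threshold and the reliability-weighted heading estimate, with illustrative numbers rather than values from the study.

    import numpy as np

    def combined_threshold(sigma_visual, sigma_vestibular):
        """Optimal-integration prediction: 1/s_comb^2 = 1/s_vis^2 + 1/s_vest^2."""
        return 1.0 / np.sqrt(1.0 / sigma_visual**2 + 1.0 / sigma_vestibular**2)

    def combined_estimate(h_vis, sigma_vis, h_vest, sigma_vest):
        """Reliability-weighted combination of the two single-cue heading estimates."""
        w_vis = (1.0 / sigma_vis**2) / (1.0 / sigma_vis**2 + 1.0 / sigma_vest**2)
        return w_vis * h_vis + (1.0 - w_vis) * h_vest

    print(combined_threshold(3.0, 4.0))          # illustrative thresholds in degrees
    print(combined_estimate(5.0, 3.0, 1.0, 4.0))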
ERIC Educational Resources Information Center
Damonte, Kathleen
2004-01-01
One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…
Drew, Trafton; Horowitz, Todd S.; Wolfe, Jeremy M.; Vogel, Edward K.
2015-01-01
In the attentive tracking task, observers track multiple objects as they move independently and unpredictably among visually identical distractors. Although a number of models of attentive tracking implicate visual working memory as the mechanism responsible for representing target locations, no study has ever directly compared the neural mechanisms of the two tasks. In the current set of experiments, we used electrophysiological recordings to delineate similarities and differences between the neural processing involved in working memory and attentive tracking. We found that the contralateral electrophysiological response to the two tasks was similarly sensitive to the number of items attended in both tasks but that there was also a unique contralateral negativity related to the process of monitoring target position during tracking. This signal was absent for periods of time during tracking tasks when objects briefly stopped moving. These results provide evidence that, during attentive tracking, the process of tracking target locations elicits an electrophysiological response that is distinct and dissociable from neural measures of the number of items being attended. PMID:21228175
Volumetric Security Alarm Based on a Spherical Ultrasonic Transducer Array
NASA Astrophysics Data System (ADS)
Sayin, Umut; Scaini, Davide; Arteaga, Daniel
Most existing alarm systems depend on physical or visual contact. The detection area is often limited depending on the type of transducer, creating blind spots. Our proposition is a truly volumetric alarm system that can detect any movement in the intrusion area, based on monitoring the change over time of the impulse response of the room, which acts as an acoustic footprint. The device depends on an omnidirectional ultrasonic transducer array emitting sweep signals to calculate the impulse response at short intervals. Any change in the room conditions is monitored through a correlation function. The sensitivity of the alarm to different objects and different environments depends on the sweep duration, sweep bandwidth, and sweep interval. Successful detection of intrusions also depends on the size of the monitored area and requires an adjustment of the emitted ultrasound power. Strong air flow affects the performance of the alarm. A method for separating moving objects from strong air flow is devised using adaptive thresholding on the correlation function involving a series of impulse response measurements. The alarm system can also be used for fire detection, since the air flow originating from heated objects differs from the random nature of ambient air flow. Several measurements were made to test the integrity of the alarm in rooms ranging from 834 to 2080 m3 with irregular geometries and various objects. The proposed system can efficiently detect intrusion as long as adequate emitting power is supplied.
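A minimal sketch of the correlation monitoring step: the current impulse response is compared with the reference acoustic footprint by normalized correlation, and a drop below a threshold signals a change in the room. The fixed threshold is a stand-in for the adaptive thresholding described in the abstract.

    import numpy as np

    def room_changed(ir_reference, ir_current, threshold=0.95):
        """Normalized correlation between the stored acoustic footprint and the
        current impulse response; a value below the threshold signals a change."""
        a = ir_reference - ir_reference.mean()
        b = ir_current - ir_current.mean()
        corr = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
        return corr < threshold, corr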
CCD Camera Lens Interface for Real-Time Theodolite Alignment
NASA Technical Reports Server (NTRS)
Wake, Shane; Scott, V. Stanley, III
2012-01-01
Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.
NASA Astrophysics Data System (ADS)
Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi
2017-01-01
Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy field cropping patterns is important for sustaining the paddy field area. It can be done by direct observation or by remote sensing methods. This paper discusses remote sensing for paddy field monitoring based on MODIS time series data. Time series MODIS data are difficult to classify directly because of temporal noise; therefore, the wavelet transform and the moving average are needed as filtering methods. The objective of this study is to recognize paddy cropping patterns with the wavelet transform and the moving average in West Java using MODIS imagery (MOD13Q1) from 2001 to 2015, and then to compare the two methods. The results showed that the spatial distributions have almost the same cropping pattern. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of the cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. The differences in the planting schedule are caused by the availability of irrigation water.
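A minimal sketch of the moving-average branch of such an analysis: a centered moving average smooths a MODIS vegetation-index series and the number of vegetation peaks per year gives a rough proxy for the number of plantings. Window size and peak prominence are illustrative assumptions, not the parameters used in the paper.

    import numpy as np
    from scipy.signal import find_peaks

    def smooth_moving_average(vi_series, window=5):
        """Centered moving-average filter for a 16-day vegetation-index series."""
        kernel = np.ones(window) / window
        return np.convolve(vi_series, kernel, mode="same")

    def cropping_cycles_per_year(vi_series, obs_per_year=23, prominence=0.1):
        """Count vegetation peaks per year in the smoothed series as a rough proxy
        for the number of paddy plantings."""
        peaks, _ = find_peaks(smooth_moving_average(vi_series), prominence=prominence)
        return len(peaks) / (len(vi_series) / obs_per_year)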
The Pop out of Scene-Relative Object Movement against Retinal Motion Due to Self-Movement
ERIC Educational Resources Information Center
Rushton, Simon K.; Bradshaw, Mark F.; Warren, Paul A.
2007-01-01
An object that moves is spotted almost effortlessly; it "pops out." When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving; as the eye travels through space all scene objects change position relative to the eye producing a complicated field of retinal motion.…
Wireless patient monitoring system for a moving-actuator type artificial heart.
Nam, K W; Chung, J; Choi, S W; Sun, K; Min, B G
2006-10-01
In this study, we developed a wireless monitoring system for outpatients equipped with a moving-actuator type pulsatile bi-ventricular assist device, AnyHeart. The developed monitoring system consists of two parts: a Bluetooth-based short-distance self-monitoring system that can monitor and control the operating status of a VAD using a Bluetooth-embedded personal digital assistant or a personal computer within a distance of 10 meters, and a cellular network-based remote monitoring system that can continuously monitor and control the operating status of AnyHeart at any location. Results of in vitro experiments demonstrate the developed system's ability to monitor the operational status of an implanted AnyHeart.
Fokkenrood, H J P; Verhofstad, N; van den Houten, M M L; Lauret, G J; Wittens, C; Scheltinga, M R M; Teijink, J A W
2014-08-01
The daily life physical activity (PA) of patients with peripheral arterial disease (PAD) may be severely hampered by intermittent claudication (IC). From a therapeutic, as well as research, point of view, it may be more relevant to determine improvement in PA as an outcome measure in IC. The aim of this study was to validate daily activities using a novel type of tri-axial accelerometer (Dynaport MoveMonitor) in patients with IC. Patients with IC were studied during a hospital visit. Standard activities (locomotion, lying, sitting, standing, shuffling, number of steps and "not worn" detection) were video recorded and compared with activities scored by the MoveMonitor. Inter-rater reliability (expressed in intraclass correlation coefficients [ICC]), sensitivity, specificity, and positive predictive values (PPV) were calculated for each activity. Twenty-eight hours of video observation were analysed (n = 21). Our video annotation method (the gold standard method) appeared to be accurate for most postures (ICC > 0.97), except for shuffling (ICC = 0.38). The MoveMonitor showed a high sensitivity (>86%), specificity (>91%), and PPV (>88%) for locomotion, lying, sitting, and "not worn" detection. Moderate accuracy was found for standing (46%), while shuffling appeared to be undetectable (18%). A strong correlation was found between video recordings and the MoveMonitor with regard to the calculation of the "number of steps" (ICC = 0.90). The MoveMonitor provides accurate information on a diverse set of postures, daily activities, and number of steps in IC patients. However, the detection of low amplitude movements, such as shuffling and "sitting to standing" transfers, is a matter of concern. This tool is useful in assessing the role of PA as a novel, clinically relevant outcome parameter in IC. Copyright © 2014 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Acoustic facilitation of object movement detection during self-motion
Calabro, F. J.; Soto-Faraco, S.; Vaina, L. M.
2011-01-01
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations. PMID:21307050
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
Self-motion impairs multiple-object tracking.
Thomas, Laura E; Seiffert, Adriane E
2010-10-01
Investigations of multiple-object tracking aim to further our understanding of how people perform common activities such as driving in traffic. However, tracking tasks in the laboratory have overlooked a crucial component of much real-world object tracking: self-motion. We investigated the hypothesis that keeping track of one's own movement impairs the ability to keep track of other moving objects. Participants attempted to track multiple targets while either moving around the tracking area or remaining in a fixed location. Participants' tracking performance was impaired when they moved to a new location during tracking, even when they were passively moved and when they did not see a shift in viewpoint. Self-motion impaired multiple-object tracking in both an immersive virtual environment and a real-world analog, but did not interfere with a difficult non-spatial tracking task. These results suggest that people use a common mechanism to track changes both to the location of moving objects around them and to keep track of their own location. Copyright 2010 Elsevier B.V. All rights reserved.
McMahon, Terry W; Newman, David G
2016-04-01
Flying a helicopter is a complex psychomotor skill. Fatigue is a serious threat to operational safety, particularly for sustained helicopter operations involving high levels of cognitive information processing and sustained time on task. As part of ongoing research into this issue, the object of this study was to develop a field-deployable helicopter-specific psychomotor vigilance test (PVT) for the purpose of daily performance monitoring of pilots. The PVT consists of a laptop computer, a hand-operated joystick, and a set of rudder pedals. Screen-based compensatory tracking task software includes a tracking ball (operated by the joystick) which moves randomly in all directions, and a second tracking ball which moves horizontally (operated by the rudder pedals). The 5-min test requires the pilot to keep both tracking balls centered. This helicopter-specific PVT's portability and integrated data acquisition and storage system enables daily field monitoring of the performance of individual helicopter pilots. The inclusion of a simultaneous foot-operated tracking task ensures divided attention for helicopter pilots as the movement of both tracking balls requires simultaneous inputs. This PVT is quick, economical, easy to use, and specific to the operational flying task. It can be used for performance monitoring purposes, and as a general research tool for investigating the psychomotor demands of helicopter operations. While reliability and validity testing is warranted, data acquired from this test could help further our understanding of the effect of various factors (such as fatigue) on helicopter pilot performance, with the potential of contributing to helicopter operational safety.
Evaluation of a 6-wire thermocouple psychrometer for determination of in-situ water potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loskot, C.L.; Rousseau, J.P.; Kurzmack, M.A.
1994-12-31
The US Geological Survey has been conducting investigations at Yucca Mountain, Nevada, to provide information about the hydrologic and geologic suitability of this site for storing high-level nuclear wastes in an underground mined repository. Test drilling and instrumentation are a principal method of investigation. The main objectives of the deep unsaturated-zone testhole program are: (1) to determine the flux of water moving through the unsaturated welded and nonwelded tuff units, (2) to determine the vertical and lateral distribution of moisture content, water potential, and other important geohydrologic characteristics in the rock units penetrated, and (3) to monitor stability and changes in in-situ fluid potentials with time. Thermocouple psychrometers will be used to monitor in-situ water potentials.
Effects of a Moving Distractor Object on Time-to-Contact Judgments
ERIC Educational Resources Information Center
Oberfeld, Daniel; Hecht, Heiko
2008-01-01
The effects of moving task-irrelevant objects on time-to-contact (TTC) judgments were examined in 5 experiments. Observers viewed a directly approaching target in the presence of a distractor object moving in parallel with the target. In Experiments 1 to 4, observers decided whether the target would have collided with them earlier or later than a…
Remote sensing using MIMO systems
Bikhazi, Nicolas; Young, William F; Nguyen, Hung D
2015-04-28
A technique for sensing a moving object within a physical environment using a MIMO communication link includes generating a channel matrix based upon channel state information of the MIMO communication link. The physical environment operates as a communication medium through which communication signals of the MIMO communication link propagate between a transmitter and a receiver. A spatial information variable is generated for the MIMO communication link based on the channel matrix. The spatial information variable includes spatial information about the moving object within the physical environment. A signature for the moving object is generated based on values of the spatial information variable accumulated over time. The moving object is identified based upon the signature.
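A minimal sketch of one way such a spatial information variable and signature could be formed; here the singular values of the channel matrix are used purely as a stand-in, since the abstract does not specify the quantity actually used.

    import numpy as np

    def spatial_information(channel_matrix):
        """Singular values of the MIMO channel matrix H, used here as a simple
        stand-in for the spatial information variable of the patent."""
        return np.linalg.svd(channel_matrix, compute_uv=False)

    def build_signature(channel_matrices):
        """Accumulate the spatial information variable over time into a signature
        (one row per channel snapshot)."""
        return np.vstack([spatial_information(h) for h in channel_matrices])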
Perceptual impressions of causality are affected by common fate.
White, Peter A
2017-03-24
Many studies of perceptual impressions of causality have used a stimulus in which a moving object (the launcher) contacts a stationary object (the target) and the latter then moves off. Such stimuli give rise to an impression that the launcher makes the target move. In the present experiments, instead of a single target object, an array of four vertically aligned objects was used. The launcher contacted none of them, but stopped at a point between the two central objects. The four objects then moved with similar motion properties, exhibiting the Gestalt property of common fate. Strong impressions of causality were reported for this stimulus. It is argued that the array of four objects was perceived, by the likelihood principle, as a single object with some parts unseen, that the launcher was perceived as contacting one of the unseen parts of this object, and that the causal impression resulted from that. Supporting that argument, stimuli in which kinematic features were manipulated so as to weaken or eliminate common fate yielded weaker impressions of causality.
Moving vehicles segmentation based on Gaussian motion model
NASA Astrophysics Data System (ADS)
Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.
2005-07-01
Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyse the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. Firstly, we propose an adaptive background update method in which the background is updated according to changes in the illumination conditions and can therefore adapt sensitively to illumination changes. Secondly, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results for several typical scenes show that the proposed model can detect moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
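A minimal sketch of a running Gaussian over motion vectors, updated with an exponential-forgetting rule as a stand-in for the on-line EM update described above; the learning rate and decision threshold are illustrative, not the authors' values.

    import numpy as np

    class OnlineGaussianMotionModel:
        """Running Gaussian over motion vectors with exponential forgetting,
        a stand-in for the on-line EM update described in the abstract."""

        def __init__(self, alpha=0.05):
            self.alpha = alpha
            self.mean = np.zeros(2)
            self.var = np.ones(2)

        def update(self, motion_vector):
            d = motion_vector - self.mean
            self.mean = self.mean + self.alpha * d
            self.var = (1 - self.alpha) * self.var + self.alpha * d * d

        def is_vehicle(self, motion_vector, k=2.5):
            """Vectors consistent with the learned vehicle-motion Gaussian (within
            k standard deviations per component) are attributed to vehicles;
            outliers are treated as clutter such as waving trees."""
            z = np.abs(motion_vector - self.mean) / np.sqrt(self.var + 1e-9)
            return bool(np.all(z < k))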
Vehicle Counting and Moving Direction Identification Based on Small-Aperture Microphone Array.
Zu, Xingshui; Zhang, Shaojie; Guo, Feng; Zhao, Qin; Zhang, Xin; You, Xing; Liu, Huawei; Li, Baoqing; Yuan, Xiaobing
2017-05-10
The varying trend of a moving vehicle's bearing angles provides important intelligence for an unattended ground sensor (UGS) monitoring system. The present study investigates the capabilities of a small-aperture microphone array (SAMA) based system to identify the number and moving direction of vehicles travelling on a previously established route. In this paper, a SAMA-based acoustic monitoring system, including the system hardware architecture and algorithm mechanism, is designed as a single-node sensor for the application of UGS. The algorithm is built on the varying trend of a vehicle's bearing angles around the closest point of approach (CPA). We demonstrate the effectiveness of our proposed method with our designed SAMA-based monitoring system at various experimental sites. The experimental results in harsh conditions validate the usefulness of our proposed UGS monitoring system.
Moving Object Localization Based on UHF RFID Phase and Laser Clustering
Fu, Yulu; Wang, Changlong; Liang, Gaoli; Zhang, Hua; Ur Rehman, Shafiq
2018-01-01
RFID (Radio Frequency Identification) offers a way to identify objects without any contact. However, positioning accuracy is limited since RFID provides neither distance nor bearing information about the tag. This paper proposes a new and innovative approach for the localization of a moving object using a particle filter that incorporates RFID phase and laser-based clustering from 2D laser range data. First of all, we calculate the phase-based velocity of the moving object from the RFID phase difference. Meanwhile, we separate the laser range data into different clusters and compute the distance-based velocity and moving direction of these clusters. We then compute and analyze the similarity between the two velocities, and select the K clusters having the best similarity score. We predict the particles according to the velocity and moving direction of the laser clusters. Finally, we update the weights of the particles based on the K clusters and achieve the localization of moving objects. The feasibility of this approach is validated on a Scitos G5 service robot and the results prove that we have achieved a localization accuracy of up to 0.25 m. PMID:29522458
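A minimal sketch of the phase-based velocity step: because the reader measures phase over the round trip, a phase change of Δφ corresponds to a radial displacement of Δφ·λ/(4π). The wavelength value is an assumption (roughly 915 MHz operation), not a parameter stated in the abstract.

    import numpy as np

    def phase_based_velocity(phases, timestamps, wavelength=0.328):
        """Radial velocity of a tag from consecutive reader phase readings; a
        round-trip phase change d_phi maps to a displacement d_phi * lambda / (4 pi).
        The wavelength (about 0.328 m near 915 MHz) is an assumed value."""
        phases = np.unwrap(np.asarray(phases, dtype=float))
        d_phi = np.diff(phases)
        d_t = np.diff(np.asarray(timestamps, dtype=float))
        return d_phi * wavelength / (4.0 * np.pi) / d_t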
2003-01-22
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Understanding Visible Perception
NASA Technical Reports Server (NTRS)
2003-01-01
One concern about human adaptation to space is how returning from the microgravity of orbit to Earth can affect an astronaut's ability to fly safely. There are monitors and infrared video cameras to measure eye movements without having to affect the crew member. A computer screen provides moving images which the eye tracks while the brain determines what it is seeing. A video camera records movement of the subject's eyes. Researchers can then correlate perception and response. Test subjects perceive different images when a moving object is covered by a mask that is visible or invisible (above). Early results challenge the accepted theory that smooth pursuit -- the fluid eye movement that humans and primates have -- does not involve the higher brain. NASA results show that: Eye movement can predict human perceptual performance, smooth pursuit and saccadic (quick or ballistic) movement share some signal pathways, and common factors can make both smooth pursuit and visual perception produce errors in motor responses.
Research on measurement method of optical camouflage effect of moving object
NASA Astrophysics Data System (ADS)
Wang, Juntang; Xu, Weidong; Qu, Yang; Cui, Guangzhen
2016-10-01
Camouflage effectiveness measurement is an important part of camouflage technology; it tests and measures the camouflage effect of the target and the performance of camouflage equipment against tactical and technical requirements. Current camouflage effectiveness measurement in the optical band is mainly aimed at static targets and cannot objectively reflect the dynamic camouflage effect of a moving target. This paper combines dynamic object detection and camouflage effect detection, taking the digital camouflage of a moving object as the research object; the adaptive background update algorithm of Surendra was improved, and a method of optical camouflage effect detection using Lab color space in moving-object detection was presented. The binary image of the moving object is extracted by this measurement technology, and from the image sequence the characteristic parameters such as dispersion, eccentricity, complexity and moment invariants are computed to construct the feature vector space. The Euclidean distance for the moving target with digital camouflage was calculated; the results show that the average Euclidean distance over 375 frames was 189.45, which indicates that the dispersion, eccentricity, complexity and moment invariants of the digital camouflage pattern differ greatly from those of the moving target without digital camouflage. The measurement results showed that the camouflage effect was good. Meanwhile, with the performance evaluation module, the correlation coefficient of the dynamic target image ranged from 0.0035 to 0.1275, with some fluctuation. Under the dynamic condition, the adaptability of the target to the background was reflected. In view of existing infrared camouflage technology, the next step is to extend camouflage effect measurement of moving targets to the infrared band.
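A minimal sketch of a shape feature vector and the Euclidean distance comparison mentioned above, computed from a binary moving-object mask; the reduced feature set (area, eccentricity, a scale-normalized moment) is an illustrative stand-in for the descriptors used in the paper.

    import numpy as np

    def shape_features(binary_mask):
        """Reduced stand-in for the paper's feature vector: area, eccentricity from
        second central moments, and a scale-normalized moment (Hu-style)."""
        ys, xs = np.nonzero(binary_mask)
        area = float(len(xs))
        cx, cy = xs.mean(), ys.mean()
        mu20 = ((xs - cx) ** 2).mean()
        mu02 = ((ys - cy) ** 2).mean()
        mu11 = ((xs - cx) * (ys - cy)).mean()
        spread = np.sqrt((mu20 - mu02) ** 2 + 4.0 * mu11 ** 2)
        l1 = (mu20 + mu02 + spread) / 2.0
        l2 = (mu20 + mu02 - spread) / 2.0
        eccentricity = np.sqrt(max(0.0, 1.0 - l2 / (l1 + 1e-12)))
        hu1 = (mu20 + mu02) / (area + 1e-12)
        return np.array([area, eccentricity, hu1])

    def feature_distance(mask_a, mask_b):
        """Euclidean distance between the shape feature vectors of two masks."""
        return float(np.linalg.norm(shape_features(mask_a) - shape_features(mask_b)))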
Image system for three dimensional, 360°, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Decoupled tracking and thermal monitoring of non-stationary targets.
Tan, Kok Kiong; Zhang, Yi; Huang, Sunan; Wong, Yoke San; Lee, Tong Heng
2009-10-01
Fault diagnosis and predictive maintenance address pertinent economic issues relating to production systems as an efficient technique can continuously monitor key health parameters and trigger alerts when critical changes in these variables are detected, before they lead to system failures and production shutdowns. In this paper, we present a decoupled tracking and thermal monitoring system which can be used on non-stationary targets of closed systems such as machine tools. There are three main contributions from the paper. First, a vision component is developed to track moving targets under a monitor. Image processing techniques are used to resolve the target location to be tracked. Thus, the system is decoupled and applicable to closed systems without the need for a physical integration. Second, an infrared temperature sensor with a built-in laser for locating the measurement spot is deployed for non-contact temperature measurement of the moving target. Third, a predictive motion control system holds the thermal sensor and follows the moving target efficiently to enable continuous temperature measurement and monitoring.
Flow detection via sparse frame analysis for suspicious event recognition in infrared imagery
NASA Astrophysics Data System (ADS)
Fernandes, Henrique C.; Batista, Marcos A.; Barcelos, Celia A. Z.; Maldague, Xavier P. V.
2013-05-01
It is becoming increasingly evident that intelligent systems are very beneficial for society and that the further development of such systems is necessary to continue to improve society's quality of life. One area that has drawn the attention of recent research is the development of automatic surveillance systems. In our work we outline a system capable of monitoring an uncontrolled area (an outside parking lot) using infrared imagery and recognizing suspicious events in this area. The first step is to identify moving objects and segment them from the scene's background. Our approach is based on a dynamic background-subtraction technique which robustly adapts detection to illumination changes. Only regions where movement is occurring are analyzed to segment moving objects, ignoring the influence of pixels from regions where there is no movement. Regions where movement is occurring are identified using flow detection via sparse frame analysis. During the tracking process the objects are classified into two categories, persons and vehicles, based on features such as size and velocity. The last step is to recognize suspicious events that may occur in the scene. Since the objects are correctly segmented and classified it is possible to identify those events using features such as velocity and time spent motionless in one spot. In this paper we recognize the suspicious event "suspicion of object(s) theft from inside a parked vehicle at spot X by a person", and results show that the use of flow detection increases the recognition of this suspicious event from 78.57% to 92.85%.
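As a rough illustration of the selective background update described above, the sketch below keeps a running-average background that is refreshed only where no movement was reported and thresholds the difference only inside the motion regions. The learning rate and threshold are placeholder values, and this is a generic scheme rather than the paper's exact algorithm.

```python
import numpy as np

def update_background(background, frame, motion_mask, alpha=0.05):
    """Selective running-average background update: pixels flagged as moving
    keep the old background value, static pixels blend toward the new frame.
    alpha is an assumed learning rate, not taken from the paper."""
    blended = (1.0 - alpha) * background + alpha * frame
    return np.where(motion_mask, background, blended)

def segment_moving(background, frame, motion_mask, thresh=25):
    """Segment moving objects by differencing, but only inside regions where
    the flow analysis reported movement; everything else is ignored."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    return (diff > thresh) & motion_mask
```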
Cancer diagnosis using a conventional x-ray fluorescence camera with a cadmium-telluride detector
NASA Astrophysics Data System (ADS)
Sato, Eiichi; Enomoto, Toshiyuki; Hagiwara, Osahiko; Abudurexiti, Abulajiang; Sato, Koetsu; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-10-01
X-ray fluorescence (XRF) analysis is useful for mapping various atoms in objects. Bremsstrahlung X-rays are selected using a 3.0 mm-thick aluminum filter, and these rays are absorbed by indium, cerium and gadolinium atoms in objects. Then XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by atomic mapping are shown on a personal computer monitor. The scan steps of the x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out atomic mapping using the X-ray camera, and Kα photons from cerium and gadolinium atoms were produced from cancerous regions in nude mice.
NASA Astrophysics Data System (ADS)
Enomoto, Toshiyuki; Sato, Eiichi; Abderyim, Purkhet; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Watanabe, Manabu; Nagao, Jiro; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2011-04-01
X-ray fluorescence (XRF) analysis is useful for mapping various molecules in objects. Bremsstrahlung X-rays are selected using a 3.0-mm-thick aluminum filter, and these rays are absorbed by iodine, cerium, and gadolinium molecules in objects. Next, XRF is produced from the objects, and photons are detected by a cadmium-telluride detector. The Kα photons are discriminated using a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by molecular mapping are shown on a personal computer monitor. The scan steps of the x and y axes were both 2.5 mm, and the photon-counting time per mapping point was 0.5 s. We carried out molecular mapping using the X-ray camera, and Kα photons from cerium and gadolinium molecules were produced from cancerous regions in nude mice.
Günther, Philipp; Kuschmierz, Robert; Pfister, Thorsten; Czarske, Jürgen W
2013-05-01
The precise distance measurement of fast-moving rough surfaces is important in several applications such as lathe monitoring. A nonincremental interferometer based on two mutually tilted interference fringe systems has been realized for this task. The distance is coded in the phase difference between the generated interference signals corresponding to the fringe systems. Large tilting angles between the interference fringe systems are necessary for a high sensitivity. However, due to the speckle effect at rough surfaces, different envelopes and phase jumps of the interference signals occur. At large tilting angles, these signals become dissimilar, resulting in a small correlation coefficient and a high measurement uncertainty. Based on a matching of illumination and receiving optics, the correlation coefficient and the phase difference estimation have been improved significantly. For axial displacement measurements of recurring rough surfaces, laterally moving with velocities of 5 m/s, an uncertainty of 110 nm has been attained. For nonrecurring surfaces, a distance measurement uncertainty of 830 nm has been achieved. Incorporating the additionally measured lateral velocity and the rotational speed, the two-dimensional shape of rotating objects results. Since the measurement uncertainty of the displacement, distance, and shape is nearly independent of the lateral surface velocity, this technique is predestined for fast-rotating objects, such as crankshafts, camshafts, vacuum pump shafts, or turning parts of lathes.
Real-time people counting system using a single video camera
NASA Astrophysics Data System (ADS)
Lefloch, Damien; Cheikh, Faouzi A.; Hardeberg, Jon Y.; Gouton, Pierre; Picot-Clemente, Romain
2008-02-01
There is growing interest in video-based solutions for people monitoring and counting in business and security applications. Compared to classic sensor-based solutions, video-based ones allow for more versatile functionality and improved performance at lower cost. In this paper, we propose a real-time system for people counting based on a single low-end non-calibrated video camera. The two main challenges addressed in this paper are robust estimation of the scene background and of the number of real persons in merge-split scenarios. The latter is likely to occur whenever multiple persons move closely together, e.g. in shopping centers. Several persons may be considered to be a single person by automatic segmentation algorithms, due to occlusions or shadows, leading to under-counting. Therefore, to account for noise, illumination changes and changes in static objects, background subtraction is performed using an adaptive background model (updated over time based on motion information) and automatic thresholding. Furthermore, post-processing of the segmentation results is performed in the HSV color space to remove shadows. Moving objects are tracked using an adaptive Kalman filter, allowing a robust estimation of the objects' future positions even under heavy occlusion. The system is implemented in Matlab, and gives encouraging results even at high frame rates. Experimental results obtained based on the PETS2006 datasets are presented at the end of the paper.
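The Kalman tracking step can be sketched with a minimal constant-velocity filter over a blob centroid; during occlusion only predict() is called, and when the blob reappears update() corrects the state. The noise parameters below are placeholders, not the adaptive tuning used in the paper.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over (x, y, vx, vy); a minimal sketch,
    not the paper's adaptive filter."""
    def __init__(self, x, y, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x, y, 0.0, 0.0], dtype=float)    # state
        self.P = np.eye(4) * 10.0                            # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)       # motion model
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)       # we observe (x, y)
        self.Q = np.eye(4) * q                               # process noise
        self.R = np.eye(2) * r                               # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]          # predicted centroid, usable during occlusion

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```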
The temporal dynamics of heading perception in the presence of moving objects
Fajen, Brett R.
2015-01-01
Many forms of locomotion rely on the ability to accurately perceive one's direction of locomotion (i.e., heading) based on optic flow. Although accurate in rigid environments, heading judgments may be biased when independently moving objects are present. The aim of this study was to systematically investigate the conditions in which moving objects influence heading perception, with a focus on the temporal dynamics and the mechanisms underlying this bias. Subjects viewed stimuli simulating linear self-motion in the presence of a moving object and judged their direction of heading. Experiments 1 and 2 revealed that heading perception is biased when the object crosses or almost crosses the observer's future path toward the end of the trial, but not when the object crosses earlier in the trial. Nonetheless, heading perception is not based entirely on the instantaneous optic flow toward the end of the trial. This was demonstrated in Experiment 3 by varying the portion of the earlier part of the trial leading up to the last frame that was presented to subjects. When the stimulus duration was long enough to include the part of the trial before the moving object crossed the observer's path, heading judgments were less biased. The findings suggest that heading perception is affected by the temporal evolution of optic flow. The time course of dorsal medial superior temporal area (MSTd) neuron responses may play a crucial role in perceiving heading in the presence of moving objects, a property not captured by many existing models. PMID:26510765
Modeling and query the uncertainty of network constrained moving objects based on RFID data
NASA Astrophysics Data System (ADS)
Han, Liang; Xie, Kunqing; Ma, Xiujun; Song, Guojie
2007-06-01
The management of network-constrained moving objects is increasingly practical, especially in intelligent transportation systems. In the past, the location information of moving objects on a network was collected by GPS, which is costly, requires frequent updates, and raises privacy concerns. RFID (Radio Frequency IDentification) devices are used more and more widely to collect location information. They are cheaper, require fewer updates, and interfere less with privacy. They detect the identity of an object and the time when the moving object passed a node of the network, but they do not capture the object's exact movement inside an edge, which leads to a problem of uncertainty. How to model and query the uncertainty of network-constrained moving objects based on RFID data therefore becomes a research issue. In this paper, a model is proposed to describe the uncertainty of network-constrained moving objects. A two-level index is presented to provide efficient access to the network and the movement data. The processing of imprecise time-slice queries and spatio-temporal range queries is studied; it includes four steps: spatial filtering, spatial refinement, temporal filtering, and probability calculation. Finally, experiments are conducted on simulated data to study the performance of the index. Precision and recall of the result set are defined, and how the query arguments affect them is discussed.
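The uncertainty that motivates the model can be made concrete with a small sketch: between two RFID readers the object's position is only bounded, and a range query can at best be answered with a probability. The interval construction and the uniform-position assumption below are illustrative simplifications, not the paper's exact formulation.

```python
def feasible_interval(t_query, t_enter, t_exit, edge_length, v_max):
    """Positions (distance from the entry reader) an object can occupy on an
    edge at t_query, given it passed the entry reader at t_enter, the exit
    reader at t_exit, and never exceeds v_max along the edge."""
    assert t_enter <= t_query <= t_exit
    hi = min(edge_length, v_max * (t_query - t_enter))        # cannot outrun v_max
    lo = max(0.0, edge_length - v_max * (t_exit - t_query))   # must still reach exit
    return lo, hi

def range_query_probability(lo, hi, q_start, q_end):
    """Probability that the object lies inside [q_start, q_end], assuming its
    position is uniform over the feasible interval (the probability
    calculation step of the query processing)."""
    if hi <= lo:
        return 1.0 if q_start <= lo <= q_end else 0.0
    overlap = max(0.0, min(hi, q_end) - max(lo, q_start))
    return overlap / (hi - lo)
```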
Robot environment expert system
NASA Technical Reports Server (NTRS)
Potter, J. L.
1985-01-01
The Robot Environment Expert System uses a hexadecimal tree data structure to model a complex robot environment where not only the robot arm moves, but also the robot itself and other objects may move. The hextree model allows dynamic updating, collision avoidance and path planning over time, to avoid moving objects.
Digital Image Correlation for Performance Monitoring
NASA Technical Reports Server (NTRS)
Palaviccini, Miguel; Turner, Dan; Herzberg, Michael
2016-01-01
Evaluating the health of a mechanism requires more than just a binary evaluation of whether an operation was completed. It requires analyzing more comprehensive, full-field data. Health monitoring is a process of non-destructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed to track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.
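A bare-bones version of the image-correlation tracking described above can be written with normalized cross-correlation: a template cut around a laser-marked fiducial is matched in each frame and its best-match location is recorded. This sketch assumes grayscale frames and a fixed template; the production tracker handles complex shapes and obstructions, which this does not.

```python
import numpy as np
from skimage.feature import match_template

def track_fiducial(frames, template):
    """Track a fiducial patch across video frames by normalized
    cross-correlation; returns the (row, col) of the best match per frame.
    A minimal digital image correlation sketch, not the production tracker."""
    positions = []
    for frame in frames:                       # frames: iterable of 2-D arrays
        score = match_template(frame, template, pad_input=True)
        positions.append(np.unravel_index(np.argmax(score), score.shape))
    return positions
```

Differencing successive positions of each fiducial would then yield the per-component motion metrics used for state-of-health evaluation.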
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Haga, Yoshihiro; Chida, Koichi; Inaba, Yohei; Kaga, Yuji; Meguro, Taiichiro; Zuguchi, Masayuki
2016-02-01
As the use of diagnostic X-ray equipment with flat panel detectors (FPDs) has increased, so has the importance of proper management of FPD systems. To ensure quality control (QC) of FPD systems, an easy method for evaluating FPD imaging performance for both stationary and moving objects is required. Until now, simple rotatable QC phantoms have not been available for easy evaluation of the performance (spatial resolution and dynamic range) of FPDs in imaging moving objects. We developed a QC phantom for this purpose. It consists of three thicknesses of copper and a rotatable test pattern of piano wires of various diameters. Initial tests confirmed its stable performance. Our moving phantom is very useful for QC of FPD images of moving objects because it enables easy visual evaluation of imaging performance (spatial resolution and dynamic range).
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1991-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem has been incorporated in the framework of an on-line motion planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. First, the deterministic problem, where the information about the objects is assumed to be certain, is examined. If, instead of the Euclidean norm, the L(sub 1) or L(sub infinity) norm is used to represent distance, the problem becomes a linear programming problem. The stochastic problem is then formulated, where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: (1) filtering of the minimum distance between the robot and the moving object at the present time; and (2) prediction of the minimum distance in the future, in order to predict possible collisions with the moving obstacles and estimate the collision time.
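The claim that the L1 or L-infinity distance between convex polyhedra reduces to a linear program can be illustrated directly. In the sketch below the two polyhedra are given by their vertex sets, a point in each is written as a convex combination of vertices, and the L-infinity distance is minimized with scipy; the formulation and variable layout are illustrative rather than the authors' exact construction.

```python
import numpy as np
from scipy.optimize import linprog

def min_linf_distance(V, W):
    """Minimum L-infinity distance between two convex polyhedra given by their
    vertex arrays V (m x d) and W (n x d), posed as a linear program."""
    V, W = np.asarray(V, float), np.asarray(W, float)
    m, d = V.shape
    n = W.shape[0]
    nvar = m + n + 1                       # variables: [lambda, mu, t]
    c = np.zeros(nvar); c[-1] = 1.0        # minimize t

    # points inside each polyhedron are convex combinations of its vertices
    A_eq = np.zeros((2, nvar)); b_eq = np.ones(2)
    A_eq[0, :m] = 1.0                      # sum(lambda) = 1
    A_eq[1, m:m + n] = 1.0                 # sum(mu) = 1

    # |(V^T lambda - W^T mu)_k| <= t for every coordinate k
    A_ub = np.zeros((2 * d, nvar)); b_ub = np.zeros(2 * d)
    for k in range(d):
        A_ub[2 * k, :m] = V[:, k];      A_ub[2 * k, m:m + n] = -W[:, k]
        A_ub[2 * k, -1] = -1.0
        A_ub[2 * k + 1, :m] = -V[:, k]; A_ub[2 * k + 1, m:m + n] = W[:, k]
        A_ub[2 * k + 1, -1] = -1.0

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    return res.fun                         # 0 means the polyhedra intersect
```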
ERIC Educational Resources Information Center
Kemp, Andrew
2005-01-01
Everything moves. Even apparently stationary objects such as houses, roads, or mountains are moving because they sit on a spinning planet orbiting the Sun. Not surprisingly, the concepts of motion and the forces that affect moving objects are an integral part of the middle school science curriculum. However, middle school students are often taught…
ERIC Educational Resources Information Center
Houlrik, Jens Madsen
2009-01-01
The Lorentz transformation applies directly to the kinematics of moving particles viewed as geometric points. Wave propagation, on the other hand, involves moving planes which are extended objects defined by simultaneity. By treating a plane wave as a geometric object moving at the phase velocity, novel results are obtained that illustrate the…
Massive photometry of low-altitude artificial satellites on Mini-Mega-TORTORA
NASA Astrophysics Data System (ADS)
Karpov, S.; Katkova, E.; Beskin, G.; Biryukov, A.; Bondar, S.; Davydov, E.; Ivanov, E.; Perkov, A.; Sasyuk, V.
2016-12-01
The nine-channel Mini-Mega-TORTORA (MMT-9) optical wide-field monitoring system with high temporal resolution has been in operation since June 2014. The system has 0.1 s temporal resolution and an effective detection limit of around 10 mag (calibrated to the V filter) for fast-moving objects on this timescale. In addition to its primary scientific operation, the system detects 200-500 tracks of satellites every night, on both low-altitude and high-ellipticity orbits. Using these data we created and maintain a public database of photometric characteristics for these satellites, available online.
Real Objects Can Impede Conditional Reasoning but Augmented Objects Do Not.
Sato, Yuri; Sugimoto, Yutaro; Ueda, Kazuhiro
2018-03-01
In this study, Knauff and Johnson-Laird's (2002) visual impedance hypothesis (i.e., mental representations with irrelevant visual detail can impede reasoning) is applied to the domain of external representations and diagrammatic reasoning. We show that the use of real objects and augmented real (AR) objects can control human interpretation and reasoning about conditionals. As participants made inferences (e.g., an invalid one from "if P then Q" to "P"), they also moved objects corresponding to premises. Participants who moved real objects made more invalid inferences than those who moved AR objects and those who did not manipulate objects (there was no significant difference between the last two groups). Our results showed that real objects impeded conditional reasoning, but AR objects did not. These findings are explained by the fact that real objects may over-specify a single state that exists, while AR objects suggest multiple possibilities. Copyright © 2017 Cognitive Science Society, Inc.
Shadow detection of moving objects based on multisource information in Internet of things
NASA Astrophysics Data System (ADS)
Ma, Zhen; Zhang, De-gan; Chen, Jie; Hou, Yue-xian
2017-05-01
Moving object detection is an important part of intelligent video surveillance under the banner of the Internet of Things, and detecting the shadow of a moving target is an important step within it: the accuracy of shadow detection directly affects the object detection results. A review of existing shadow detection methods shows that using only one feature cannot produce accurate detection results. We therefore present a new method for shadow detection that combines colour information, optical invariance, and texture features. Through a comprehensive analysis of the detection results from these three kinds of information, shadows are effectively determined. By combining the advantages of the various methods, the approach achieves good results in experiments.
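As a concrete, if simplified, illustration of combining a colour cue with a texture cue, the sketch below labels a foreground pixel as cast shadow when it is darker than the background but keeps a similar chromaticity in Lab space and its local gradient structure is largely unchanged. The thresholds and the particular cues are illustrative assumptions, not the fusion rule of the paper.

```python
import cv2
import numpy as np

def shadow_mask(frame_bgr, background_bgr, fg_mask,
                l_lo=0.4, l_hi=0.95, ab_tol=10, grad_tol=20):
    """Label foreground pixels as cast shadow when (i) they are darker than the
    background but keep similar chromaticity in Lab space and (ii) the local
    texture (gradient magnitude) barely changes. Threshold values are
    illustrative placeholders."""
    lab_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab_b = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)

    ratio = (lab_f[..., 0] + 1.0) / (lab_b[..., 0] + 1.0)        # lightness ratio
    color_cue = (ratio > l_lo) & (ratio < l_hi) \
        & (np.abs(lab_f[..., 1] - lab_b[..., 1]) < ab_tol) \
        & (np.abs(lab_f[..., 2] - lab_b[..., 2]) < ab_tol)

    def grad_mag(img):
        g = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        return cv2.magnitude(cv2.Sobel(g, cv2.CV_32F, 1, 0),
                             cv2.Sobel(g, cv2.CV_32F, 0, 1))

    texture_cue = np.abs(grad_mag(frame_bgr) - grad_mag(background_bgr)) < grad_tol
    return fg_mask.astype(bool) & color_cue & texture_cue
```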
NASA Astrophysics Data System (ADS)
Li, Yu-Ting; Wickens, Jeffery R.; Huang, Yi-Ling; Pan, Wynn H. T.; Chen, Fu-Yu Beverly; Chen, Jia-Jin Jason
2013-08-01
Objective. Fast-scan cyclic voltammetry (FSCV) is commonly used to monitor phasic dopamine release, which is usually performed using tethered recording and for limited types of animal behavior. It is necessary to design a wireless dopamine sensing system for animal behavior experiments. Approach. This study integrates a wireless FSCV system for monitoring the dopamine signal in the ventral striatum with an electrical stimulator that induces biphasic current to excite dopaminergic neurons in awake freely moving rats. The measured dopamine signals are unidirectionally transmitted from the wireless FSCV module to the host unit. To reduce electrical artifacts, an optocoupler and a separate power are applied to isolate the FSCV system and electrical stimulator, which can be activated by an infrared controller. Main results. In the validation test, the wireless backpack system has similar performance in comparison with a conventional wired system and it does not significantly affect the locomotor activity of the rat. In the cocaine administration test, the maximum electrically elicited dopamine signals increased to around 230% of the initial value 20 min after the injection of 10 mg kg-1 cocaine. In a classical conditioning test, the dopamine signal in response to a cue increased to around 60 nM over 50 successive trials while the electrically evoked dopamine concentration decreased from about 90 to 50 nM in the maintenance phase. In contrast, the cue-evoked dopamine concentration progressively decreased and the electrically evoked dopamine was eliminated during the extinction phase. In the histological evaluation, there was little damage to brain tissue after five months chronic implantation of the stimulating electrode. Significance. We have developed an integrated wireless voltammetry system for measuring dopamine concentration and providing electrical stimulation. The developed wireless FSCV system is proven to be a useful experimental tool for the continuous monitoring of dopamine levels during animal learning behavior studies of freely moving rats.
Wireless acceleration sensor of moving elements for condition monitoring of mechanisms
NASA Astrophysics Data System (ADS)
Sinitsin, Vladimir V.; Shestakov, Aleksandr L.
2017-09-01
Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.
Localization and tracking of moving objects in two-dimensional space by echolocation.
Matsuo, Ikuo
2013-02-01
Bats use frequency-modulated echolocation to identify and capture moving objects in real three-dimensional space. Experimental evidence indicates that bats are capable of locating static objects with a range accuracy of less than 1 μs. A previously introduced model estimates ranges of multiple, static objects using linear frequency modulation (LFM) sound and Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates. The delay time for a single object was estimated with an accuracy of about 1.3 μs by measuring the echo at a low signal-to-noise ratio (SNR). The range accuracy was dependent not only on the SNR but also the Doppler shift, which was dependent on the movements. However, it was unclear whether this model could estimate the moving object range at each timepoint. In this study, echoes were measured from the rotating pole at two receiving points by intermittently emitting LFM sounds. The model was shown to localize moving objects in two-dimensional space by accurately estimating the object's range at each timepoint.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This produces mixed motion in the scene and makes it difficult to distinguish between the target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with the traditional moving object detection methods that relaxes the stationary-camera restriction, by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application of this approach is a road vehicle's rear-view camera system. PMID:26712761
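The pre-detection step of compensating ego-motion can be sketched as follows: sparse corners are tracked between consecutive frames, a global homography dominated by the background is fitted with RANSAC, the previous frame is warped into the current view, and the residual difference is thresholded. This is a generic OpenCV sketch of the idea, not the paper's FPGA implementation, and the parameter values are placeholders.

```python
import cv2
import numpy as np

def motion_after_egomotion(prev_gray, curr_gray, diff_thresh=30):
    """Compensate camera ego-motion before differencing: track sparse corners,
    fit a global homography (dominated by the background), warp the previous
    frame into the current view, and threshold the residual difference."""
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good = status.ravel() == 1
    H, _ = cv2.findHomography(pts_prev[good], pts_curr[good], cv2.RANSAC, 3.0)

    h, w = curr_gray.shape
    prev_warped = cv2.warpPerspective(prev_gray, H, (w, h))
    diff = cv2.absdiff(curr_gray, prev_warped)
    return (diff > diff_thresh).astype(np.uint8)   # candidate moving-object mask
```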
Wireless inertial measurement of head kinematics in freely-moving rats
Pasquet, Matthieu O.; Tihy, Matthieu; Gourgeon, Aurélie; Pompili, Marco N.; Godsil, Bill P.; Léna, Clément; Dugué, Guillaume P.
2016-01-01
While miniature inertial sensors offer a promising means for precisely detecting, quantifying and classifying animal behaviors, versatile inertial sensing devices adapted for small, freely-moving laboratory animals are still lacking. We developed a standalone and cost-effective platform for performing high-rate wireless inertial measurements of head movements in rats. Our system is designed to enable real-time bidirectional communication between the headborne inertial sensing device and third party systems, which can be used for precise data timestamping and low-latency motion-triggered applications. We illustrate the usefulness of our system in diverse experimental situations. We show that our system can be used for precisely quantifying motor responses evoked by external stimuli, for characterizing head kinematics during normal behavior and for monitoring head posture under normal and pathological conditions obtained using unilateral vestibular lesions. We also introduce and validate a novel method for automatically quantifying behavioral freezing during Pavlovian fear conditioning experiments, which offers superior performance in terms of precision, temporal resolution and efficiency. Thus, this system precisely acquires movement information in freely-moving animals, and can enable objective and quantitative behavioral scoring methods in a wide variety of experimental situations. PMID:27767085
System and method for moving a probe to follow movements of tissue
NASA Technical Reports Server (NTRS)
Feldstein, C.; Andrews, T. W.; Crawford, D. W.; Cole, M. A. (Inventor)
1981-01-01
An apparatus is described for moving a probe that engages moving living tissue such as a heart or an artery that is penetrated by the probe, which moves the probe in synchronism with the tissue to maintain the probe at a constant location with respect to the tissue. The apparatus includes a servo positioner which moves a servo member to maintain a constant distance from a sensed object while applying very little force to the sensed object, and a follower having a stirrup at one end resting on a surface of the living tissue and another end carrying a sensed object adjacent to the servo member. A probe holder has one end mounted on the servo member and another end which holds the probe.
Detecting and Analyzing Multiple Moving Objects in Crowded Environments with Coherent Motion Regions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheriyadat, Anil M.
Understanding the world around us from large-scale video data requires vision systems that can perform automatic interpretation. While human eyes can unconsciously perceive independent objects in crowded scenes and other challenging operating environments, automated systems have difficulty detecting, counting, and understanding their behavior in similar scenes. Computer scientists at ORNL have developed a technology termed "Coherent Motion Region Detection" that involves identifying multiple independent moving objects in crowded scenes by aggregating low-level motion cues extracted from moving objects. Humans and other species exploit such low-level motion cues seamlessly to perform perceptual grouping for visual understanding. The algorithm detects and tracks feature points on moving objects, resulting in partial trajectories that span coherent 3D regions in the space-time volume defined by the video. In the case of multi-object motion, many possible coherent motion regions can be constructed around the set of trajectories. The unique approach in the algorithm is to identify all possible coherent motion regions, then extract a subset of motion regions based on an innovative measure to automatically locate moving objects in crowded environments. The software reports a snapshot of each object, a count, and derived statistics (count over time) from input video streams. The software can directly process videos streamed over the internet or directly from a hardware device (camera).
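A very reduced stand-in for the trajectory-aggregation idea is sketched below: feature-point tracks collected over a short window are clustered by their mean position and net displacement, so that tracks moving coherently end up in the same group. This is only meant to illustrate aggregating low-level motion cues; the actual coherent-motion-region measure is more involved.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def group_trajectories(tracks, dist_thresh=15.0):
    """Group partial feature-point trajectories into candidate moving objects
    by clustering their mean positions and net displacement vectors. A crude
    illustration, not the ORNL coherent-motion-region construction."""
    # tracks: list of (T x 2) arrays of point positions over T frames
    feats = []
    for tr in tracks:
        disp = tr[-1] - tr[0]                  # net motion over the window
        feats.append(np.hstack([tr.mean(axis=0), disp]))
    feats = np.asarray(feats)
    Z = linkage(feats, method="average")
    labels = fcluster(Z, t=dist_thresh, criterion="distance")
    return labels      # trajectories sharing a label form one moving object
```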
Moving object detection and tracking in videos through turbulent medium
NASA Astrophysics Data System (ADS)
Halder, Kalyan Kumar; Tahtali, Murat; Anavatti, Sreenatha G.
2016-06-01
This paper addresses the problem of identifying and tracking moving objects in a video sequence having a time-varying background. This is a fundamental task in many computer vision applications, though a very challenging one because of turbulence that causes blurring and spatiotemporal movements of the background images. Our proposed approach involves two major steps. First, a moving object detection algorithm detects real motion by separating out the turbulence-induced motion using a two-level thresholding technique. In the second step, a feature-based generalized regression neural network is applied to track the detected objects throughout the frames of the video sequence. The proposed approach uses the centroid and area features of the moving objects and creates the reference regions instantly by selecting the objects within a circle. Simulation experiments are carried out on several turbulence-degraded video sequences, and comparisons with an earlier method confirm that the proposed approach provides more effective tracking of the targets.
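One plausible reading of the two-level thresholding step is a hysteresis scheme: a low threshold captures every apparent motion, a high threshold keeps only strong responses, and weak regions survive only if they are connected to a strong one, which discards low-amplitude turbulence wobble. The sketch below implements that reading; the rule and the threshold values are assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage

def real_motion_mask(frame, background, low=10, high=40):
    """Two-level (hysteresis) thresholding of the background difference:
    weak responses are kept only when connected to a strong response, so
    low-amplitude turbulence-induced wobble is discarded."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    weak = diff > low
    strong = diff > high
    labels, _ = ndimage.label(weak)            # connected weak regions
    keep = np.unique(labels[strong])           # regions containing strong pixels
    return np.isin(labels, keep[keep > 0])
```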
Online phase measuring profilometry for rectilinear moving object by image correction
NASA Astrophysics Data System (ADS)
Yuan, Han; Cao, Yi-Ping; Chen, Chen; Wang, Ya-Pin
2015-11-01
In phase measuring profilometry (PMP), the object must be static for point-to-point reconstruction with the captured deformed patterns. While the object is moving rectilinearly online, the size and pixel-position differences of the object in the different captured deformed patterns do not meet the point-to-point requirement. We propose an online PMP based on image correction to measure the three-dimensional shape of a rectilinearly moving object. In the proposed method, the deformed patterns captured by a charge-coupled device (CCD) camera are first reprojected from the oblique view to an aerial view and then translated based on the feature points of the object. This makes the object appear stationary in the deformed patterns. Experimental results show the feasibility and efficiency of the proposed method.
Moving object localization using optical flow for pedestrian detection from a moving vehicle.
Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun
2014-01-01
This paper presents a pedestrian detection method from a moving vehicle using optical flow and histograms of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flows after compensating for the ego-motion of the camera. To obtain the optical flow, two consecutive images are divided into grid cells of 14 × 14 pixels; each cell is then tracked to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is performed according to each corresponding cell in the consecutive images, so that conforming optical flows are extracted. The regions of moving objects are detected as transformed regions that differ from the previously registered background. Morphological processing is applied to obtain candidate human regions. In order to recognize the object, HOG features are extracted on the candidate region and classified using a linear support vector machine (SVM). The HOG feature vectors are used as input to the linear SVM to classify the given input as pedestrian or non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement compared with the original HOG using the ETHZ pedestrian dataset.
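The classification stage can be sketched in a few lines: compute a HOG descriptor for each candidate window and feed it to a linear SVM trained on pedestrian and non-pedestrian examples. The HOG parameters and the SVM regularization value below are common defaults, not the settings reported in the paper, and the windows are assumed to be grayscale and already resized to a fixed size.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_feature(window):
    """HOG descriptor of a grayscale candidate window of fixed size;
    cell/block parameters are typical defaults."""
    return hog(window, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train_classifier(pos_windows, neg_windows):
    """Train the pedestrian / non-pedestrian linear SVM."""
    X = np.array([hog_feature(w) for w in pos_windows + neg_windows])
    y = np.array([1] * len(pos_windows) + [0] * len(neg_windows))
    return LinearSVC(C=0.01).fit(X, y)

def is_pedestrian(clf, candidate_window):
    """Classify one candidate region produced by the motion segmentation."""
    return bool(clf.predict([hog_feature(candidate_window)])[0])
```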
Brain Activation during Spatial Updating and Attentive Tracking of Moving Targets
ERIC Educational Resources Information Center
Jahn, Georg; Wendt, Julia; Lotze, Martin; Papenmeier, Frank; Huff, Markus
2012-01-01
Keeping aware of the locations of objects while one is moving requires the updating of spatial representations. As long as the objects are visible, attentional tracking is sufficient, but knowing where objects out of view went in relation to one's own body involves an updating of spatial working memory. Here, multiple object tracking was employed…
A Low-Power Wireless Image Sensor Node with Noise-Robust Moving Object Detection and a Region-of-Interest Based Rate Controller
Ko, Jong Hwan (Georgia Institute of Technology, Atlanta, GA 30332 USA; contact: jonghwan.ko@gatech.edu)
2017-03-01
This paper presents a low-power wireless image sensor node with a noise-robust moving object detection and region-of-interest based rate controller [Fig. 1]. ...
Aksiuta, E F; Ostashev, A V; Sergeev, E V; Aksiuta, V E
1997-01-01
The methods of information (entropy) error theory were used to perform a metrological analysis of well-known commercial measuring systems for timing an anticipative reaction (AR) to the position of a moving object, based on electromechanical, gas-discharge, and electronic principles. It was established that the required measurement accuracy is achieved only by systems based on the electronic principle of moving-object simulation and AR measurement.
Make the First Move: How Infants Learn about Self-Propelled Objects
ERIC Educational Resources Information Center
Rakison, David H.
2006-01-01
In 3 experiments, the author investigated 16- to 20-month-old infants' attention to dynamic and static parts in learning about self-propelled objects. In Experiment 1, infants were habituated to simple noncausal events in which a geometric figure with a single moving part started to move without physical contact from an identical geometric figure…
Binocular Perception of 2D Lateral Motion and Guidance of Coordinated Motor Behavior.
Fath, Aaron J; Snapp-Childs, Winona; Kountouriotis, Georgios K; Bingham, Geoffrey P
2016-04-01
Zannoli, Cass, Alais, and Mamassian (2012) found greater audiovisual lag between a tone and disparity-defined stimuli moving laterally (90-170 ms) than for disparity-defined stimuli moving in depth or luminance-defined stimuli moving laterally or in depth (50-60 ms). We tested if this increased lag presents an impediment to visually guided coordination with laterally moving objects. Participants used a joystick to move a virtual object in several constant relative phases with a laterally oscillating stimulus. Both the participant-controlled object and the target object were presented using a disparity-defined display that yielded information through changes in disparity over time (CDOT) or using a luminance-defined display that additionally provided information through monocular motion and interocular velocity differences (IOVD). Performance was comparable for both disparity-defined and luminance-defined displays in all relative phases. This suggests that, despite lag, perception of lateral motion through CDOT is generally sufficient to guide coordinated motor behavior.
NASA Astrophysics Data System (ADS)
Wu, Bitao; Wu, Gang; Lu, Huaxi; Feng, De-chen
2017-03-01
Fiber optic sensing technology has been widely used in civil infrastructure health monitoring due to its various advantages, e.g., immunity to electromagnetic interference, corrosion resistance, etc. This paper investigates a new method for stiffness monitoring and damage identification of bridges under moving vehicle loads using spatially-distributed optical fiber sensors. The relationship between the element stiffness of the bridge and the long-gauge strain history is first studied, and a formula expressed in terms of the long-gauge strain history is derived for calculating the bridge stiffness. The stiffness coefficient from the formula can also be used to identify the damage extent of the bridge. In order to verify the proposed method, a model test of a 1:10 scale bridge-vehicle system is conducted and the long-gauge strain history is obtained through fiber Bragg grating sensors. The test results indicate that the proposed method is suitable for stiffness monitoring and damage assessment of bridges under moving vehicular loads.
Distance estimation and collision prediction for on-line robotic motion planning
NASA Technical Reports Server (NTRS)
Kyriakopoulos, K. J.; Saridis, G. N.
1992-01-01
An efficient method for computing the minimum distance and predicting collisions between moving objects is presented. This problem is incorporated into the framework of an in-line motion-planning algorithm to satisfy collision avoidance between a robot and moving objects modeled as convex polyhedra. In the beginning, the deterministic problem where the information about the objects is assumed to be certain is examined. L(1) or L(infinity) norms are used to represent distance and the problem becomes a linear programming problem. The stochastic problem is formulated where the uncertainty is induced by sensing and the unknown dynamics of the moving obstacles. Two problems are considered: First, filtering of the distance between the robot and the moving object at the present time. Second, prediction of the minimum distance in the future in order to predict the collision time.
Approach for Structurally Clearing an Adaptive Compliant Trailing Edge Flap for Flight
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Lokos, William A.; Cruz, Josue; Crampton, Glen; Stephens, Craig A.; Kota, Sridhar; Ervin, Gregory; Flick, Pete
2015-01-01
The Adaptive Compliant Trailing Edge (ACTE) flap was flown on the National Aeronautics and Space Administration (NASA) Gulfstream GIII testbed at the NASA Armstrong Flight Research Center. This smoothly curving flap replaced the existing Fowler flaps creating a seamless control surface. This compliant structure, developed by FlexSys Inc. in partnership with the Air Force Research Laboratory, supported NASA objectives for airframe structural noise reduction, aerodynamic efficiency, and wing weight reduction through gust load alleviation. A thorough structures airworthiness approach was developed to move this project safely to flight. A combination of industry and NASA standard practice require various structural analyses, ground testing, and health monitoring techniques for showing an airworthy structure. This paper provides an overview of compliant structures design, the structural ground testing leading up to flight, and the flight envelope expansion and monitoring strategy. Flight data will be presented, and lessons learned along the way will be highlighted.
Application of active magnetic bearings in flexible rotordynamic systems - A state-of-the-art review
NASA Astrophysics Data System (ADS)
Siva Srinivas, R.; Tiwari, R.; Kannababu, Ch.
2018-06-01
In this paper a critical review of the literature on applications of Active Magnetic Bearing (AMB) systems in flexible rotordynamic systems is presented. AMBs find various applications in rotating machinery; however, this paper mainly focuses on work on vibration suppression and on condition monitoring using AMBs. It briefly introduces the reader to the AMB working principle, provides details of the various hardware components of a typical rotor-AMB test rig, and presents a background on traditional methods of vibration suppression and condition monitoring in flexible rotors. It then summarizes the basic features, instrumentation, and main objectives of the AMB-integrated flexible rotor test rigs available in the literature. A couple of lookup tables summarize important information about the test rigs in the papers within the scope of this article. Finally, future directions for AMB research within the paper's scope are suggested.
NASA Astrophysics Data System (ADS)
Ciurapiński, Wieslaw; Dulski, Rafal; Kastek, Mariusz; Szustakowski, Mieczyslaw; Bieszczad, Grzegorz; Życzkowski, Marek; Trzaskawka, Piotr; Piszczek, Marek
2009-09-01
The paper presents the concept of a multispectral protection system for perimeter protection of stationary and moving objects. The system consists of an active ground radar and thermal and visible cameras. The radar allows the system to locate potential intruders and to control an observation area for the system cameras. The multisensor construction of the system ensures a significant improvement in the probability of detecting an intruder and a reduction of false alarms. A final decision from the system is worked out using image data. The method of data fusion used in the system is presented. The system works under the control of the FLIR Nexus system. Nexus offers complete technology and components to create network-based, high-end integrated systems for security and surveillance applications. Based on a unique "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders over a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering.
Searching for moving objects in HSC-SSP: Pipeline and preliminary results
NASA Astrophysics Data System (ADS)
Chen, Ying-Tung; Lin, Hsing-Wen; Alexandersen, Mike; Lehner, Matthew J.; Wang, Shiang-Yu; Wang, Jen-Hung; Yoshida, Fumi; Komiyama, Yutaka; Miyazaki, Satoshi
2018-01-01
The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is currently the deepest wide-field survey in progress. The 8.2 m aperture of the Subaru telescope is very powerful in detecting faint/small moving objects, including near-Earth objects, asteroids, centaurs and trans-Neptunian objects (TNOs). However, the cadence and dithering pattern of the HSC-SSP are not designed for detecting moving objects, making it difficult to do so systematically. In this paper, we introduce a new pipeline for detecting moving objects (specifically TNOs) in a non-dedicated survey. The HSC-SSP catalogs are sliced into HEALPix partitions. Then, the stationary detections and false positives are removed with a machine-learning algorithm to produce a list of moving object candidates. An orbit linking algorithm and visual inspections are executed to generate the final list of detected TNOs. The preliminary results of a search for TNOs using this new pipeline on data from the first HSC-SSP data release (2014 March to 2015 November) present 231 TNO/Centaur candidates. The bright candidates with Hr < 7.7 and i > 5 show that the best-fitting slope of a single power law to the absolute magnitude distribution is 0.77. The g - r color distribution of hot HSC-SSP TNOs indicates a bluer peak at g - r = 0.9, which is consistent with the bluer peak of the bimodal color distribution in the literature.
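Two early stages of such a pipeline, slicing the catalog into HEALPix partitions and flagging detections whose sky position repeats across epochs as stationary sources, can be sketched as below. The nside value, matching radius and brute-force matching loop are illustrative choices; the actual pipeline removes stationary detections and false positives with a trained machine-learning filter.

```python
import numpy as np
import healpy as hp

def healpix_partition(ra_deg, dec_deg, nside=64):
    """Assign each detection to a HEALPix pixel so that later stages can work
    on one sky partition at a time; nside here is an arbitrary choice."""
    return hp.ang2pix(nside, ra_deg, dec_deg, lonlat=True)

def flag_stationary(ra_deg, dec_deg, epoch_id, match_radius_arcsec=0.5):
    """Flag detections whose position repeats (within match_radius) in a
    different epoch: those are stationary sources rather than moving objects.
    A brute-force sketch of the cue only."""
    ra = np.radians(ra_deg); dec = np.radians(dec_deg)
    tol = np.radians(match_radius_arcsec / 3600.0)
    stationary = np.zeros(len(ra), dtype=bool)
    for i in range(len(ra)):
        dra = (ra - ra[i]) * np.cos(dec[i])          # small-angle separation
        sep = np.hypot(dra, dec - dec[i])
        if np.any((sep < tol) & (epoch_id != epoch_id[i])):
            stationary[i] = True
    return stationary
```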
Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking
Wang, Yanjiang; Qi, Yujuan; Li, Yongping
2013-01-01
The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robust tracking the moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system by its own experience. A number of such memory-based agents are randomly distributed nearby the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changed or the occlusion recovered and outperforms the traditional particle filter-based tracking methods. PMID:23843739
A case study on displacement analysis of Vasa warship
NASA Astrophysics Data System (ADS)
Eshagh, Mehdi; Johansson, Filippa; Karlsson, Lenita; Horemuz, Milan
2018-04-01
Monitoring deformation of man-made structures is very important to prevent the risk of collapse and save lives. Such a process is also used for monitoring change in historical objects, which deform continuously with time. An example is the Vasa warship, which was under water for about 300 years. The ship was raised from the bottom of the sea and is kept in the Vasa museum in Stockholm. A geodetic network with points on the museum building and the ship's body has been established and measured for 12 years for monitoring the ship's deformation. The coordinate time series of each point on the ship and their uncertainties have been estimated epoch by epoch. In this paper, our goal is to statistically analyse the ship's hull movements. By fitting a quadratic polynomial to the coordinate time series of each point on the hull, its acceleration and velocity are estimated. In addition, their significance is tested by comparing them with their respective errors estimated after the fitting. Our numerical investigations show that the back of the ship, having the highest elevation and slope, has moved vertically faster than the other places, with a velocity and an acceleration of about 2 mm/year and 0.1 mm/year^2, respectively, and this part of the ship is the weakest, with a higher risk of collapse. The central parts of the ship are more stable, as the ship hull there is almost vertical and closer to the floor. Generally, the hull is moving towards its port side and downwards.
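The per-point analysis can be sketched with a quadratic fit to one coordinate time series, from which the velocity and acceleration and their formal errors follow; a parameter is treated as significant here when it exceeds roughly twice its standard error. This is a minimal illustration of the described approach, not the exact estimator or test used in the study.

```python
import numpy as np

def fit_motion(t_years, coord_mm):
    """Fit coord(t) = a*t^2 + b*t + c to one coordinate time series and report
    velocity (b, at t = 0) and acceleration (2a) with their standard errors."""
    coeffs, cov = np.polyfit(t_years, coord_mm, deg=2, cov=True)
    a, b, _ = coeffs
    sa, sb = np.sqrt(cov[0, 0]), np.sqrt(cov[1, 1])
    accel, accel_err = 2 * a, 2 * sa          # mm/year^2
    vel, vel_err = b, sb                      # mm/year
    return {
        "velocity": vel, "velocity_significant": abs(vel) > 2 * vel_err,
        "acceleration": accel, "acceleration_significant": abs(accel) > 2 * accel_err,
    }
```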
Method and apparatus for hybrid position/force control of multi-arm cooperating robots
NASA Technical Reports Server (NTRS)
Hayati, Samad A. (Inventor)
1989-01-01
Two or more robotic arms having end effectors rigidly attached to an object to be moved are disclosed. A hybrid position/force control system is provided for driving each of the robotic arms. The object to be moved is represented as having a total mass that consists of the actual mass of the object plus the mass of the movable arms that are rigidly attached to it. The arms are driven in a positive way by the hybrid control system to assure that each arm shares in the position/force applied to the object. The burden of actuation is shared by each arm in a non-conflicting way as the arms independently control the position of, and force upon, a designated point on the object.
Sokal, Brad; Uswatte, Gitendra; Barman, Joydip; Brewer, Michael; Byrom, Ezekiel; Latten, Jessica; Joseph, Jeethu; Serafim, Camila; Ghaffari, Touraj; Sarkar, Nilanjan
2014-03-01
To test the convergent validity of an objective method, Sensor-Enabled Radio-frequency Identification System for Monitoring Arm Activity (SERSMAA), that distinguishes between functional and nonfunctional activity. Cross-sectional study. Laboratory. Participants (N=25) were ≥0.2 years poststroke (median, 9) with a wide range of severity of upper-extremity hemiparesis. Not applicable. After stroke, laboratory tests of the motor capacity of the more-affected arm poorly predict spontaneous use of that arm in daily life. However, available subjective methods for measuring everyday arm use are vulnerable to self-report biases, whereas available objective methods only provide information on the amount of activity without regard to its relation with function. The SERSMAA consists of a proximity-sensor receiver on the more-affected arm and multiple units placed on objects. Functional activity is signaled when the more-affected arm is close to an object that is moved. Participants were videotaped during a laboratory simulation of an everyday activity, that is, setting a table with cups, bowls, and plates instrumented with transmitters. Observers independently coded the videos in 2-second blocks with a validated system for classifying more-affected arm activity. There was a strong correlation (r=.87, P<.001) between time that the more-affected arm was used for handling objects according to the SERSMAA and functional activity according to the observers. The convergent validity of SERSMAA for measuring more-affected arm functional activity after stroke was supported in a simulation of everyday activity. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
HuMOVE: a low-invasive wearable monitoring platform in sexual medicine.
Ciuti, Gastone; Nardi, Matteo; Valdastri, Pietro; Menciassi, Arianna; Basile Fasolo, Ciro; Dario, Paolo
2014-10-01
To investigate an accelerometer-based wearable system, named Human Movement (HuMOVE) platform, designed to enable quantitative and continuous measurement of sexual performance with minimal invasiveness and inconvenience for users. Design, implementation, and development of HuMOVE, a wearable platform equipped with an accelerometer sensor for monitoring inertial parameters for sexual performance assessment and diagnosis, were performed. The system enables quantitative measurement of movement parameters during sexual intercourse, meeting the requirements of wearability, data storage, sampling rate, and interfacing methods, which are fundamental for human sexual intercourse performance analysis. HuMOVE was validated through characterization using a controlled experimental test bench and evaluated in a human model during simulated sexual intercourse conditions. HuMOVE demonstrated to be a robust and quantitative monitoring platform and a reliable candidate for sexual performance evaluation and diagnosis. Characterization analysis on the controlled experimental test bench demonstrated an accurate correlation between the HuMOVE system and data from a reference displacement sensor. Experimental tests in the human model during simulated intercourse conditions confirmed the accuracy of the sexual performance evaluation platform and the effectiveness of the selected and derived parameters. The obtained outcomes also established the project expectations in terms of usability and comfort, evidenced by the questionnaires that highlighted the low invasiveness and acceptance of the device. To the best of our knowledge, HuMOVE platform is the first device for human sexual performance analysis compatible with sexual intercourse; the system has the potential to be a helpful tool for physicians to accurately classify sexual disorders, such as premature or delayed ejaculation. Copyright © 2014 Elsevier Inc. All rights reserved.
Ultralow-dose, feedback imaging with laser-Compton X-ray and laser-Compton gamma ray sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barty, Christopher P. J.
Ultralow-dose x-ray or gamma-ray imaging is based on fast, electronic control of the output of a laser-Compton x-ray or gamma-ray source (LCXS or LCGS). X-ray or gamma-ray shadowgraphs are constructed one (or a few) pixel(s) at a time by monitoring the LCXS or LCGS beam energy required at each pixel of the object to achieve a threshold level of detectability at the detector. In one example, once the threshold for detection is reached, an electronic or optical signal is sent to the LCXS/LCGS that enables a fast optical switch that diverts, either in space or in time, the laser pulses used to create Compton photons. In this way, the object is prevented from being exposed to any further Compton x-rays or gamma-rays until either the laser-Compton beam or the object is moved so that a new pixel location may be illuminated.
NASA Technical Reports Server (NTRS)
Hall, Justin R.; Hastrup, Rolf C.
1990-01-01
The principal challenges in providing effective deep space navigation, telecommunications, and information management architectures and designs for Mars exploration support are presented. The fundamental objectives are to provide the mission with the means to monitor and control mission elements, obtain science, navigation, and engineering data, compute state vectors and navigate, and to move these data efficiently and automatically between mission nodes for timely analysis and decision making. New requirements are summarized, and related issues and challenges including the robust connectivity for manned and robotic links, are identified. Enabling strategies are discussed, and candidate architectures and driving technologies are described.
Laser-Based Trespassing Prediction in Restrictive Environments: A Linear Approach
Cheein, Fernando Auat; Scaglia, Gustavo
2012-01-01
Stationary range laser sensors for intruder monitoring, restricted-space violation detection and workspace determination are extensively used in risky environments. In this work we present a linear approach for predicting the presence of moving agents before they trespass a laser-defined restricted space. Our approach is based on a Taylor series expansion of the detected objects' movements, which makes our proposal suitable for embedded applications. In the experimental results (carried out in different scenarios) presented herein, our proposal shows 100% effectiveness in predicting trespassing situations. Several implementation results and statistical analyses showing the performance of our proposal are included in this work.
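The core of such a linear (Taylor-expansion) predictor can be sketched by estimating velocity and acceleration from the last few scan positions with finite differences and extrapolating the position over a short horizon, then checking whether any extrapolated point enters the restricted region. The finite-difference scheme, the horizon, and the half-plane used as the restricted space below are illustrative assumptions.

```python
import numpy as np

def predict_positions(p_prev2, p_prev, p_now, dt, horizon, steps=20):
    """Second-order Taylor extrapolation of a tracked agent's 2-D position from
    its last three scan positions; dt is the scan period."""
    p_prev2, p_prev, p_now = map(np.asarray, (p_prev2, p_prev, p_now))
    v = (p_now - p_prev) / dt                        # first derivative estimate
    a = (p_now - 2 * p_prev + p_prev2) / dt ** 2     # second derivative estimate
    taus = np.linspace(0.0, horizon, steps)
    return np.array([p_now + v * tau + 0.5 * a * tau ** 2 for tau in taus])

def will_trespass(predicted, x_limit):
    """True if any predicted position enters the restricted half-plane
    x > x_limit (a stand-in for the laser-defined restricted space)."""
    return bool(np.any(predicted[:, 0] > x_limit))
```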
Downstream Fabry-Perot interferometer for acoustic wave monitoring in photoacoustic tomography.
Nuster, Robert; Gruen, Hubert; Reitinger, Bernhard; Burgholzer, Peter; Gratt, Sibylle; Passler, Klaus; Paltauf, Guenther
2011-03-15
An optical detection setup consisting of a focused laser beam fed into a downstream Fabry-Perot interferometer (FPI) for demodulation of acoustically generated optical phase variations is investigated for its applicability in photoacoustic tomography. The device measures the time derivative of acoustic signals integrated along the beam. Compared to a setup where the detection beam is part of a Mach-Zehnder interferometer, the signal-to-noise ratio of the FPI is lower, but the image quality of the two devices is similar. Using the FPI in a photoacoustic tomograph allows scanning the probe beam around the imaging object without moving the latter.
Vision System for Coarsely Estimating Motion Parameters for Unknown Fast Moving Objects in Space
Chen, Min; Hashimoto, Koichi
2017-01-01
Motivated by biological interest in analyzing the navigation behaviors of flying animals, we attempt to build a system that measures their motion states. To do this, in this paper, we build a vision system to detect unknown fast-moving objects within a given space and calculate their motion parameters, represented by positions and poses. We propose a novel method to detect reliable interest points from images of moving objects, which can hardly be detected by general-purpose interest point detectors. 3D points reconstructed using these interest points are then grouped and maintained for detected objects, according to a careful schedule that considers appearance and perspective changes. In the estimation step, a method is introduced to adapt the robust estimation procedure used for dense point sets to the case of sparse sets, reducing the potential risk of greatly biased estimation. Experiments are conducted on real scenes, showing the capability of the system to detect multiple unknown moving objects and estimate their positions and poses. PMID:29206189
Early Knowledge of Object Motion: Continuity and Inertia.
ERIC Educational Resources Information Center
Spelke, Elizabeth; And Others
1994-01-01
Investigated whether infants infer that a hidden, freely moving object will move continuously and smoothly. Six- to 10-month-olds inferred that the object's path would be connected and unobstructed, in accord with continuity. Younger infants did not infer this, in accord with inertia. At 8 and 10 months, knowledge of inertia emerged but remained…
Lee, Young-Sook; Chung, Wan-Young
2012-01-01
Vision-based abnormal event detection for home healthcare systems can be greatly improved using visual sensor-based techniques able to detect, track and recognize objects in the scene. However, in moving object detection and tracking processes, moving cast shadows can be misclassified as part of objects or as moving objects. Shadow removal is an essential step in developing video surveillance systems. The primary goal is to design novel computer vision techniques that can extract objects more accurately and discriminate between abnormal and normal activities. To improve the accuracy of object detection and tracking, our proposed shadow removal algorithm is employed. Abnormal event detection based on visual sensors, using shape feature variation and 3-D trajectory, is presented to overcome the low fall detection rate. The experimental results showed that the success rate of detecting abnormal events was 97% with a false positive rate of 2%. Our proposed algorithm can distinguish diverse fall activities such as forward falls, backward falls, and falling aside from normal activities. PMID:22368486
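The abstract does not detail the shadow removal algorithm itself; as a minimal stand-in, the sketch below uses OpenCV's MOG2 background subtractor, which labels shadow pixels separately so they can be dropped before blob extraction. The video filename and the blob-area threshold are assumptions.

```python
import cv2

# MOG2 background subtractor with shadow detection enabled; shadow pixels are
# labelled 127 in the foreground mask and can be removed before blob analysis.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)

cap = cv2.VideoCapture("home_monitoring.avi")   # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    moving = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop the shadow label (127)
    moving = cv2.morphologyEx(moving, cv2.MORPH_OPEN,
                              cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(moving, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]
cap.release()
```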
The Reach and Impact of Direct Marketing via Brand Websites of Moist Snuff
Timberlake, David S.; Bruckner, Tim A.; Ngo, Vyvian; Nikitin, Dmitriy
2016-01-01
Objective Restricting tobacco marketing is a key element in the US Food and Drug Administration’s (FDA) public health framework for regulating tobacco. Given the dearth of empirical data on direct marketing, the objective of this study was to assess the reach and impact of promotions on sales through snuff websites. Methods Nine brands of snuff, representing more than 90% of market share, were monitored for content of coupons, sweepstakes, contests, and other promotions on their respective websites. Monthly sales data and website traffic for the 9 brands, corresponding to the 48-month period of January 2011 through December 2014, were obtained from proprietary sources. A time-series analysis, based on the autoregressive, integrated, moving average (ARIMA) method, was employed for testing the relationships among sales, website visits, and promotions. Results Website traffic increased substantially during the promotion periods for most brands. Time-series analyses, however, revealed that promotion periods for 5 of 7 brands did not significantly correlate with monthly snuff sales. Conclusions The success in attracting tobacco consumers to website promotions demonstrates the marketing reach of snuff manufacturers. This form of direct marketing should be monitored by the FDA given evidence of adolescents’ exposure to cigarette brand websites. PMID:27517061
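A hedged sketch of the kind of ARIMA analysis described above is given below, using statsmodels with a promotion indicator as an exogenous regressor; the (1,1,1) order, the synthetic sales series, and the promotion series are assumptions made purely for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
months = pd.period_range("2011-01", "2014-12", freq="M")   # 48-month study window
promo = rng.integers(0, 2, size=len(months))               # 1 = promotion active (synthetic)
sales = 1000 + 5 * promo + rng.normal(0, 20, len(months)).cumsum()

# ARIMA(1,1,1) with the promotion indicator as an exogenous regressor;
# the exogenous coefficient tests the promotion-sales relationship.
model = ARIMA(sales, exog=promo, order=(1, 1, 1)).fit()
print(model.summary())
```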
An artificial reality environment for remote factory control and monitoring
NASA Technical Reports Server (NTRS)
Kosta, Charles Paul; Krolak, Patrick D.
1993-01-01
Work has begun on the merger of two well known systems, VEOS (HITLab) and CLIPS (NASA). In the recent past, the University of Massachusetts Lowell developed a parallel version of NASA CLIPS, called P-CLIPS. This modification allows users to create smaller expert systems which are able to communicate with each other to jointly solve problems. With the merger of a VEOS message system, PCLIPS-V can now act as a group of entities working within VEOS. To display the 3D virtual world we have been using a graphics package called HOOPS, from Ithaca Software. The artificial reality environment we have set up contains actors and objects as found in our Lincoln Logs Factory of the Future project. The environment allows us to view and control the objects within the virtual world. All communication between the separate CLIPS expert systems is done through VEOS. A graphical renderer generates camera views on X-Windows devices; Head Mounted Devices are not required. This allows more people to make use of this technology. We are experimenting with different types of virtual vehicles to give the user a sense that he or she is actually moving around inside the factory looking ahead through windows and virtual monitors.
Digital Image Correlation for Performance Monitoring.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palaviccini, Miguel; Turner, Daniel Z.; Herzberg, Michael
2016-02-01
Evaluating the health of a mechanism requires more than just a binary evaluation of whether an operation was completed. It requires analyzing more comprehensive, full-field data. Health monitoring is a process of nondestructively identifying characteristics that indicate the fitness of an engineered component. In order to monitor unit health in a production setting, an automated test system must be created to capture the motion of mechanism parts in a real-time and non-intrusive manner. One way to accomplish this is by using high-speed video (HSV) and Digital Image Correlation (DIC). In this approach, individual frames of the video are analyzed to track the motion of mechanism components. The derived performance metrics allow for state-of-health monitoring and improved fidelity of mechanism modeling. The results are in-situ state-of-health identification and performance prediction. This paper introduces basic concepts of this test method, and discusses two main themes: the use of laser marking to add fiducial patterns to mechanism components, and new software developed to track objects with complex shapes, even as they move behind obstructions. Finally, the implementation of these tests into an automated tester is discussed.
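As a simplified stand-in for the frame-by-frame tracking step, the sketch below locates a laser-marked fiducial in each high-speed video frame with normalized cross-correlation template matching; full subset-based DIC additionally recovers sub-pixel displacements and strains. The function and variable names are assumptions.

```python
import cv2
import numpy as np

def track_fiducial(frames, template):
    """Return the (x, y) location of a fiducial template in each video frame,
    using normalized cross-correlation as a simplified stand-in for subset-based DIC.
    frames: iterable of BGR images; template: grayscale patch of the laser-marked fiducial."""
    positions = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        score = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)      # best-match top-left corner
        positions.append(max_loc)
    return np.array(positions)                       # frame-by-frame motion of the component
```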
Acoustic system for material transport
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Trinh, E. H.; Wang, T. G.; Elleman, D. D.; Jacobi, N. (Inventor)
1983-01-01
An object within a chamber is acoustically moved by applying wavelengths of different modes to the chamber to move the object between pressure wells formed by the modes. In one system, the object is placed in one end of the chamber while a resonant mode, applied along the length of the chamber, produces a pressure well at that location. The frequency is then switched to a second mode that produces a pressure well at the center of the chamber, to draw the object. When the object reaches the second pressure well and is still traveling towards the second end of the chamber, the acoustic frequency is again shifted to a third mode (which may equal the first mode) that has a pressure well in the second end portion of the chamber, to draw the object. A heat source may be located near the second end of the chamber to heat the sample, and after the sample is heated it can be cooled by moving it in a corresponding manner back to the first end of the chamber. The transducers for levitating and moving the object may all be located at the cool first end of the chamber.
Visual context modulates potentiation of grasp types during semantic object categorization.
Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J
2014-06-01
Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.
Monitoring and analysis of combustion aerosol emissions from fast moving diesel trains.
Burchill, Michael J; Gramotnev, Dmitri K; Gramotnev, Galina; Davison, Brian M; Flegg, Mark B
2011-02-01
In this paper we report the results of the detailed monitoring and analysis of combustion emissions from fast moving diesel trains. A new highly efficient monitoring methodology is proposed based on the measurements of the total number concentration (TNC) of combustion aerosols at a fixed point (on a bridge overpassing the railway) inside the violently mixing zone created by a fast moving train. Applicability conditions for the proposed methodology are presented, discussed and linked to the formation of the stable and uniform mixing zone. In particular, it is demonstrated that if such a mixing zone is formed, the monitoring results are highly consistent, repeatable (with typically negligible statistical errors and dispersion), stable with respect to the external atmospheric turbulence and result in an unusual pattern of the aerosol evolution with two or three distinct TNC maximums. It is also shown that the stability and uniformity of the created mixing zone (as well as the repeatability of the monitoring results) increase with increasing length of the train (with an estimated critical train length of ~10 carriages, at the speed of ~150km/h). The analysis of the obtained evolutionary dependencies of aerosol TNC suggests that the major possible mechanisms responsible for the formation of the distinct concentration maximums are condensation (the second maximum) and thermal fragmentation of solid nanoparticle aggregates (third maximum). The obtained results and the new methodology will be important for monitoring and analysis of combustion emissions from fast moving trains, and for the determination of the impact of rail networks on the atmospheric environment and human exposure to combustion emissions. Copyright © 2010 Elsevier B.V. All rights reserved.
4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR
NASA Astrophysics Data System (ADS)
Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas
2016-04-01
The last decade has witnessed extensive applications of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods were developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal near real-time LiDAR (4D-LiDAR) for environmental monitoring. There is large potential for applying 4D-LiDAR to landscape objects with high and varying rates of change (e.g. plant growth) and also to phenomena with sudden unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation of agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the large number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires setting triggers that can detect removal or movement of the tie reflectors (used for co-registration) or the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or the scanner). Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten, Austria) and an agricultural maize crop stand (Heidelberg, Germany). This research demonstrates the potential and also limitations of fully automated, near real-time 4D LiDAR monitoring in geosciences.
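A minimal sketch of the epoch-to-epoch co-registration step, assuming the Open3D library and point-to-point ICP as a stand-in for the projects' actual workflow (file names and the correspondence distance are placeholders):

```python
import open3d as o3d

source = o3d.io.read_point_cloud("scan_day2.ply")   # hypothetical daily LiDAR scan
target = o3d.io.read_point_cloud("scan_day1.ply")   # reference epoch

# Point-to-point ICP refines the alignment of the new epoch onto the reference epoch.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.1,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)            # simple co-registration quality checks
source.transform(result.transformation)              # changes can now be computed per epoch
```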
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Bahrick, Lorraine E.
1998-01-01
Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…
ERIC Educational Resources Information Center
Young, Timothy; Guy, Mark
2011-01-01
Students have a difficult time understanding force, especially when dealing with a moving object. Many forces can be acting on an object at the same time, causing it to stay in one place or move. By directly observing these forces, students can better understand the effect these forces have on an object. With a simple, student-built device called…
Tracking Objects with Networked Scattered Directional Sensors
NASA Astrophysics Data System (ADS)
Plarre, Kurt; Kumar, P. R.
2007-12-01
We study the problem of object tracking using highly directional sensors—sensors whose field of vision is a line or a line segment. A network of such sensors monitors a certain region of the plane. Sporadically, objects moving in straight lines and at a constant speed cross the region. A sensor detects an object when it crosses its line of sight, and records the time of the detection. No distance or angle measurements are available. The task of the sensors is to estimate the directions and speeds of the objects, and the sensor lines, which are unknown a priori. This estimation problem involves the minimization of a highly nonconvex cost function. To overcome this difficulty, we introduce an algorithm, which we call the "adaptive basis algorithm." This algorithm is divided into three phases: in the first phase, the algorithm is initialized using data from six sensors and four objects; in the second phase, the estimates are updated as data from more sensors and objects are incorporated. The third phase is an optional coordinate transformation. The estimation is done in an "ad-hoc" coordinate system, which we call the "adaptive coordinate system." When more information is available, for example, the location of six sensors, the estimates can be transformed to the "real-world" coordinate system. This constitutes the third phase.
3D shape measurement of moving object with FFT-based spatial matching
NASA Astrophysics Data System (ADS)
Guo, Qinghua; Ruan, Yuxi; Xi, Jiangtao; Song, Limei; Zhu, Xinjun; Yu, Yanguang; Tong, Jun
2018-03-01
This work presents a new technique for 3D shape measurement of a moving object in translational motion, which finds applications in online inspection, quality control, etc. A low-complexity 1D fast Fourier transform (FFT)-based spatial matching approach is devised to obtain accurate object displacement estimates, and it is combined with single-shot fringe pattern profilometry (FPP) techniques to achieve high measurement performance with multiple captured images through coherent combining. The proposed technique overcomes some limitations of existing ones. Specifically, the placement of marks on the object surface and synchronization between projector and camera are not needed, the velocity of the moving object is not required to be constant, and there is no restriction on the movement trajectory. Both simulation and experimental results demonstrate the effectiveness of the proposed technique.
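The paper's matching approach is not reproduced here, but the sketch below shows the basic idea of FFT-based 1D spatial matching: the displacement between two intensity profiles is taken from the peak of their FFT-computed cross-correlation. The signal and the shift value are synthetic.

```python
import numpy as np

def estimate_shift_1d(ref, moved):
    """Estimate the integer translation between two 1-D intensity profiles
    via FFT-based cross-correlation (circular shift assumed)."""
    R = np.fft.fft(ref)
    M = np.fft.fft(moved)
    corr = np.fft.ifft(R * np.conj(M)).real
    shift = np.argmax(corr)
    if shift > len(ref) // 2:          # map the circular index to a signed displacement
        shift -= len(ref)
    return -shift

ref = np.sin(np.linspace(0, 8 * np.pi, 512))
moved = np.roll(ref, 25)               # simulated translation of the object between shots
print(estimate_shift_1d(ref, moved))   # -> 25
```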
"Up" or "down" that makes the difference. How giant honeybees (Apis dorsata) see the world.
Koeniger, Nikolaus; Kurze, Christoph; Phiancharoen, Mananya; Koeniger, Gudrun
2017-01-01
A. dorsata builds its large exposed comb high in trees or under ledges of high rocks. The "open" nest of A. dorsata, shielded (only!) by multiple layers of bees, is highly vulnerable to any kind of direct contact or close range attacks from predators. Therefore, guard bees of the outer layer of A. dorsata's nest monitor the vicinity for possible hazards and an effective risk assessment is required. Guard bees, however, are frequently exposed to different objects like leaves, twigs and other tree litter passing the nest from above and falling to the ground. Thus, downward movement of objects past the nest might be used by A. dorsata to classify these visual stimuli near the nest as "harmless". To test the effect of movement direction on defensive responses, we used circular black discs that were moved down or up in front of colonies and recorded the number of guard bees flying towards the disc. The size of the disc (diameter from 8 cm to 50 cm) had an effect on the number of guard bees responding, the bigger the plate the more bees started from the nest. The direction of a disc's movement had a dramatic effect on the attraction. We found a significantly higher number of attacks, when discs were moved upwards compared to downward movements (GLMM (estimate ± s.e.) 1.872 ± 0.149, P < 0.001). Our results demonstrate for the first time that the vertical direction of movement of an object can be important for releasing defensive behaviour. Upward movement of dark objects near the colony might be an innate releaser of attack flights. At the same time, downward movement is perceived as a "harmless" stimulus.
Error analysis of motion correction method for laser scanning of moving objects
NASA Astrophysics Data System (ADS)
Goel, S.; Lohani, B.
2014-05-01
The limitation of conventional laser scanning methods is that the objects being scanned should be static. The need to scan moving objects has resulted in the development of new methods capable of generating correct 3D geometry of moving objects. Limited literature is available showing the development of very few methods capable of catering to the problem of object motion during scanning. All the existing methods utilize their own models or sensors. Studies on error modelling or analysis of any of the motion correction methods are found to be lacking in the literature. In this paper, we develop the error budget and present the analysis of one such "motion correction" method. This method assumes availability of position and orientation information of the moving object, which in general can be obtained by installing a POS system on board or by use of some tracking devices. It then uses this information along with laser scanner data to apply a correction to the laser data, thus resulting in correct geometry despite the object being mobile during scanning. The major applications of this method lie in the shipping industry, to scan ships either moving or parked in the sea, and to scan other objects like hot air balloons or aerostats. It is to be noted that the other methods of "motion correction" explained in the literature cannot be applied to scan the objects mentioned here, making the chosen method quite unique. This paper presents some interesting insights into the functioning of the "motion correction" method as well as a detailed account of the behavior and variation of the error due to different sensor components alone and in combination with each other. The analysis can be used to obtain insights into optimal utilization of available components for achieving the best results.
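A minimal sketch of the general idea behind such a motion correction, assuming a known pose (rotation R, translation t) of the object at each return's acquisition time; this illustrates only the per-timestamp rigid-transform step, not the paper's exact sensor or error model.

```python
import numpy as np

def correct_scan(points_sensor, poses):
    """Transform each laser return from the sensor frame into the moving object's
    body frame, using the object's pose (R, t) at that return's acquisition time,
    so that the reconstructed geometry is free of the object's motion.
    points_sensor: iterable of 3-vectors; poses: iterable of (R, t) with p_world = R @ p_body + t."""
    corrected = []
    for p, (R, t) in zip(points_sensor, poses):
        corrected.append(R.T @ (p - t))     # inverse rigid transform per time stamp
    return np.array(corrected)
```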
Rings of earth. [orbiting bands of space debris
NASA Technical Reports Server (NTRS)
Goldstein, Richard M.; Randolph, L. W.
1992-01-01
Small particles moving at an orbital velocity of 7.6 kilometers per second can present a considerable hazard to human activity in space. For astronauts outside of the protective shielding of their space vehicles, such particles can be lethal. The powerful radar at NASA's Goldstone Deep Space Communications Complex was used to monitor such orbital debris. This radar can detect metallic objects as small as 1.8 mm in diameter at 600 km altitude. The results of the preliminary survey show a flux (at 600 km altitude) of 6.4 objects per square kilometer per day of equivalent size 1.8 mm or larger. Forty percent of the observed particles appear to be concentrated into two orbits. An orbital ring with the same inclination as the radar (35.1 degrees) is suggested. However, an orbital band with a much higher inclination (66 degrees) is also a possibility.
Some characteristics of optokinetic eye-movement patterns : a comparative study.
DOT National Transportation Integrated Search
1970-07-01
Long-associated with transportation ('railroad nystagmus'), optokinetic (OPK) nystagmus is an eye-movement reaction which occurs when a series of moving objects crosses the visual field or when an observer moves past a series of objects. Similar cont...
A-Track: A new approach for detection of moving objects in FITS images
NASA Astrophysics Data System (ADS)
Atay, T.; Kaplan, M.; Kilic, Y.; Karapinar, N.
2016-10-01
We have developed a fast, open-source, cross-platform pipeline, called A-Track, for detecting the moving objects (asteroids and comets) in sequential telescope images in FITS format. The pipeline is coded in Python 3. The moving objects are detected using a modified line detection algorithm, called MILD. We tested the pipeline on astronomical data acquired by an SI-1100 CCD with a 1-meter telescope. We found that A-Track performs very well in terms of detection efficiency, stability, and processing time. The code is hosted on GitHub under the GNU GPL v3 license.
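A-Track's MILD line-detection algorithm is not reproduced here; the sketch below only illustrates the general idea of flagging detections in sequential FITS frames that do not reappear at the same position (so stationary stars are rejected). File names and thresholds are assumptions.

```python
import numpy as np
from astropy.io import fits

def bright_pixels(path, nsigma=5.0):
    """Return (y, x) coordinates of pixels well above the frame background."""
    data = fits.getdata(path).astype(float)
    bkg, sigma = np.median(data), np.std(data)
    return np.argwhere(data > bkg + nsigma * sigma)

def is_transient(p, other_frames, radius=2.0):
    """A detection is a mover candidate if nothing lies near the same position in the other frames."""
    return all(np.min(np.hypot(*(o - p).T)) > radius for o in other_frames if len(o))

frames = ["img1.fits", "img2.fits", "img3.fits"]       # hypothetical sequential exposures
dets = [bright_pixels(f) for f in frames]
movers = [p for p in dets[1] if is_transient(p, [dets[0], dets[2]])]
```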
Effects of sport expertise on representational momentum during timing control.
Nakamoto, Hiroki; Mori, Shiro; Ikudome, Sachi; Unenaka, Satoshi; Imanaka, Kuniyasu
2015-04-01
Sports involving fast visual perception require players to compensate for delays in neural processing of visual information. Memory for the final position of a moving object is distorted forward along its path of motion (i.e., "representational momentum," RM). This cognitive extrapolation of visual perception might compensate for the neural delay in interacting appropriately with a moving object. The present study examined whether experienced batters cognitively extrapolate the location of a fast-moving object and whether this extrapolation is associated with coincident timing control. Nine expert and nine novice baseball players performed a prediction motion task in which a target moved from one end of a straight 400-cm track at a constant velocity. In half of the trials, vision was suddenly occluded when the target reached the 200-cm point (occlusion condition). Participants had to press a button concurrently with the target arrival at the end of the track and verbally report their subjective assessment of the first target-occluded position. Experts showed larger RM magnitude (cognitive extrapolation) than did novices in the occlusion condition. RM magnitude and timing errors were strongly correlated in the fast velocity condition in both experts and novices, whereas in the slow velocity condition, a significant correlation appeared only in experts. This suggests that experts can cognitively extrapolate the location of a moving object according to their anticipation and, as a result, potentially circumvent neural processing delays. This process might be used to control response timing when interacting with moving objects.
Static latching arrangement and method
Morrison, Larry
1988-01-01
A latching assembly for use in latching a cable to and unlatching it from a given object in order to move an object from one location to another is disclosed herein. This assembly includes a weighted sphere mounted to one end of a cable so as to rotate about a specific diameter of the sphere. The assembly also includes a static latch adapted for connection with the object to be moved. This latch includes an internal latching cavity for containing the sphere in a latching condition and a series of surfaces and openings which cooperate with the sphere in order to move the sphere into and out of the latching cavity and thereby connect the cable to and disconnect it from the latch without using any moving parts on the latch itself.
Inattentional blindness is influenced by exposure time not motion speed.
Kreitz, Carina; Furley, Philip; Memmert, Daniel
2016-01-01
Inattentional blindness is a striking phenomenon in which a salient object within the visual field goes unnoticed because it is unexpected, and attention is focused elsewhere. Several attributes of the unexpected object, such as size and animacy, have been shown to influence the probability of inattentional blindness. At present it is unclear whether or how the speed of a moving unexpected object influences inattentional blindness. We demonstrated that inattentional blindness rates are considerably lower if the unexpected object moves more slowly, suggesting that it is the mere exposure time of the object rather than a higher saliency potentially induced by higher speed that determines the likelihood of its detection. Alternative explanations could be ruled out: The effect is not based on a pop-out effect arising from different motion speeds in relation to the primary-task stimuli (Experiment 2), nor is it based on a higher saliency of slow-moving unexpected objects (Experiment 3).
Upside-down: Perceived space affects object-based attention.
Papenmeier, Frank; Meyerhoff, Hauke S; Brockhoff, Alisa; Jahn, Georg; Huff, Markus
2017-07-01
Object-based attention influences the subjective metrics of surrounding space. However, does perceived space influence object-based attention, as well? We used an attentive tracking task that required sustained object-based attention while objects moved within a tracking space. We manipulated perceived space through the availability of depth cues and varied the orientation of the tracking space. When rich depth cues were available (appearance of a voluminous tracking space), the upside-down orientation of the tracking space (objects appeared to move high on a ceiling) caused a pronounced impairment of tracking performance compared with an upright orientation of the tracking space (objects appeared to move on a floor plane). In contrast, this was not the case when reduced depth cues were available (appearance of a flat tracking space). With a preregistered second experiment, we showed that those effects were driven by scene-based depth cues and not object-based depth cues. We conclude that perceived space affects object-based attention and that object-based attention and perceived space are closely interlinked. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
ERIC Educational Resources Information Center
Ferrara, Katrina; Hoffman, James E.; O'Hearn, Kirsten; Landau, Barbara
2016-01-01
The ability to track moving objects is a crucial skill for performance in everyday spatial tasks. The tracking mechanism depends on representation of moving items as coherent entities, which follow the spatiotemporal constraints of objects in the world. In the present experiment, participants tracked 1 to 4 targets in a display of 8 identical…
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A binocular vision imaging system has a small field of view and cannot reconstruct the 3-D shape of a dynamic object. We built a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Compared with a conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view and can reconstruct the 3-D morphology of objects in continuous motion with accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction steps of the imaging system. The system consists of two linear array cameras, placed in a special arrangement, and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, and the results are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects; this work is of great significance for measuring the 3-D morphology of moving objects.
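As an illustration of the reconstruction step, the sketch below triangulates matched image points from two calibrated views with OpenCV; for simplicity it models the cameras with ordinary 3x4 pinhole projection matrices, whereas a real linear array (line-scan) CCD system requires its own calibration model. All matrices and point coordinates are placeholders.

```python
import cv2
import numpy as np

# P1, P2: 3x4 projection matrices from the calibration step (placeholder pinhole models);
# pts1, pts2: matched image points in the two views, shaped 2xN.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(float)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])]).astype(float)
pts1 = np.array([[100.0, 150.0], [120.0, 160.0]]).T
pts2 = np.array([[ 98.0, 150.0], [118.0, 160.0]]).T

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN result
pts3d = (pts4d[:3] / pts4d[3]).T                    # Euclidean 3-D surface points
```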
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Currently, analog microscopes are widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture and others. An automatic method is preferred because it greatly reduces the work involved. Stepper motors are used to move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, with images of microbiological objects transferred from the eyepiece of the microscope to the computer screen. Scene analysis allows regions with pronounced abnormalities to be located in order to focus the specialist's attention. This paper considers a method for stitching microbial images obtained with a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object searching is based on the analysis of the data located in the camera's field of view. We propose to use a neural network for boundary searching. The stitching boundary of the image is determined from the analysis of the borders of the objects. For auto-focus, we use the criterion of the minimum thickness of the object boundary lines. The analysis is performed on the object located on the focal axis of the camera. We use a method of object border recovery and a projective transform for the boundaries of objects that are shifted relative to the focal axis. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
Track-Before-Detect Algorithm for Faint Moving Objects based on Random Sampling and Consensus
NASA Astrophysics Data System (ADS)
Dao, P.; Rast, R.; Schlaegel, W.; Schmidt, V.; Dentamaro, A.
2014-09-01
There are many algorithms developed for tracking and detecting faint moving objects in congested backgrounds. One obvious application is detection of targets in images where each pixel corresponds to the received power in a particular location. In our application, a visible imager operated in stare mode observes geostationary objects as fixed, stars as moving and non-geostationary objects as drifting in the field of view. We would like to achieve high-sensitivity detection of the drifters. The ability to improve SNR with track-before-detect (TBD) processing, where target information is collected and collated before the detection decision is made, allows respectable performance against dim moving objects. Generally, a TBD algorithm consists of a pre-processing stage that highlights potential targets and a temporal filtering stage. However, the algorithms that have been successfully demonstrated, e.g. Viterbi-based and Bayesian-based, demand formidable processing power and memory. We propose an algorithm that exploits the quasi-constant velocity of objects, the predictability of the stellar clutter and the intrinsically low false alarm rate of detecting signature candidates in 3-D, based on an iterative method called "RANdom SAmple Consensus" (RANSAC), and one that can run in real time on a typical PC. The technique is tailored for searching for objects with small telescopes in stare mode. Our RANSAC-MT (Moving Target) algorithm estimates the parameters of a mathematical model (e.g., linear motion) from a set of observed data which contains a significant number of outliers, while identifying inliers. In the pre-processing phase, candidate blobs are selected based on morphology and an intensity threshold that would normally generate an unacceptable level of false alarms. The RANSAC sampling rejects candidates that conform to the predictable motion of the stars. Data collected with a 17-inch telescope by AFRL/RH and with a COTS lens/EM-CCD sensor by the AFRL/RD Satellite Assessment Center are used to assess the performance of the algorithm. In the second application, a visible imager operated in sidereal mode observes geostationary objects as moving, stars as fixed except for field rotation, and non-geostationary objects as drifting. RANSAC-MT is used to detect the drifter. In this set of data, the drifting space object was detected at a distance of 13,800 km. The AFRL/RH set of data, collected in the stare mode, contained the signature of two geostationary satellites. The signature of a moving object was simulated and added to the sequence of frames to determine the sensitivity in magnitude. The performance compares well with the more intensive TBD algorithms reported in the literature.
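A bare-bones sketch of the consensus step, assuming detections given as (t, x, y) triples and a constant-velocity model; the published RANSAC-MT pipeline adds morphology-based pre-processing and stellar-clutter rejection that are omitted here.

```python
import numpy as np

def ransac_track(detections, n_iter=500, tol=1.5, min_inliers=6, rng=None):
    """Fit a constant-velocity track p(t) = p1 + v*(t - t1) to (t, x, y) detections,
    keeping the hypothesis with the most inliers (a RANSAC-style consensus step)."""
    rng = rng or np.random.default_rng(0)
    det = np.asarray(detections, dtype=float)
    best = (None, 0)
    for _ in range(n_iter):
        i, j = rng.choice(len(det), size=2, replace=False)
        (t1, *p1), (t2, *p2) = det[i], det[j]
        if t1 == t2:
            continue
        v = (np.array(p2) - np.array(p1)) / (t2 - t1)            # hypothesized velocity
        pred = np.array(p1) + np.outer(det[:, 0] - t1, v)        # predicted positions
        resid = np.hypot(*(det[:, 1:] - pred).T)
        inliers = int((resid < tol).sum())
        if inliers > best[1]:
            best = ((np.array(p1), v, t1), inliers)
    return best if best[1] >= min_inliers else (None, 0)
```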
Calibration and validation of wearable monitors.
Bassett, David R; Rowlands, Alex; Trost, Stewart G
2012-01-01
Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices also may benefit from knowing about this topic. Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research that will move the field of physical activity measurement forward.
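As a toy example of the single-regression calibration approach mentioned above, the sketch below fits a linear mapping from accelerometer counts to energy expenditure; all numbers are invented for illustration, and pattern-recognition calibrations would replace this single regression with activity-specific models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic calibration data: accelerometer counts per minute vs. measured METs
# from indirect calorimetry (all values invented for illustration only).
counts = np.array([[50], [300], [900], [1800], [3200], [5200], [7400]])
mets = np.array([1.2, 1.6, 2.4, 3.5, 5.0, 7.1, 9.0])

calib = LinearRegression().fit(counts, mets)     # single-regression calibration
print(calib.predict([[2500]]))                   # estimated METs for a new wearer
```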
Detection of dominant flow and abnormal events in surveillance video
NASA Astrophysics Data System (ADS)
Kwak, Sooyeong; Byun, Hyeran
2011-02-01
We propose an algorithm for abnormal event detection in surveillance video. The proposed algorithm is based on a semi-unsupervised learning method, a kind of feature-based approach, so it does not detect moving objects individually. The proposed algorithm identifies dominant flow in crowded environments without individual object tracking, using a latent Dirichlet allocation model. It can also automatically detect and localize an abnormally moving object in real-life video. The performance tests are carried out on several real-life databases, and their results show that the proposed algorithm can efficiently detect abnormally moving objects in real time. The proposed algorithm can be applied to any situation in which abnormal directions or abnormal speeds are to be detected, regardless of direction.
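A hedged sketch of how a latent Dirichlet allocation model can be applied to motion features is shown below, using scikit-learn on synthetic bag-of-motion-word counts; the anomaly scoring rule (low maximum topic weight) is one simple choice, not necessarily the paper's criterion.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Each row is a video clip described as a bag of quantized motion words
# (e.g. binned optical-flow location/direction codes); counts here are synthetic.
rng = np.random.default_rng(1)
motion_word_counts = rng.poisson(2.0, size=(200, 50))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
clip_topics = lda.fit_transform(motion_word_counts)   # dominant flows as latent topics

# Clips whose motion is not well explained by any dominant flow (low maximum
# topic weight) are flagged as candidate abnormal events.
max_topic = clip_topics.max(axis=1)
anomalies = np.argsort(max_topic)[:5]
```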
Liu, Xiaobo
2015-07-01
The U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES) is required by the EPA to replace Mobile 6 as an official on-road emission model. Incorporating annual vehicle miles traveled (VMT) by Highway Performance Monitoring System (HPMS) vehicle class, MOVES allocates VMT from HPMS to MOVES source (vehicle) types and calculates the emission burden by MOVES source type. However, the calculated running emission burden by MOVES source type may deviate from the actual emission burden because of MOVES source population, specifically the population fraction by MOVES source type in each HPMS vehicle class. The deviation is also the result of the use of the universal set of parameters, i.e., the relative mileage accumulation rate (relativeMAR), packaged in the MOVES default database. This paper presents a novel approach that adjusts the relativeMAR to eliminate the impact of MOVES source population on running exhaust emissions and to keep start and evaporative emissions unchanged for both MOVES2010b and MOVES2014. Results from MOVES runs using this approach indicated significant improvements in VMT distribution and emission burden estimation for each MOVES source type. The deviation of VMT by MOVES source type is reduced by this approach from 12% to less than 0.05% for MOVES2010b and from 50% to less than 0.2% for MOVES2014, except for MOVES source type 53, which still shows about 30% deviation. The improvement in VMT distribution results in the elimination of the emission burden deviation for each MOVES source type. For MOVES2010b, the deviation of emission burdens decreases from -12% for particulate matter less than 2.5 μm (PM2.5) and -9% for carbon monoxide (CO) to less than 0.002%. For MOVES2014, it drops from 80% for CO and 97% for PM2.5 to 0.006%. This approach was developed to more accurately estimate total emission burdens using EPA's MOVES, both MOVES2010b and MOVES2014, by redistributing VMT by HPMS class to MOVES source type on the basis of a comprehensive traffic study of local link-by-link VMT broken down into MOVES source type.
Optimizing a neural network for detection of moving vehicles in video
NASA Astrophysics Data System (ADS)
Fischer, Noëlle M.; Kruithof, Maarten C.; Bouma, Henri
2017-10-01
In the field of security and defense, it is extremely important to reliably detect moving objects, such as cars, ships, drones and missiles. Detection and analysis of moving objects in cameras near borders could be helpful to reduce illicit trading, drug trafficking, irregular border crossing, trafficking in human beings and smuggling. Many recent benchmarks have shown that convolutional neural networks are performing well in the detection of objects in images. Most deep-learning research effort focuses on classification or detection on single images. However, the detection of dynamic changes (e.g., moving objects, actions and events) in streaming video is extremely relevant for surveillance and forensic applications. In this paper, we combine an end-to-end feedforward neural network for static detection with a recurrent Long Short-Term Memory (LSTM) network for multi-frame analysis. We present a practical guide with special attention to the selection of the optimizer and batch size. The end-to-end network is able to localize and recognize the vehicles in video from traffic cameras. We show an efficient way to collect relevant in-domain data for training with minimal manual labor. Our results show that the combination with LSTM improves performance for the detection of moving vehicles.
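A minimal sketch of combining a per-frame CNN with an LSTM over short clips is given below in Keras; the clip length, image size, and layer sizes are assumptions and do not reproduce the authors' network.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

frames = layers.Input(shape=(8, 64, 64, 3))          # 8-frame clips (hypothetical size)

# Small per-frame CNN producing a static feature vector for each frame.
cnn = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

x = layers.TimeDistributed(cnn)(frames)              # per-frame static features
x = layers.LSTM(64)(x)                               # multi-frame temporal analysis
out = layers.Dense(1, activation="sigmoid")(x)       # moving vehicle present / absent

model = models.Model(frames, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="binary_crossentropy")
```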
75 FR 8036 - Monitor-Hot Creek Rangeland Project
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-23
... DEPARTMENT OF AGRICULTURE Forest Service Monitor-Hot Creek Rangeland Project AGENCY: Forest... Rangeland Project area. The analysis will determine if a change in management direction for livestock grazing is needed to move existing resource conditions within the Monitor-Hot Creek Rangeland Project area...
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Horemans, Henricus L. D.; Vegter, Riemer J. K.; de Groot, Sonja; Bussmann, Johannes B. J.; van der Woude, Lucas H. V.
2018-01-01
Background A hypoactive lifestyle contributes to the development of secondary complications and lower quality of life in wheelchair users. There is a need for objective and user-friendly physical activity monitors for wheelchair-dependent individuals in order to increase physical activity through self-monitoring, goal setting, and feedback provision. Objective To determine the validity of Activ8 Activity Monitors to 1) distinguish two classes of activities: independent wheelchair propulsion versus other non-propulsive wheelchair-related activities; and 2) distinguish five wheelchair-related classes of activities differing by movement intensity level: sitting in a wheelchair (hands may be moving but the wheelchair remains stationary), maneuvering, and normal, high-speed or assisted wheelchair propulsion. Methods Sixteen able-bodied individuals performed sixteen various standardized 60-s activities of daily living. Each participant was equipped with a set of two Activ8 Professional Activity Monitors, one at the right forearm and one at the right wheel. Task classification by the Activ8 Monitors was validated using video recordings. For the overall agreement, sensitivity and positive predictive value, outcomes above 90% are considered excellent, between 70 and 90% good, and below 70% unsatisfactory. Results Division into two classes resulted in overall agreement of 82.1%, sensitivity of 77.7% and positive predictive value of 78.2%. 84.5% of the total duration of all tasks was classified identically by the Activ8 and the video material. Division into five classes resulted in overall agreement of 56.6%, sensitivity of 52.8% and positive predictive value of 51.9%. 59.8% of the total duration of all tasks was classified identically by the Activ8 and the video material. Conclusions The Activ8 system proved to be suitable for distinguishing between active wheelchair propulsion and other non-propulsive wheelchair-related activities. The ability of the current system and algorithms to distinguish five different wheelchair-related activities is unsatisfactory. PMID:29641582
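For reference, the agreement statistics reported above can be computed from per-sample labels as in the sketch below; the label vectors are synthetic and only the metric definitions (overall agreement, sensitivity, positive predictive value) follow the paper.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Per-second activity labels: 1 = independent propulsion, 0 = other wheelchair activity.
video_labels  = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])   # reference (video annotation)
activ8_labels = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])   # monitor output (synthetic)

overall_agreement = accuracy_score(video_labels, activ8_labels)   # cf. 82.1% in the study
sensitivity = recall_score(video_labels, activ8_labels)           # cf. 77.7%
ppv = precision_score(video_labels, activ8_labels)                # cf. 78.2%
print(overall_agreement, sensitivity, ppv)
```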
Multifrequency observations of the BL Lacertae object PKS 0537 - 441
NASA Technical Reports Server (NTRS)
Maraschi, L.; Treves, A.; Schwartz, D. A.; Tanzi, E. G.
1985-01-01
PKS 0537 - 441 was repeatedly observed in the UV band with the International Ultraviolet Explorer and in the X-ray with the Einstein Observatory. On September 27, 1980, simultaneous observations in the two bands were obtained. Near-infrared photometry preceding and following the simultaneous observations by about one month is available from the literature, as is radio monitoring at 408 and 5000 MHz. Comparison of the observed X-ray flux with that predicted by the standard synchrotron self-Compton formalism, with a source dimension deduced from radio variability at 5 GHz, indicates that this component of the radio emission must be moving at relativistic speed with an effective projected Doppler beaming factor of about 10.
Humans and Cattle: A Review of Bovine Zoonoses
Cardwell, Diana M.; Moeller, Robert B.; Gray, Gregory C.
2014-01-01
Abstract Infectious disease prevention and control has been among the top public health objectives during the last century. However, controlling disease due to pathogens that move between animals and humans has been challenging. Such zoonotic pathogens have been responsible for the majority of new human disease threats and a number of recent international epidemics. Currently, our surveillance systems often lack the ability to monitor the human–animal interface for emergent pathogens. Identifying and ultimately addressing emergent cross-species infections will require a “One Health” approach in which resources from public veterinary, environmental, and human health function as part of an integrative system. Here we review the epidemiology of bovine zoonoses from a public health perspective. PMID:24341911
ERIC Educational Resources Information Center
Saneyoshi, Ayako; Michimata, Chikashi
2009-01-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to…
Enhanced data validation strategy of air quality monitoring network.
Harkat, Mohamed-Faouzi; Mansouri, Majdi; Nounou, Mohamed; Nounou, Hazem
2018-01-01
Quick validation and detection of faults in measured air quality data is a crucial step towards achieving the objectives of air quality networks. Therefore, the objectives of this paper are threefold: (i) to develop a modeling technique that can be used to predict the normal behavior of air quality variables and help provide accurate reference for monitoring purposes; (ii) to develop fault detection method that can effectively and quickly detect any anomalies in measured air quality data. For this purpose, a new fault detection method that is based on the combination of generalized likelihood ratio test (GLRT) and exponentially weighted moving average (EWMA) will be developed. GLRT is a well-known statistical fault detection method that relies on maximizing the detection probability for a given false alarm rate. In this paper, we propose to develop GLRT-based EWMA fault detection method that will be able to detect the changes in the values of certain air quality variables; (iii) to develop fault isolation and identification method that allows defining the fault source(s) in order to properly apply appropriate corrective actions. In this paper, reconstruction approach that is based on Midpoint-Radii Principal Component Analysis (MRPCA) model will be developed to handle the types of data and models associated with air quality monitoring networks. All air quality modeling, fault detection, fault isolation and reconstruction methods developed in this paper will be validated using real air quality data (such as particulate matter, ozone, nitrogen and carbon oxides measurement). Copyright © 2017 Elsevier Inc. All rights reserved.
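A minimal sketch of the EWMA charting component, applied to standardized model residuals; the smoothing constant, control limit, and simulated drift are assumptions, and the GLRT and MRPCA parts of the proposed method are not shown.

```python
import numpy as np

def ewma_monitor(x, lam=0.2, L=3.0):
    """Flag samples whose EWMA statistic leaves the +/- L*sigma_z control limits.
    x is assumed to be standardized residuals from the fault-free air quality model."""
    z, flags = 0.0, []
    sigma_z = np.sqrt(lam / (2.0 - lam))       # asymptotic EWMA standard deviation
    for xt in np.asarray(x, dtype=float):
        z = lam * xt + (1.0 - lam) * z
        flags.append(abs(z) > L * sigma_z)
    return np.array(flags)

rng = np.random.default_rng(0)
ozone_residuals = rng.normal(0, 1, 200)
ozone_residuals[120:] += 1.5                   # simulated sensor drift / fault
print(np.argmax(ewma_monitor(ozone_residuals)))  # index of the first flagged sample
```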
Objective scoring of animal handling and stunning practices at slaughter plants.
Grandin, T
1998-01-01
To develop objective methods for monitoring animal welfare at slaughter plants to ensure compliance with the Humane Methods of Slaughter Act. Survey of existing procedures. 24 federally inspected slaughter plants. 6 variables evaluated at each plant were stunning efficacy, insensibility of animals hanging on the bleeding rail, vocalization, electric prod use, number of animals slipping, and number of animals falling. Of 11 beef plants, only 4 were able to render 95% of cattle insensible with a single shot from a captive-bolt stunner. Personnel at 7 of 11 plants placed the stunning wand correctly on 99% or more of pigs and sheep. At 4 beef plants, percentage of cattle prodded with an electric prod ranged from 5% at a plant at which handlers only prodded cattle that refused to move to 90% at another plant. Use of electric prods at 6 pork plants scored for prod use ranged from 15 to almost 100% of pigs. Percentage of cattle that vocalized during stunning and handling ranged from 1.1% at a plant at which electric prods were only used on cattle that refused to move to 32% at another plant at which electric prods were used on 90% of cattle and a restraint device was inappropriately used to apply excessive pressure. To obtain the most accurate assessment of animal welfare at slaughter plants, it is important to score all of the aforementioned variables.
Interaction of compass sensing and object-motion detection in the locust central complex.
Bockhorst, Tobias; Homberg, Uwe
2017-07-01
Goal-directed behavior is often complicated by unpredictable events, such as the appearance of a predator during directed locomotion. This situation requires adaptive responses like evasive maneuvers followed by subsequent reorientation and course correction. Here we study the possible neural underpinnings of such a situation in an insect, the desert locust. As in other insects, its sense of spatial orientation strongly relies on the central complex, a group of midline brain neuropils. The central complex houses sky compass cells that signal the polarization plane of skylight and thus indicate the animal's steering direction relative to the sun. Most of these cells additionally respond to small moving objects that drive fast sensory-motor circuits for escape. Here we investigate how the presentation of a moving object influences activity of the neurons during compass signaling. Cells responded in one of two ways: in some neurons, responses to the moving object were simply added to the compass response that had adapted during continuous stimulation by stationary polarized light. By contrast, other neurons disadapted, i.e., regained their full compass response to polarized light, when a moving object was presented. We propose that the latter case could help to prepare for reorientation of the animal after escape. A neuronal network based on central-complex architecture can explain both responses by slight changes in the dynamics and amplitudes of adaptation to polarized light in CL columnar input neurons of the system. NEW & NOTEWORTHY Neurons of the central complex in several insects signal compass directions through sensitivity to the sky polarization pattern. In locusts, these neurons also respond to moving objects. We show here that during polarized-light presentation, responses to moving objects override their compass signaling or restore adapted inhibitory as well as excitatory compass responses. A network model is presented to explain the variations of these responses that likely serve to redirect flight or walking following evasive maneuvers. Copyright © 2017 the American Physiological Society.
Saetta, Gianluca; Grond, Ilva; Brugger, Peter; Lenggenhager, Bigna; Tsay, Anthony J; Giummarra, Melita J
2018-03-21
Phantom limbs are the phenomenal persistence of postural and sensorimotor features of an amputated limb. Although immaterial, their characteristics can be modulated by the presence of physical matter. For instance, the phantom may disappear when its phenomenal space is invaded by objects ("obstacle shunning"). Alternatively, "obstacle tolerance" occurs when the phantom is not limited by the law of impenetrability and co-exists with physical objects. Here we examined the link between this under-investigated aspect of phantom limbs and apparent motion perception. The illusion of apparent motion of human limbs involves the perception that a limb moves through or around an object, depending on the stimulus onset asynchrony (SOA) for the two images. Participants included 12 unilateral lower limb amputees matched for obstacle shunning (n = 6) and obstacle tolerance (n = 6) experiences, and 14 non-amputees. Using multilevel linear models, we replicated robust biases for short perceived trajectories for short SOA (moving through the object), and long trajectories (circumventing the object) for long SOAs in both groups. Importantly, however, amputees with obstacle shunning perceived leg stimuli to predominantly move through the object, whereas amputees with obstacle tolerance perceived leg stimuli to predominantly move around the object. That is, in people who experience obstacle shunning, apparent motion perception of lower limbs was not constrained to the laws of impenetrability (as the phantom disappears when invaded by objects), and legs can therefore move through physical objects. Amputees who experience obstacle tolerance, however, had stronger solidity constraints for lower limb apparent motion, perhaps because they must avoid co-location of the phantom with physical objects. Phantom limb experience does, therefore, appear to be modulated by intuitive physics, but not in the same way for everyone. This may have important implications for limb experience post-amputation (e.g., improving prosthesis embodiment when limb representation is constrained by the same limits as an intact limb). Copyright © 2018 Elsevier Ltd. All rights reserved.
What are the underlying units of perceived animacy? Chasing detection is intrinsically object-based.
van Buren, Benjamin; Gao, Tao; Scholl, Brian J
2017-10-01
One of the most foundational questions that can be asked about any visual process is the nature of the underlying 'units' over which it operates (e.g., features, objects, or spatial regions). Here we address this question-for the first time, to our knowledge-in the context of the perception of animacy. Even simple geometric shapes appear animate when they move in certain ways. Do such percepts arise whenever any visual feature moves appropriately, or do they require that the relevant features first be individuated as discrete objects? Observers viewed displays in which one disc (the "wolf") chased another (the "sheep") among several moving distractor discs. Critically, two pairs of discs were also connected by visible lines. In the Unconnected condition, both lines connected pairs of distractors; but in the Connected condition, one connected the wolf to a distractor, and the other connected the sheep to a different distractor. Observers in the Connected condition were much less likely to describe such displays using mental state terms. Furthermore, signal detection analyses were used to explore the objective ability to discriminate chasing displays from inanimate control displays in which the wolf moved toward the sheep's mirror-image. Chasing detection was severely impaired on Connected trials: observers could readily detect an object chasing another object, but not a line-end chasing another line-end, a line-end chasing an object, or an object chasing a line-end. We conclude that the underlying units of perceived animacy are discrete visual objects.
Moving object detection in top-view aerial videos improved by image stacking
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Krüger, Wolfgang; Beyerer, Jürgen
2017-08-01
Image stacking is a well-known method that is used to improve the quality of images in video data. A set of consecutive images is aligned by applying image registration and warping. In the resulting image stack, each pixel has redundant information about its intensity value. This redundant information can be used to suppress image noise, resharpen blurry images, or even enhance the spatial image resolution as done in super-resolution. Small moving objects in the videos usually get blurred or distorted by image stacking and thus need to be handled explicitly. We use image stacking in an innovative way: image registration is applied to small moving objects only, and image warping blurs the stationary background that surrounds the moving objects. Our video data come from a small fixed-wing unmanned aerial vehicle (UAV) that acquires top-view gray-value images of urban scenes. Moving objects are mainly cars but also other vehicles such as motorcycles. The resulting images, after applying our proposed image stacking approach, are used to improve baseline algorithms for vehicle detection and segmentation. We improve precision and recall by up to 0.011, which corresponds to a reduction of the number of false positive and false negative detections by more than 3 per second. Furthermore, we show how our proposed image stacking approach can be implemented efficiently.
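For readers who want to experiment with the general idea, the following is a minimal Python/OpenCV sketch of background-aligned stacking (align frames to a reference, then median-stack); it illustrates conventional stacking rather than the paper's object-centered variant, and all function names, parameters, and thresholds are illustrative assumptions.

```python
import cv2
import numpy as np

def align_to_reference(ref, frame, orb=None):
    """Estimate a similarity transform from `frame` to `ref` via ORB matches."""
    orb = orb or cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref, None)
    k2, d2 = orb.detectAndCompute(frame, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches])
    dst = np.float32([k1[m.trainIdx].pt for m in matches])
    warp, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(frame, warp, (ref.shape[1], ref.shape[0]))

def stack(frames):
    """Median-stack gray-value frames after aligning them to the first frame.
    The median suppresses noise; small moving objects get blurred away, which
    is exactly why the paper registers on the objects themselves instead."""
    ref = frames[0]
    aligned = [ref] + [align_to_reference(ref, f) for f in frames[1:]]
    return np.median(np.stack(aligned).astype(np.float32), axis=0).astype(np.uint8)
```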
Camouflage, detection and identification of moving targets
Hall, Joanna R.; Cuthill, Innes C.; Baddeley, Roland; Shohet, Adam J.; Scott-Samuel, Nicholas E.
2013-01-01
Nearly all research on camouflage has investigated its effectiveness for concealing stationary objects. However, animals have to move, and patterns that only work when the subject is static will heavily constrain behaviour. We investigated the effects of different camouflages on the three stages of predation—detection, identification and capture—in a computer-based task with humans. An initial experiment tested seven camouflage strategies on static stimuli. In line with previous literature, background-matching and disruptive patterns were found to be most successful. Experiment 2 showed that if stimuli move, an isolated moving object on a stationary background cannot avoid detection or capture regardless of the type of camouflage. Experiment 3 used an identification task and showed that while camouflage is unable to slow detection or capture, camouflaged targets are harder to identify than uncamouflaged targets when similar background objects are present. The specific details of the camouflage patterns have little impact on this effect. If one has to move, camouflage cannot impede detection; but if one is surrounded by similar targets (e.g. other animals in a herd, or moving background distractors), then camouflage can slow identification. Despite previous assumptions, motion does not entirely ‘break’ camouflage. PMID:23486439
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
1993-01-01
A Linear Motion Encoding device for measuring the linear motion of a moving object is disclosed in which a light source is mounted on the moving object and a position sensitive detector such as an array photodetector is mounted on a nearby stationary object. The light source emits a light beam directed towards the array photodetector such that a light spot is created on the array. An analog-to-digital converter, connected to the array photodetector is used for reading the position of the spot on the array photodetector. A microprocessor and memory is connected to the analog-to-digital converter to hold and manipulate data provided by the analog-to-digital converter on the position of the spot and to compute the linear displacement of the moving object based upon the data from the analog-to-digital converter.
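The readout described above (a light spot on an array photodetector digitized by an ADC and converted to displacement) can be illustrated with a short, hedged Python sketch; the centroid approach, the pixel-pitch value, and the function names are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def spot_centroid(adc_counts):
    """Centroid (in pixels) of the light spot along the photodetector array."""
    counts = np.asarray(adc_counts, dtype=float)
    counts -= counts.min()                 # crude dark-level removal
    idx = np.arange(counts.size)
    return float((idx * counts).sum() / counts.sum())

def displacement_mm(adc_counts, ref_centroid_px, pixel_pitch_mm=0.025):
    """Linear displacement of the moving object relative to a reference reading.
    pixel_pitch_mm is the detector element spacing (illustrative value only)."""
    return (spot_centroid(adc_counts) - ref_centroid_px) * pixel_pitch_mm
```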
Image analysis of multiple moving wood pieces in real time
NASA Astrophysics Data System (ADS)
Wang, Weixing
2006-02-01
This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for automatic detection of wood piece materials on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace their contours in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
How Many Objects are You Worth? Quantification of the Self-Motion Load on Multiple Object Tracking
Thomas, Laura E.; Seiffert, Adriane E.
2011-01-01
Perhaps walking and chewing gum is effortless, but walking and tracking moving objects is not. Multiple object tracking is impaired by walking from one location to another, suggesting that updating location of the self puts demands on object tracking processes. Here, we quantified the cost of self-motion in terms of the tracking load. Participants in a virtual environment tracked a variable number of targets (1–5) among distractors while either staying in one place or moving along a path that was similar to the objects’ motion. At the end of each trial, participants decided whether a probed dot was a target or distractor. As in our previous work, self-motion significantly impaired performance in tracking multiple targets. Quantifying tracking capacity for each individual under move versus stay conditions further revealed that self-motion during tracking produced a cost to capacity of about 0.8 (±0.2) objects. Tracking your own motion is worth about one object, suggesting that updating the location of the self is similar, but perhaps slightly easier, than updating locations of objects. PMID:21991259
ERIC Educational Resources Information Center
Preston, Christine
2018-01-01
If you think physics is only for older children, think again. Much of the playtime of young children is filled with exploring--and wondering about and informally investigating--the way objects, especially toys, move. How forces affect objects, including changes in position, motion, and shape, is fundamental to the big ideas in physics. This…
ERIC Educational Resources Information Center
Trundle, Kathy Cabe; Smith, Mandy McCormick
2011-01-01
Some of children's earliest explorations focus on movement of their own bodies. Quickly, children learn to further explore movement by using objects like a ball or car. They recognize that a ball moves differently than a pushed block. As they grow, children enjoy their experiences with motion and movement, including making objects move, changing…
An elementary research on wireless transmission of holographic 3D moving pictures
NASA Astrophysics Data System (ADS)
Takano, Kunihiko; Sato, Koki; Endo, Takaya; Asano, Hiroaki; Fukuzawa, Atsuo; Asai, Kikuo
2009-05-01
In this paper, a process for transmitting a sequence of holograms describing 3D moving objects over a wireless communication network is presented. The sequence of holograms is transformed into a bit stream, which is then transmitted over wireless LAN and Bluetooth. It is shown that, by applying this technique, holographic data of 3D moving objects are transmitted in high quality and a relatively good reconstruction of the holographic images is achieved.
Two visual systems in monitoring of dynamic traffic: effects of visual disruption.
Zheng, Xianjun Sam; McConkie, George W
2010-05-01
Studies from neurophysiology and neuropsychology provide support for two separate object- and location-based visual systems, ventral and dorsal. In the driving context, a study was conducted using a change detection paradigm to explore drivers' ability to monitor the dynamic traffic flow, and the effects of visual disruption on these two visual systems. While driving, a discrete change, such as vehicle location, color, or identity, was occasionally made in one of the vehicles on the road ahead of the driver. Experiment results show that without visual disruption, all changes were detected very well; yet, these equally perceivable changes were disrupted differently by a brief blank display (150 ms): the detection of location changes was especially reduced. The disruption effects were also bigger for the parked vehicle compared to the moving ones. The findings support the different roles for two visual systems in monitoring the dynamic traffic: the "where", dorsal system, tracks vehicle spatiotemporal information on perceptual level, encoding information in a coarse and transient manner; whereas the "what", ventral system, monitors vehicles' featural information, encoding information more accurately and robustly. Both systems work together contributing to the driver's situation awareness of traffic. Benefits and limitations of using the driving simulation are also discussed. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
Perceived shifts of flashed stimuli by visible and invisible object motion.
Watanabe, Katsumi; Sato, Takashi R; Shimojo, Shinsuke
2003-01-01
Perceived positions of flashed stimuli can be altered by motion signals in the visual field-position capture (Whitney and Cavanagh, 2000 Nature Neuroscience 3 954-959). We examined whether position capture of flashed stimuli depends on the spatial relationship between moving and flashed stimuli, and whether the phenomenal permanence of a moving object behind an occluding surface (tunnel effect; Michotte 1950 Acta Psychologica 7 293-322) can produce position capture. Observers saw two objects (circles) moving vertically in opposite directions, one in each visual hemifield. Two horizontal bars were simultaneously flashed at horizontally collinear positions with the fixation point at various timings. When the movement of the object was fully visible, the flashed bar appeared shifted in the motion direction of the circle. But this position-capture effect occurred only when the bar was presented ahead of or on the moving circle. Even when the motion trajectory was covered by an opaque surface and the bar was flashed after complete occlusion of the circle, the position-capture effect was still observed, though the positional asymmetry was less clear. These results show that movements of both visible and 'hidden' objects can modulate the perception of positions of flashed stimuli and suggest that a high-level representation of 'objects in motion' plays an important role in the position-capture effect.
Martyniuk, Christopher J
2018-04-01
Environmental science has benefited a great deal from omics-based technologies. High-throughput toxicology has defined adverse outcome pathways (AOPs), prioritized chemicals of concern, and identified novel actions of environmental chemicals. While many of these approaches are conducted under rigorous laboratory conditions, a significant challenge has been the interpretation of omics data in "real-world" exposure scenarios. A lack of clarity in the interpretation of these data limits their use in environmental monitoring programs. In recent years, one overarching objective of many has been to address fundamental questions concerning experimental design and the robustness of data collected under the broad umbrella of environmental genomics. These questions include: (1) the likelihood that molecular profiles return to a predefined baseline level following remediation efforts, (2) how reference site selection in an urban environment influences interpretation of omics data and (3) what is the most appropriate species to monitor in the environment from an omics point of view. In addition, inter-genomics studies have been conducted to assess transcriptome reproducibility in toxicology studies. One lesson learned from inter-genomics studies is that there are core molecular networks that can be identified by multiple laboratories using the same platform. This supports the idea that "omics-networks" defined a priori may be a viable approach moving forward for evaluating environmental impacts over time. Both spatial and temporal variability in ecosystem structure is expected to influence molecular responses to environmental stressors, and it is important to recognize how these variables, as well as individual factors (e.g. sex, age, and maturation), may confound interpretation of network responses to chemicals. This mini-review synthesizes the progress made towards adopting these tools into environmental monitoring and identifies future challenges to be addressed, as we move into the next era of high-throughput sequencing. A conceptual framework for validating and incorporating molecular networks into environmental monitoring programs is proposed. As AOPs become more defined and their potential in environmental monitoring assessments becomes more recognized, the AOP framework may prove to be the conduit between omics and penultimate ecological responses for environmental risk assessments. Copyright © 2018 Elsevier B.V. All rights reserved.
Toward a national animal telemetry network for aquatic observations in the United States
Block, Barbara A.; Holbrook, Christopher; Simmons, Samantha E; Holland, Kim N; Ault, Jerald S.; Costa, Daniel P.; Mate, Bruce R; Seitz, Andrew C.; Arendt, Michael D.; Payne, John; Mahmoudi, Behzad; Moore, Peter L.; Price, James; J. J. Levenson,; Wilson, Doug; Kochevar, Randall E
2016-01-01
Animal telemetry is the science of elucidating the movements and behavior of animals in relation to their environment or habitat. Here, we focus on telemetry of aquatic species (marine mammals, sharks, fish, sea birds and turtles) and so are concerned with animal movements and behavior as they move through and above the world’s oceans, coastal rivers, estuaries and great lakes. Animal telemetry devices (“tags”) yield detailed data regarding animal responses to the coupled ocean–atmosphere and physical environment through which they are moving. Animal telemetry has matured and we describe a developing US Animal Telemetry Network (ATN) observing system that monitors aquatic life on a range of temporal and spatial scales that will yield both short- and long-term benefits, fill oceanographic observing and knowledge gaps and advance many of the U.S. National Ocean Policy Priority Objectives. ATN has the potential to create a huge impact for the ocean observing activities undertaken by the U.S. Integrated Ocean Observing System (IOOS) and become a model for establishing additional national-level telemetry networks worldwide.
Contrasting Specializations for Facial Motion Within the Macaque Face-Processing System
Fisher, Clark; Freiwald, Winrich A.
2014-01-01
SUMMARY Facial motion transmits rich and ethologically vital information [1, 2], but how the brain interprets this complex signal is poorly understood. Facial form is analyzed by anatomically distinct face patches in the macaque brain [3, 4], and facial motion activates these patches and surrounding areas [5, 6]. Yet it is not known whether facial motion is processed by its own distinct and specialized neural machinery, and if so, what that machinery’s organization might be. To address these questions, we used functional magnetic resonance imaging (fMRI) to monitor the brain activity of macaque monkeys while they viewed low- and high-level motion and form stimuli. We found that, beyond classical motion areas and the known face patch system, moving faces recruited a heretofore-unrecognized face patch. Although all face patches displayed distinctive selectivity for face motion over object motion, only two face patches preferred naturally moving faces, while three others preferred randomized, rapidly varying sequences of facial form. This functional divide was anatomically specific, segregating dorsal from ventral face patches, thereby revealing a new organizational principle of the macaque face-processing system. PMID:25578903
An automated data exploitation system for airborne sensors
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2014-06-01
Advanced wide area persistent surveillance (WAPS) sensor systems on manned or unmanned airborne vehicles are essential for wide-area urban security monitoring in order to protect our people and our warfighters from terrorist attacks. Currently, human (imagery) analysts process huge data collections from full motion video (FMV) for data exploitation and analysis (real-time and forensic), providing slow and inaccurate results. An Automated Data Exploitation System (ADES) is urgently needed. In this paper, we present a recently developed ADES for airborne vehicles under heavy urban background clutter conditions. This system includes four processes: (1) fast image registration, stabilization, and mosaicking; (2) advanced non-linear morphological moving target detection; (3) robust multiple target (vehicle, dismount, and human) tracking (up to 100 target tracks); and (4) moving or static target/object recognition (super-resolution). Test results with real FMV data indicate that our ADES can reliably detect, track, and recognize multiple vehicles under heavy urban background clutter. Furthermore, our example shows that ADES as a baseline platform can provide capability for vehicle abnormal behavior detection to help imagery analysts quickly track down potential threats and crimes.
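As a rough illustration of one ingredient of such a pipeline, the sketch below shows a simple morphological (white top-hat) small-target detector applied to a stabilized gray-value frame; it is only a generic stand-in for the "advanced non-linear morphological moving target detection" step, and the kernel size, threshold, and area limit are illustrative assumptions.

```python
import cv2
import numpy as np

def small_target_candidates(stabilized_gray, kernel_size=9, thresh=30):
    """Detect bright, compact candidates on a stabilized 8-bit frame using a
    white top-hat filter; sizes and threshold are illustrative only."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(stabilized_gray, cv2.MORPH_TOPHAT, kernel)
    _, mask = cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # Skip label 0 (background); keep only small blobs as target candidates.
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] < 400]
```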
A distributed fluid level sensor suitable for monitoring fuel load on board a moving fuel tank
NASA Astrophysics Data System (ADS)
Arkwright, John W.; Parkinson, Luke A.; Papageorgiou, Anthony W.
2018-02-01
A temperature insensitive fiber Bragg grating sensing array has been developed for monitoring fluid levels in a moving tank. The sensors are formed from two optical fibers twisted together to form a double helix with pairs of fiber Bragg gratings located above one another at the points where the fibers are vertically disposed. The sensing mechanism is based on a downwards deflection of the section of the double helix containing the FBGs which causes the tension in the upper FBG to decrease and the tension in the lower FBG to increase with concomitant changes in Bragg wavelength in each FBG. Changes in ambient temperature cause a common mode increase in Bragg wavelength, thus monitoring the differential change in wavelength provides a temperature independent measure of the applied pressure. Ambient temperature can be monitored simultaneously by taking the average wavelength of the upper and lower FBGs. The sensors are able to detect variations in pressure with resolutions better than 1 mmH2O and when placed on the bottom of a tank can be used to monitor fluid level based on the recorded pressure. Using an array of these sensors located along the bottom of a moving tank it was possible to monitor the fluid level at multiple points and hence dynamically track the total fluid volume in the tank. The outer surface of the sensing array is formed from a thin continuous Teflon sleeve, making it suitable for monitoring the level of volatile fluids such as aviation fuel and gasoline.
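The differential/common-mode decoding described above (the difference of the two Bragg shifts gives pressure, their average gives temperature) can be sketched in a few lines of Python; the calibration constants and the function name below are placeholders, not values from the paper.

```python
def fbg_pair_readout(lambda_upper_nm, lambda_lower_nm,
                     lambda_upper_ref_nm, lambda_lower_ref_nm,
                     k_pressure_nm_per_mmH2O=1e-4, k_temp_nm_per_C=0.01):
    """Differential/common-mode decoding of a vertically disposed FBG pair.
    Calibration constants are illustrative placeholders."""
    d_upper = lambda_upper_nm - lambda_upper_ref_nm
    d_lower = lambda_lower_nm - lambda_lower_ref_nm
    # Pressure deflects the helix downwards: the upper FBG relaxes and the
    # lower FBG stretches, so the *difference* of the shifts is temperature
    # independent.
    pressure_mmH2O = (d_lower - d_upper) / (2.0 * k_pressure_nm_per_mmH2O)
    # Temperature shifts both gratings equally, so the *average* shift tracks it.
    temperature_C = 0.5 * (d_upper + d_lower) / k_temp_nm_per_C
    return pressure_mmH2O, temperature_C
```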
NASA Astrophysics Data System (ADS)
Krauß, T.
2014-11-01
The focal plane assembly of most pushbroom scanner satellites is built up in such a way that the different multispectral, or multispectral and panchromatic, bands are not all acquired at exactly the same time. This effect is due to offsets of a few millimeters between the CCD lines in the focal plane. Exploiting this special configuration allows the detection of objects moving during this small time span. In this paper we present a method for automatic detection and extraction of moving objects - mainly traffic - from single very high resolution optical satellite imagery of different sensors. The sensors investigated are WorldView-2, RapidEye, Pléiades and also the new SkyBox satellites. Different sensors require different approaches for detecting moving objects. Since the objects are mapped to different positions in different spectral bands, the change in spectral properties also has to be taken into account. Where the main offset in the focal plane lies between the multispectral and the panchromatic CCD lines, as for Pléiades, an approach based on weighted integration to obtain largely identical images is investigated. Other approaches for RapidEye and WorldView-2 are also shown. From these intermediate bands, difference images are calculated, and a method for detecting the moving objects from these difference images is proposed. Based on the presented methods, images from different sensors are processed and the results are assessed for detection quality (how many moving objects are detected and how many are missed) and accuracy (how accurate the derived speed and size of the objects are). Finally, the results are discussed and an outlook on possible improvements towards operational processing is presented.
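A minimal sketch of the core idea (a difference image between two co-registered bands acquired a fraction of a second apart, plus a speed estimate from displacement and the known band time offset) is given below; the normalization, threshold, and function names are illustrative assumptions rather than the authors' actual processing chain.

```python
import numpy as np

def moving_object_mask(band_early, band_late, thresh=0.15):
    """Difference image between two co-registered bands acquired with a small
    time offset; the bands are assumed radiometrically equalized beforehand."""
    a = band_early.astype(float) / (band_early.mean() + 1e-9)
    b = band_late.astype(float) / (band_late.mean() + 1e-9)
    return np.abs(a - b) > thresh

def ground_speed_m_s(displacement_px, gsd_m, band_time_offset_s):
    """Speed from the pixel displacement of an object between the two bands,
    given the ground sampling distance and the inter-band time offset."""
    return displacement_px * gsd_m / band_time_offset_s
```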
Liang, Zhongwei; Zhou, Liang; Liu, Xiaochu; Wang, Xiaogang
2014-01-01
Tablet image tracking exerts a notable influence on the efficiency and reliability of high-speed drug mass production, but it has also emerged as a difficult problem and a focus of production monitoring in recent years, owing to the highly similar shapes and random position distribution of the objects to be searched for. To track randomly distributed tablets accurately, a calibrated surface of reflected light intensity can be established using a surface-fitting approach and transitional vector determination, describing the shape topology and topography details of the target tablet. On this basis, the mathematical properties of the established surfaces are proposed, and an artificial neural network (ANN) is employed to classify the moving target tablets by recognizing their different surface properties; the instantaneous coordinate positions of the drug tablets in one image frame can then be determined. By repeating the same pattern recognition on the next image frame, the real-time movements of target tablet templates were successfully tracked in sequence. This paper provides reliable references and new research ideas for real-time object tracking in drug production practice. PMID:25143781
X-ray fluorescence camera for imaging of iodine media in vivo.
Matsukiyo, Hiroshi; Watanabe, Manabu; Sato, Eiichi; Osawa, Akihiro; Enomoto, Toshiyuki; Nagao, Jiro; Abderyim, Purkhet; Aizawa, Katsuo; Tanaka, Etsuro; Mori, Hidezo; Kawai, Toshiaki; Ehara, Shigeru; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun
2009-01-01
X-ray fluorescence (XRF) analysis is useful for measuring density distributions of contrast media in vivo. An XRF camera was developed for carrying out mapping for iodine-based contrast media used in medical angiography. Objects are exposed to an X-ray beam from a cerium target. Cerium K-series X-rays are absorbed effectively by iodine media in objects, and iodine fluorescence is produced from the objects. Next, iodine Kalpha fluorescence is selected out by use of a 58-microm-thick stannum (tin) filter and is detected by a cadmium telluride (CdTe) detector. The Kalpha rays are discriminated out by a multichannel analyzer, and the number of photons is counted by a counter card. The objects are moved and scanned by an x-y stage in conjunction with a two-stage controller, and X-ray images obtained by iodine mapping are shown on a personal computer monitor. The scan pitch of the x and y axes was 2.5 mm, and the photon counting time per mapping point was 2.0 s. We carried out iodine mapping of non-living animals (phantoms), and iodine Kalpha fluorescence was produced from weakly remaining iodine elements in a rabbit skin cancer.
NASA Astrophysics Data System (ADS)
Artigao, Estefania; Honrubia-Escribano, Andres; Gomez-Lazaro, Emilio
2017-11-01
Operation and maintenance (O&M) of wind turbines has recently come into the spotlight in the wind energy sector. While wind turbine power capacities continue to increase and new offshore developments are being installed, O&M costs keep rising. With the objective of reducing such costs, the new trends are moving from corrective and preventive maintenance toward predictive actions. In this scenario, condition monitoring (CM) has been identified as the key to achieving this goal. The induction generator of a wind turbine is a major contributor to failure rates and downtime, and doubly-fed induction generators (DFIG) are the dominant technology employed in variable speed wind turbines. The current work presents the analysis of an in-service DFIG. A one-year measurement campaign has been used to perform the study. Several signal processing techniques have been applied and the optimal method for CM has been identified. A diagnosis has been reached: the DFIG under study shows potential gearbox damage.
Bachmann, Talis; Murd, Carolina; Põder, Endel
2012-09-01
One fundamental property of the perceptual and cognitive systems is their capacity for prediction in the dynamic environment; the flash-lag effect has been considered as a particularly suggestive example of this capacity (Nijhawan in nature 370:256-257, 1994, Behav brain sci 31:179-239, 2008). Thus, because of involvement of the mechanisms of extrapolation and visual prediction, the moving object is perceived ahead of the simultaneously flashed static object objectively aligned with the moving one. In the present study we introduce a new method and report experimental results inconsistent with at least some versions of the prediction/extrapolation theory. We show that a stimulus moving in the opposite direction to the reference stimulus by approaching it before the flash does not diminish the flash-lag effect, but rather augments it. In addition, alternative theories (in)capable of explaining this paradoxical result are discussed.
Alternatives to the Moving Average
Paul C. van Deusen
2001-01-01
There are many possible estimators that could be used with annual inventory data. The 5-year moving average has been selected as a default estimator to provide initial results for states having available annual inventory data. User objectives for these estimates are discussed. The characteristics of a moving average are outlined. It is shown that moving average...
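As a small worked example of the default estimator mentioned above, the following Python snippet computes a 5-year moving average from annual panel estimates; the numbers are purely illustrative.

```python
def moving_average(annual_estimates, window=5):
    """5-year moving average of annual inventory estimates (the default
    estimator mentioned in the abstract); one value per complete window."""
    return [sum(annual_estimates[i:i + window]) / window
            for i in range(len(annual_estimates) - window + 1)]

# Example: five annual panel estimates -> a single moving-average estimate.
print(moving_average([100.0, 104.0, 98.0, 101.0, 107.0]))  # [102.0]
```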
Optimal Path Determination for Flying Vehicle to Search an Object
NASA Astrophysics Data System (ADS)
Heru Tjahjana, R.; Heri Soelistyo U, R.; Ratnasari, L.; Irawanto, B.
2018-01-01
In this paper, a method to determine the optimal path for a flying vehicle to search for an object is proposed. The background of the paper is the control of an air vehicle searching for an object; optimal path determination is one of the most popular problems in optimization. The paper describes a control-design model for a flying vehicle searching for an object and focuses on the optimal path used for the search. An optimal control model is used to steer the flying vehicle so that it moves along an optimal path; if the vehicle moves along an optimal path, then the path to reach the searched object is also optimal. The cost functional is one of the most important elements of optimal control design, and in this paper it is chosen so that the air vehicle reaches the object as quickly as possible. The flying vehicle's axis reference uses the N-E-D (North-East-Down) coordinate system. The main results are theorems, proved analytically, stating that the chosen cost functional makes the control optimal and makes the vehicle move along an optimal path. It is also shown that the cost functional used is convex; convexity guarantees the existence of an optimal control. The paper also presents simulations showing an optimal path for a flying vehicle searching for an object. The optimization method used to find the optimal control and the optimal vehicle path is the Pontryagin Minimum Principle.
Exploring Dance Movement Data Using Sequence Alignment Methods
Chavoshi, Seyed Hossein; De Baets, Bernard; Neutens, Tijs; De Tré, Guy; Van de Weghe, Nico
2015-01-01
Despite the abundance of research on knowledge discovery from moving object databases, only a limited number of studies have examined the interaction between moving point objects in space over time. This paper describes a novel approach for measuring similarity in the interaction between moving objects. The proposed approach consists of three steps. First, we transform movement data into sequences of successive qualitative relations based on the Qualitative Trajectory Calculus (QTC). Second, sequence alignment methods are applied to measure the similarity between movement sequences. Finally, movement sequences are grouped based on similarity by means of an agglomerative hierarchical clustering method. The applicability of this approach is tested using movement data from samba and tango dancers. PMID:26181435
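A hedged sketch of the three-step idea is given below: a simplified first-character QTC-Basic encoding of two point trajectories, followed by a plain edit-distance alignment score; the exact calculus variant, the tolerance, and the function names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def qtc_b1(track_a, track_b, eps=1e-6):
    """First QTC-Basic character per time step: is A moving towards (-),
    away from (+), or stable (0) with respect to B's previous position?
    Simplified sketch; tracks are sequences of (x, y) positions."""
    rels = []
    for t in range(1, len(track_a)):
        d_prev = np.linalg.norm(np.subtract(track_a[t - 1], track_b[t - 1]))
        d_now = np.linalg.norm(np.subtract(track_a[t], track_b[t - 1]))
        rels.append('-' if d_now < d_prev - eps
                    else '+' if d_now > d_prev + eps else '0')
    return ''.join(rels)

def edit_distance(s1, s2):
    """Levenshtein distance between two relation sequences (one simple
    alignment-based similarity score)."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (c1 != c2)))
        prev = cur
    return prev[-1]
```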
Velmurugan, J.; Mirkin, M. V.; Svirsky, M. A.; Lalwani, A. K.; Llinas, R. R.
2014-01-01
A growing number of minimally invasive surgical and diagnostic procedures require the insertion of an optical, mechanical, or electronic device in narrow spaces inside a human body. In such procedures, precise motion control is essential to avoid damage to the patient’s tissues and/or the device itself. A typical example is the insertion of a cochlear implant which should ideally be done with minimum physical contact between the moving device and the cochlear canal walls or the basilar membrane. Because optical monitoring is not possible, alternative techniques for sub millimeter-scale distance control can be very useful for such procedures. The first requirement for distance control is distance sensing. We developed a novel approach to distance sensing based on the principles of scanning electrochemical microscopy (SECM). The SECM signal, i.e., the diffusion current to a microelectrode, is very sensitive to the distance between the probe surface and any electrically insulating object present in its proximity. With several amperometric microprobes fabricated on the surface of an insertable device, one can monitor the distances between different parts of the moving implant and the surrounding tissues. Unlike typical SECM experiments, in which a disk-shaped tip approaches a relatively smooth sample, complex geometries of the mobile device and its surroundings make distance sensing challenging. Additional issues include the possibility of electrode surface contamination in biological fluids and the requirement for a biologically compatible redox mediator. PMID:24845292
Evidence against a speed limit in multiple-object tracking.
Franconeri, S L; Lin, J Y; Pylyshyn, Z W; Fisher, B; Enns, J T
2008-08-01
Everyday tasks often require us to keep track of multiple objects in dynamic scenes. Past studies show that tracking becomes more difficult as objects move faster. In the present study, we show that this trade-off may not be due to increased speed itself but may, instead, be due to the increased crowding that usually accompanies increases in speed. Here, we isolate changes in speed from variations in crowding, by projecting a tracking display either onto a small area at the center of a hemispheric projection dome or onto the entire dome. Use of the larger display increased retinal image size and object speed by a factor of 4 but did not increase interobject crowding. Results showed that tracking accuracy was equally good in the large-display condition, even when the objects traveled far into the visual periphery. Accuracy was also not reduced when we tested object speeds that limited performance in the small-display condition. These results, along with a reinterpretation of past studies, suggest that we might be able to track multiple moving objects as fast as we can a single moving object, once the effect of object crowding is eliminated.
Comparisons of MOVES Light-duty Gasoline NOx Emission Rates with Real-world Measurements
NASA Astrophysics Data System (ADS)
Choi, D.; Sonntag, D.; Warila, J.
2017-12-01
Recent studies have shown differences between air quality model estimates and monitored values for nitrogen oxides. Several studies have suggested that the discrepancy between monitored and modeled values is due to an overestimation of NOx from mobile sources in EPA's emission inventory, particularly for light-duty gasoline vehicles. EPA's MOtor Vehicle Emission Simulator (MOVES) is an emission modeling system that estimates emissions for cars, trucks and other mobile sources at the national, county, and project level for criteria pollutants, greenhouse gases, and air toxics. Studies that directly measure vehicle emissions provide useful data for evaluating MOVES when the measurement conditions are properly accounted for in modeling. In this presentation, we show comparisons of MOVES2014 to thousands of real-world NOx emissions measurements from individual light-duty gasoline vehicles. The comparison studies include in-use vehicle emissions tests conducted on chassis dynamometer tests in support of Denver, Colorado's Vehicle Inspection & Maintenance Program and remote sensing data collected using road-side instruments in multiple locations and calendar years in the United States. In addition, we conduct comparisons of MOVES predictions to fleet-wide emissions measured from tunnels. We also present details on the methodology used to conduct the MOVES model runs in comparing to the independent data.
ERIC Educational Resources Information Center
McCloskey, Michael; And Others
Through everyday experience people acquire knowledge about how moving objects behave. For example, if a rock is thrown up into the air, it will fall back to earth. Research has shown that people's ideas about why moving objects behave as they do are often quite inconsistent with the principles of classical mechanics. In fact, many people hold a…
ERIC Educational Resources Information Center
Hecht, Eugene
2015-01-01
Anyone who has taught introductory physics should know that roughly a third of the students initially believe that any object at rest will remain at rest, whereas any moving body not propelled by applied forces will promptly come to rest. Likewise, about half of those uninitiated students believe that any object moving at a constant speed must be…
Needham, Amy; Cantlon, Jessica F; Ormsbee Holley, Susan M
2006-12-01
The current research investigates infants' perception of a novel object from a category that is familiar to young infants: key rings. We ask whether experiences obtained outside the lab would allow young infants to parse the visible portions of a partly occluded key ring display into one single unit, presumably as a result of having categorized it as a key ring. This categorization was marked by infants' perception of the keys and ring as a single unit that should move together, despite their attribute differences. We showed infants a novel key ring display in which the keys and ring moved together as one rigid unit (Move-together event) or the ring moved but the keys remained stationary throughout the event (Move-apart event). Our results showed that 8.5-month-old infants perceived the keys and ring as connected despite their attribute differences, and that their perception of object unity was eliminated as the distinctive attributes of the key ring were removed. When all of the distinctive attributes of the key ring were removed, the 8.5-month-old infants perceived the display as two separate units, which is how younger infants (7-month-old) perceived the key ring display with all its distinctive attributes unaltered. These results suggest that on the basis of extensive experience with an object category, infants come to identify novel members of that category and expect them to possess the attributes typical of that category.
Velocity measurement by vibro-acoustic Doppler.
Nabavizadeh, Alireza; Urban, Matthew W; Kinnick, Randall R; Fatemi, Mostafa
2012-04-01
We describe the theoretical principles of a new Doppler method, which uses the acoustic response of a moving object to a highly localized dynamic radiation force of the ultrasound field to calculate the velocity of the moving object according to Doppler frequency shift. This method, named vibro-acoustic Doppler (VAD), employs two ultrasound beams separated by a slight frequency difference, Δf, transmitting in an X-focal configuration. Both ultrasound beams experience a frequency shift because of the moving objects and their interaction at the joint focal zone produces an acoustic frequency shift occurring around the low-frequency (Δf) acoustic emission signal. The acoustic emission field resulting from the vibration of the moving object is detected and used to calculate its velocity. We report the formula that describes the relation between Doppler frequency shift of the emitted acoustic field and the velocity of the moving object. To verify the theory, we used a string phantom. We also tested our method by measuring fluid velocity in a tube. The results show that the error calculated for both string and fluid velocities is less than 9.1%. Our theory shows that in the worst case, the error is 0.54% for a 25° angle variation for the VAD method compared with an error of -82.6% for a 25° angle variation for a conventional continuous wave Doppler method. An advantage of this method is that, unlike conventional Doppler, it is not sensitive to angles between the ultrasound beams and direction of motion.
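For comparison, the conventional continuous-wave Doppler relation that the abstract benchmarks against can be written as v = c·Δf/(2·f0·cosθ); the short sketch below evaluates it. This is not the VAD formula derived in the paper, and the example numbers are illustrative.

```python
import math

def cw_doppler_velocity(delta_f_hz, f0_hz, theta_deg, c_m_s=1540.0):
    """Conventional continuous-wave Doppler estimate: v = c*df / (2*f0*cos(theta)).
    c defaults to the usual soft-tissue sound speed; this is the reference
    method the paper compares against, not the VAD relation itself."""
    return c_m_s * delta_f_hz / (2.0 * f0_hz * math.cos(math.radians(theta_deg)))

# Example: a 2.6 kHz shift at 2 MHz with a 60 degree beam-to-flow angle -> ~2.0 m/s.
print(round(cw_doppler_velocity(2600.0, 2.0e6, 60.0), 2))
```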
Central Alaska Network vital signs monitoring plan
MacCluskie, Margaret C.; Oakley, Karen L.; McDonald, Trent; Wilder, Doug
2005-01-01
Denali National Park and Preserve, Wrangell-St. Elias National Park and Preserve, and Yukon-Charley Rivers National Preserve have been organized into the Central Alaska Network (CAKN) for the purposes of carrying out ecological monitoring activities under the National Park Service's Vital Signs Monitoring program. The Phase III Report is the initial draft of the Vital Signs Monitoring Plan for the Central Alaska Network. It includes updated material from the Phase I and II documents. This report, and draft protocols for 11 of the network's Vital Signs, were peer reviewed early in 2005. Review comments were incorporated into the document, bringing the network to the final stage of having a Vital Signs Monitoring Plan. Implementation of the program will formally begin in FY 2006. The broad goals of the CAKN monitoring program are to: (1) better understand the dynamic nature and condition of park ecosystems; and (2) provide reference points for comparisons with other, altered environments. The focus of the CAKN program will be to monitor ecosystems in order to detect change in ecological components and in the relationships among the components. Water quality monitoring is fully integrated within the CAKN monitoring program. A monitoring program for lentic systems (non-moving water) has been determined, and the program for lotic systems (moving water) is under development.
Development and application of virtual reality for man/systems integration
NASA Technical Reports Server (NTRS)
Brown, Marcus
1991-01-01
While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still has the problem of presenting an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position, but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the primary way a human physically interacts with their environment is with their hands, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world. The task of this summer fellowship was to set up a virtual reality system at MSFC and begin applying it to some of the questions which concern scientists and engineers involved in space flight. A brief discussion of this work is presented.
Bioluminescence Monitoring of Neuronal Activity in Freely Moving Zebrafish Larvae
Knafo, Steven; Prendergast, Andrew; Thouvenin, Olivier; Figueiredo, Sophie Nunes; Wyart, Claire
2017-01-01
The proof of concept for bioluminescence monitoring of neural activity in zebrafish with the genetically encoded calcium indicator GFP-aequorin has been previously described (Naumann et al., 2010), but challenges remain. First, bioluminescence signals originating from a single muscle fiber can constitute a major pitfall. Second, bioluminescence signals emanating from neurons only are very small. To improve signals while verifying specificity, we provide an optimized four-step protocol achieving: 1) selective expression of a zebrafish codon-optimized GFP-aequorin, 2) efficient soaking of larvae in the GFP-aequorin substrate coelenterazine, 3) bioluminescence monitoring of neural activity from motor neurons in free-tailed moving animals performing acoustic escapes and 4) verification of the absence of muscle expression using immunohistochemistry. PMID:29130058
The emotional effects of violations of causality, or How to make a square amusing
Bressanelli, Daniela; Parovel, Giulia
2012-01-01
In Michotte's launching paradigm a square moves up to and makes contact with another square, which then moves off more slowly. In the triggering effect, the second square moves much faster than the first, eliciting an amusing impression. We generated 13 experimental displays in which there was always incongruity between cause and effect. We hypothesized that the comic impression would be stronger when objects are perceived as living agents and weaker when objects are perceived as mechanically non-animated. General findings support our hypothesis. PMID:23145274
VizieR Online Data Catalog: Catalog of Suspected Nearby Young Stars (Riedel+, 2017)
NASA Astrophysics Data System (ADS)
Riedel, A. R.; Blunt, S. C.; Lambrides, E. L.; Rice, E. L.; Cruz, K. L.; Faherty, J. K.
2018-04-01
LocAting Constituent mEmbers In Nearby Groups (LACEwING) is a frequentist observation space kinematic moving group identification code. Using the spatial and kinematic information available about a target object (α, δ, Dist, μα, μδ, and γ), it determines the probability that the object is a member of each of the known nearby young moving groups (NYMGs). As with other moving group identification codes, LACEwING is capable of estimating memberships for stars with incomplete kinematic and spatial information. (2 data files).
RFID Technology for Continuous Monitoring of Physiological Signals in Small Animals.
Volk, Tobias; Gorbey, Stefan; Bhattacharyya, Mayukh; Gruenwald, Waldemar; Lemmer, Björn; Reindl, Leonhard M; Stieglitz, Thomas; Jansen, Dirk
2015-02-01
Telemetry systems enable researchers to continuously monitor physiological signals in unrestrained, freely moving small rodents. Drawbacks of common systems are limited operation time, the need to house the animals separately, and the necessity of a stable communication link. Furthermore, the costs of the typically proprietary telemetry systems reduce their acceptance. The aim of this paper is to introduce a low-cost telemetry system based on common radio frequency identification technology optimized for battery-independent operational time, good reusability, and flexibility. The presented implant is equipped with sensors to measure electrocardiogram, arterial blood pressure, and body temperature. The biological signals are transmitted as digital data streams. The device is capable of monitoring several freely moving animals housed in groups with a single reader station. The modular concept of the system significantly reduces the costs of monitoring multiple physiological functions and of refining procedures in preclinical research.
Zurauskas, Mantas; Bradu, Adrian; Ferguson, Daniel R; Hammer, Daniel X; Podoleanu, Adrian
2016-03-01
This paper presents a novel instrument for the biosciences, useful for studies of moving embryos. A dual sequential imaging/measurement channel is assembled via a closed-loop tracking architecture. The dual channel system can operate in two regimes: (i) single-point Doppler signal monitoring or (ii) fast 3-D swept source OCT imaging. The system is demonstrated for characterizing cardiac dynamics in Drosophila melanogaster larvae. Closed-loop tracking enables long-term in vivo monitoring of the larval heart without anesthetic or physical restraint. Such an instrument can be used to measure subtle variations in cardiac behavior otherwise obscured by the larva's movements. A fruit fly larva (top) was continuously tracked for remote monitoring. A heartbeat trace of the freely moving larva (bottom) was obtained by a low-coherence-interferometry-based Doppler sensing technique. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
An Automatic Technique for Finding Faint Moving Objects in Wide Field CCD Images
NASA Astrophysics Data System (ADS)
Hainaut, O. R.; Meech, K. J.
1996-09-01
The traditional method used to find moving objects in astronomical images is to blink pairs or series of frames after registering them to align the background objects. While this technique is extremely efficient in terms of the low signal-to-noise ratio that human sight can detect, it proved to be extremely time-, brain- and eyesight-consuming. The wide-field images provided by the large CCD mosaic recently built at IfA cover a field of view of 20 to 30' over 8192 x 8192 pixels. Blinking such images is an enormous task, comparable to that of blinking large photographic plates. However, as the data are available digitally (each image occupying 260Mb of disk space), we are developing a set of computer codes to perform the moving object identification in sets of frames. This poster will describe the techniques we use in order to reach a detection efficiency as good as that of a human blinker; the main steps are to find all the objects in each frame (for which we rely on "SExtractor"; Bertin & Arnouts 1996, A&AS 117, 393), then identify all the background objects, and finally to search the non-background objects for sources moving in a coherent fashion. We will also describe the results of this method applied to actual data from the 8k CCD mosaic. This work is being supported, in part, by NSF grant AST 92-21318.
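A hedged sketch of the linking step (reject sources that reappear at the same pixel position, then keep pairs whose implied constant motion is confirmed in every intermediate frame) is shown below; the per-frame catalogs are assumed to be numpy arrays of (x, y) positions, and the thresholds and function name are illustrative, not from the poster.

```python
import numpy as np

def find_movers(catalogs, times, match_radius=1.5, max_rate=50.0):
    """Link per-frame source lists (N x 2 arrays of x, y) into constant-rate tracks.
    Sources that reappear at (nearly) the same pixel in the last frame are treated
    as background; a simple pairwise linear-motion check does the linking.
    Thresholds are illustrative (pixels and pixels per hour)."""
    movers = []
    first, last = catalogs[0], catalogs[-1]
    dt = times[-1] - times[0]
    for x0, y0 in first:
        # Skip background: the same source is still at this position at the end.
        if np.min(np.hypot(last[:, 0] - x0, last[:, 1] - y0)) < match_radius:
            continue
        for x1, y1 in last:
            vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
            if np.hypot(vx, vy) > max_rate:
                continue
            # Require a consistent detection in every intermediate frame.
            ok = all(np.min(np.hypot(cat[:, 0] - (x0 + vx * (t - times[0])),
                                     cat[:, 1] - (y0 + vy * (t - times[0])))) < match_radius
                     for cat, t in zip(catalogs[1:-1], times[1:-1]))
            if ok:
                movers.append(((x0, y0), (vx, vy)))
    return movers
```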
How Parents of Teens Store and Monitor Prescription Drugs in the Home
ERIC Educational Resources Information Center
Friese, Bettina; Moore, Roland S.; Grube, Joel W.; Jennings, Vanessa K.
2013-01-01
Qualitative interviews were conducted with parents of teens to explore how parents store and monitor prescription drugs in the home. Most parents had prescription drugs in the house, but took few precautions against teens accessing these drugs. Strategies for monitoring included moving the drugs to different locations, remembering how many pills…
Dynamic Binding of Identity and Location Information: A Serial Model of Multiple Identity Tracking
ERIC Educational Resources Information Center
Oksama, Lauri; Hyona, Jukka
2008-01-01
Tracking of multiple moving objects is commonly assumed to be carried out by a fixed-capacity parallel mechanism. The present study proposes a serial model (MOMIT) to explain performance accuracy in the maintenance of multiple moving objects with distinct identities. A serial refresh mechanism is postulated, which makes recourse to continuous…
Acoustical-Levitation Chamber for Metallurgy
NASA Technical Reports Server (NTRS)
Barmatz, M. B.; Trinh, E.; Wang, T. G.; Elleman, D. D.; Jacobi, N.
1983-01-01
Sample moved to different positions for heating and quenching. Acoustical levitation chamber selectively excited in fundamental and second-harmonic longitudinal modes to hold sample at one of three stable positions: A, B, or C. Levitated object quickly moved from one of these positions to another by changing modes. Object rapidly quenched at A or C after heating in furnace region at B.
Another Way of Tracking Moving Objects Using Short Video Clips
ERIC Educational Resources Information Center
Vera, Francisco; Romanque, Cristian
2009-01-01
Physics teachers have long employed video clips to study moving objects in their classrooms and instructional labs. A number of approaches exist, both free and commercial, for tracking the coordinates of a point using video. The main characteristics of the method described in this paper are: it is simple to use; coordinates can be tracked using…
The influence of object shape and center of mass on grasp and gaze
Desanghere, Loni; Marotta, Jonathan J.
2015-01-01
Recent experiments examining where participants look when grasping an object found that fixations favor the eventual index finger landing position on the object. Even though the act of picking up an object must involve complex high-level computations such as the visual analysis of object contours, surface properties, knowledge of an object’s function and center of mass (COM) location, these investigations have generally used simple symmetrical objects – where COM and horizontal midline overlap. Less research has been aimed at looking at how variations in object properties, such as differences in curvature and changes in COM location, affect visual and motor control. The purpose of this study was to examine grasp and fixation locations when grasping objects whose COM was positioned to the left or right of the objects horizontal midline (Experiment 1) and objects whose COM was moved progressively further from the midline of the objects based on the alteration of the object’s shape (Experiment 2). Results from Experiment 1 showed that object COM position influenced fixation locations and grasp locations differently, with fixations not as tightly linked to index finger grasp locations as was previously reported with symmetrical objects. Fixation positions were also found to be more central on the non-symmetrical objects. This difference in gaze position may provide a more holistic view, which would allow both index finger and thumb positions to be monitored while grasping. Finally, manipulations of COM distance (Experiment 2) exerted marked effects on the visual analysis of the objects when compared to its influence on grasp locations, with fixation locations more sensitive to these manipulations. Together, these findings demonstrate how object features differentially influence gaze vs. grasp positions during object interaction. PMID:26528207
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
Occupational injuries and sick leaves in household moving works.
Hwan Park, Myoung; Jeong, Byung Yong
2017-09-01
This study is concerned with household moving works and the characteristics of occupational injuries and sick leaves in each step of the moving process. Accident data for 392 occupational accidents were categorized by the moving processes in which the accidents occurred, and possible incidents and sick leaves were assessed for each moving process and hazard factor. Accidents occurring during specific moving processes showed different characteristics depending on the type of accident and agency of accidents. The most critical form in the level of risk management was falls from a height in the 'lifting by ladder truck' process. Incidents ranked as a 'High' level of risk management were in the forms of slips, being struck by objects and musculoskeletal disorders in the 'manual materials handling' process. Also, falls in 'loading/unloading', being struck by objects during 'lifting by ladder truck' and driving accidents in the process of 'transport' were ranked 'High'. The findings of this study can be used to develop more effective accident prevention policy reflecting different circumstances and conditions to reduce occupational accidents in household moving works.
Measuring attention using flash-lag effect.
Shioiri, Satoshi; Yamamoto, Ken; Oshida, Hiroki; Matsubara, Kazuya; Yaguchi, Hirohisa
2010-08-13
We investigated the effect of attention on the flash-lag effect (FLE) in order to determine whether the FLE can be used to estimate the effect of visual attention. The FLE is the phenomenon whereby a flash aligned with a moving object is perceived to lag behind the moving object, and several studies have shown that attention reduces its magnitude. We measured the FLE as a function of the number or speed of moving objects. The results showed that the effect of cueing on the FLE, which we attributed to attention, increased monotonically with the number or the speed of the objects. This suggests that the amount of attention can be estimated by measuring the FLE, assuming that more attention is required to attend to a larger number of objects or to faster-moving objects. On the basis of this presumption, we attempted to measure the spatial spread of visual attention by FLE measurements. The estimated spatial spreads were similar to those estimated by other experimental methods.
Virtual hand: a 3D tactile interface to virtual environments
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Borrel, Paul
2008-02-01
We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.
Variability of the lowest mass objects in the AB Doradus moving group
NASA Astrophysics Data System (ADS)
Vos, Johanna M.; Allers, Katelyn N.; Biller, Beth A.; Liu, Michael C.; Dupuy, Trent J.; Gallimore, Jack F.; Adenuga, Iyadunni J.; Best, William M. J.
2018-02-01
We present the detection of [3.6 μm] photometric variability in two young, L/T transition brown dwarfs, WISE J004701.06+680352.1 (W0047) and 2MASS J2244316+204343 (2M2244) using the Spitzer Space Telescope. We find a period of 16.4 ± 0.2 h and a peak-to-peak amplitude of 1.07 ± 0.04 per cent for W0047, and a period of 11 ± 2 h and amplitude of 0.8 ± 0.2 per cent for 2M2244. This period is significantly longer than that measured previously during a shorter observation. We additionally detect significant J-band variability in 2M2244 using the Wide-Field Camera on UKIRT. We determine the radial and rotational velocities of both objects using Keck NIRSPEC data. We find a radial velocity of -16.0^{+0.8}_{-0.9} km s^{-1} for 2M2244, and confirm it as a bona fide member of the AB Doradus moving group. We find rotational velocities of v sin i = 9.8 ± 0.3 and 14.3^{+1.4}_{-1.5} km s^{-1} for W0047 and 2M2244, respectively. With inclination angles of 85°^{+5}_{-9} and 76°^{+14}_{-20}, W0047 and 2M2244 are viewed roughly equator-on. Their remarkably similar colours, spectra and inclinations are consistent with the possibility that viewing angle may influence atmospheric appearance. We additionally present Spitzer [4.5 μm] monitoring of the young, T5.5 object SDSS111010+011613 (SDSS1110) where we detect no variability. For periods <18 h, we place an upper limit of 1.25 per cent on the peak-to-peak variability amplitude of SDSS1110.
NASA Astrophysics Data System (ADS)
Pfister, T.; Günther, P.; Nöthen, M.; Czarske, J.
2010-02-01
Both in production engineering and process control, multidirectional displacements, deformations and vibrations of moving or rotating components have to be measured dynamically, contactlessly and with high precision. Optical sensors are well suited to this task, but their measurement rate is often fundamentally limited. Furthermore, almost all conventional sensors measure only one measurand, i.e. either out-of-plane or in-plane distance or velocity. To solve this problem, we present a novel phase-coded heterodyne laser Doppler distance sensor (PH-LDDS), which is able to determine out-of-plane (axial) position and in-plane (lateral) velocity of rough solid-state objects simultaneously and independently with a single sensor. Due to the applied heterodyne technique, stationary or purely axially moving objects can also be measured. In addition, it is shown theoretically as well as experimentally that this sensor offers concurrently high temporal resolution and high position resolution, since its position uncertainty is in principle independent of the lateral object velocity, in contrast to conventional distance sensors. This is a unique feature of the PH-LDDS, enabling precise and dynamic position and shape measurements even of fast-moving objects. With an optimized sensor setup, an average position resolution of 240 nm was obtained.
Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing
NASA Astrophysics Data System (ADS)
Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.
2009-05-01
A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper will present example sensor performance from empirical data collected in laboratory experiments, as well as our approach in designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity, hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.
GNSS real time performance monitoring and CNS/ATM implementation
DOT National Transportation Integrated Search
2006-07-01
The global transition to communications, navigation, surveillance / air traffic management (CNS/ATM) technology is moving forward at an increasing pace. A critical part of the CNS/ATM concept is the ability to monitor, analyze, and distribute aeronau...
What makes a movement a gesture?
Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan
2016-01-01
Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features, the form of an actor's hands and the presence of speech-like sounds, to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation. Copyright © 2015 Elsevier B.V. All rights reserved.
Visual and Non-Visual Contributions to the Perception of Object Motion during Self-Motion
Fajen, Brett R.; Matthis, Jonathan S.
2013-01-01
Many locomotor tasks involve interactions with moving objects. When observer (i.e., self-)motion is accompanied by object motion, the optic flow field includes a component due to self-motion and a component due to object motion. For moving observers to perceive the movement of other objects relative to the stationary environment, the visual system could recover the object-motion component – that is, it could factor out the influence of self-motion. In principle, this could be achieved using visual self-motion information, non-visual self-motion information, or a combination of both. In this study, we report evidence that visual information about the speed (Experiment 1) and direction (Experiment 2) of self-motion plays a role in recovering the object-motion component even when non-visual self-motion information is also available. However, the magnitude of the effect was less than one would expect if subjects relied entirely on visual self-motion information. Taken together with previous studies, we conclude that when self-motion is real and actively generated, both visual and non-visual self-motion information contribute to the perception of object motion. We also consider the possible role of this process in visually guided interception and avoidance of moving objects. PMID:23408983
NASA Astrophysics Data System (ADS)
Han, Yongquan
2015-03-01
To study the vacuum force, we must first clarify what a vacuum is: a vacuum is a space containing no air and no radiation. An absolute vacuum of space does not exist; the vacuum of space is relative, and therefore the vacuum force is also relative. Nevertheless, relative vacuum regions do exist. If two spaces are compared and one is a relative vacuum with respect to the other, a vacuum force must exist, and its direction points toward the vacuum region. Any object rotates and radiates; rotation bends the radiation centripetally, which produces gravity, a relative gravity, while the non-gravitational part is the vacuum force. Gravity is centripetal: it is the tendency of an attracted object to move toward the centre, or the object is already in centripetal motion. Because any object moves, gravity makes the object follow a curved path; that is, curved motion within the radiation range must occur around the gravitational object. Gravity must exist in the non-vacuum region and makes objects in that region move along curves (for example, the Earth moves around the Sun), or the object is finally captured by the attracting object and remains relatively static with respect to it (for example, objects on the Earth move but cannot reach the first cosmic velocity).
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method simultaneously estimates the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image sequence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine simultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion boundary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
Coordinated control of micro-grid based on distributed moving horizon control.
Ma, Miaomiao; Shao, Liyang; Liu, Xiangjie
2018-05-01
This paper proposes a distributed moving horizon coordinated control scheme for the power balance and economic dispatch problems of a micro-grid based on distributed generation. We design the power coordinated controller for each subsystem via moving horizon control by minimizing a suitable objective function. The objective function of the distributed moving horizon coordinated controller is chosen based on the principle that the wind power subsystem has priority to generate electricity, photovoltaic generation coordinates with the wind power subsystem, and the battery is activated only to meet the load demand when necessary. The simulation results illustrate that the proposed distributed moving horizon coordinated controller can allocate the output power of the two generation subsystems reasonably under varying environmental conditions, which not only satisfies the load demand but also limits excessive fluctuations of output power to protect the power generation equipment. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
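As a rough illustration of the dispatch priority described above (wind first, photovoltaic second, battery only when needed), the following Python sketch applies the rule greedily over a short forecast horizon. All function names, limits and forecast values are hypothetical, and the single-step greedy allocation only stands in for the paper's moving horizon optimization.

    def dispatch_step(load, wind_avail, pv_avail, batt_max):
        """Allocate one horizon step; returns (wind, pv, battery) power in kW."""
        wind = min(wind_avail, load)                    # wind has priority
        pv = min(pv_avail, load - wind)                 # PV covers the remainder
        batt = min(batt_max, load - wind - pv)          # battery only if still short
        return wind, pv, batt

    def dispatch_horizon(loads, winds, pvs, batt_max):
        """Apply the priority rule over a receding horizon of forecast samples."""
        return [dispatch_step(l, w, p, batt_max) for l, w, p in zip(loads, winds, pvs)]

    # Hypothetical 4-step forecast (kW): load, wind availability, PV availability.
    print(dispatch_horizon([50, 60, 55, 40], [30, 45, 60, 20], [15, 10, 5, 25], batt_max=20))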
Fan filters, the 3-D Radon transform, and image sequence analysis.
Marzetta, T L
1994-01-01
This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
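The velocity-slowness relationship stated above can be written compactly. Assuming the usual convention that a plane wave with slowness vector s represents objects moving with velocity v, and writing v_c for the cut-off speed:

    \mathbf{s}\cdot\mathbf{v} = 1 \;\Rightarrow\; |\mathbf{s}| \ge \frac{1}{|\mathbf{v}|},
    \qquad \text{so rejecting all plane waves with } |\mathbf{s}| > \frac{1}{v_c}
    \text{ rejects every object with } |\mathbf{v}| < v_c,\ \text{irrespective of heading.}

Objects significantly faster than v_c still contribute some plane waves with slowness magnitude above 1/v_c (those far from the object's heading), which is consistent with the abstract's statement that such objects suffer only minor attenuation rather than being preserved perfectly.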
Illusory object motion in the centre of a radial pattern: The Pursuit-Pursuing illusion.
Ito, Hiroyuki
2012-01-01
A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed.
Interpretation of the function of the striate cortex
NASA Astrophysics Data System (ADS)
Garner, Bernardette M.; Paplinski, Andrew P.
2000-04-01
Biological neural networks do not require retraining every time objects move in the visual field. Conventional computer neural networks do not share this shift-invariance. The brain compensates for movements in the head, body, eyes and objects by allowing the sensory data to be tracked across the visual field. The neurons in the striate cortex respond to objects moving across the field of vision, as seen in many experiments. It is proposed that the neurons in the striate cortex allow the continuous angle changes needed to compensate for changes in orientation of the head, eyes and the motion of objects in the field of vision. It is hypothesized that the neurons in the striate cortex form a system that allows for the translation, some rotation and scaling of objects and provides a continuity of objects as they move relative to other objects. The neurons in the striate cortex respond to features which are fundamental to sight, such as orientation of lines, direction of motion, color and contrast. The neurons that respond to these features are arranged on the cortex in a way that depends on the features they are responding to and on the area of the retina from which they receive their inputs.
Exhausting Attentional Tracking Resources with a Single Fast-Moving Object
ERIC Educational Resources Information Center
Holcombe, Alex O.; Chen, Wei-Ying
2012-01-01
Driving on a busy road, eluding a group of predators, or playing a team sport involves keeping track of multiple moving objects. In typical laboratory tasks, the number of visual targets that humans can track is about four. Three types of theories have been advanced to explain this limit. The fixed-limit theory posits a set number of attentional…
Algorithms for detection of objects in image sequences captured from an airborne imaging system
NASA Technical Reports Server (NTRS)
Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak
1995-01-01
This research was initiated as a part of the effort at the NASA Ames Research Center to design a computer-vision-based system that can enhance the safety of navigation by aiding the pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to the independently moving objects are segmented from the background by applying constraint filtering on the optical flow computed from the initial few frames of the sequence. These detected regions are tracked over subsequent frames using a model based tracking algorithm. Position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms are tested using the NASA line image sequence with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the above algorithm are identified and possible solutions to build a practical working system are investigated.
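As a simplified illustration of the final estimation step described above, the sketch below runs a linear constant-velocity Kalman filter on 2-D position measurements. The paper uses an extended Kalman filter in world coordinates; the matrices, noise levels and toy measurements here are assumptions for illustration only.

    import numpy as np

    dt = 1.0
    F = np.array([[1, 0, dt, 0],    # state: [x, y, vx, vy]
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    H = np.array([[1, 0, 0, 0],     # only position is observed
                  [0, 1, 0, 0]], dtype=float)
    Q = 0.01 * np.eye(4)            # process noise (assumed)
    R = 0.5 * np.eye(2)             # measurement noise (assumed)

    def kf_step(x, P, z):
        # Predict with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the measured position z.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(4) - K @ H) @ P
        return x, P

    x, P = np.zeros(4), np.eye(4)
    for z in [np.array([1.0, 0.5]), np.array([2.1, 0.9]), np.array([2.9, 1.6])]:
        x, P = kf_step(x, P, z)
    print("estimated position and velocity:", x)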
Designs and Algorithms to Map Eye Tracking Data with Dynamic Multielement Moving Objects.
Kang, Ziho; Mandal, Saptarshi; Crutchfield, Jerry; Millan, Angel; McClung, Sarah N
2016-01-01
Design concepts and algorithms were developed to address the eye tracking analysis issues that arise when (1) participants interrogate dynamic multielement objects that can overlap on the display and (2) visual angle error of the eye trackers is incapable of providing exact eye fixation coordinates. These issues were addressed by (1) developing dynamic areas of interests (AOIs) in the form of either convex or rectangular shapes to represent the moving and shape-changing multielement objects, (2) introducing the concept of AOI gap tolerance (AGT) that controls the size of the AOIs to address the overlapping and visual angle error issues, and (3) finding a near optimal AGT value. The approach was tested in the context of air traffic control (ATC) operations where air traffic controller specialists (ATCSs) interrogated multiple moving aircraft on a radar display to detect and control the aircraft for the purpose of maintaining safe and expeditious air transportation. In addition, we show how eye tracking analysis results can differ based on how we define dynamic AOIs to determine eye fixations on moving objects. The results serve as a framework to more accurately analyze eye tracking data and to better support the analysis of human performance.
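A minimal sketch of the rectangular dynamic-AOI idea with a gap-tolerance margin is given below; the way the AGT expands the bounding box, and the pixel values used, are assumptions for illustration rather than the authors' exact algorithm.

    # Rectangular dynamic AOI expanded by an assumed gap-tolerance margin (AGT);
    # a fixation is assigned to the AOI if it falls inside the expanded box.
    def aoi_contains(fix_x, fix_y, box, agt):
        """box = (xmin, ymin, xmax, ymax) around a moving aircraft at one frame."""
        xmin, ymin, xmax, ymax = box
        return (xmin - agt <= fix_x <= xmax + agt) and (ymin - agt <= fix_y <= ymax + agt)

    # Example: a fixation just outside the raw box is still captured with AGT = 5 px.
    print(aoi_contains(104, 52, (20, 10, 100, 50), agt=5))   # True
    print(aoi_contains(104, 52, (20, 10, 100, 50), agt=0))   # False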
Suemitsu, Yoshikazu; Nara, Shigetoshi
2004-09-01
Chaotic dynamics introduced into a neural network model is applied to solving two-dimensional mazes, which are ill-posed problems. A moving object moves from its position at time t to t + 1 according to a simply defined motion function calculated from the firing patterns of the neural network model at each time step t. We have embedded several prototype attractors that correspond to the simple motion of the object orienting toward several directions in two-dimensional space in our neural network model. Introducing chaotic dynamics into the network gives outputs sampled from intermediate state points between embedded attractors in a state space, and these dynamics enable the object to move in various directions. System parameter switching between a chaotic and an attractor regime in the state space of the neural network enables the object to move to a set target in a two-dimensional maze. Results of computer simulations show that the success rate for this method over 300 trials is higher than that of a random walk. To investigate why the proposed method gives better performance, we calculate and discuss statistical data with respect to dynamical structure.
ERIC Educational Resources Information Center
Garnett, Bernice R.; Becker, Kelly; Vierling, Danielle; Gleason, Cara; DiCenzo, Danielle; Mongeon, Louise
2017-01-01
Objective: Less than half of young people in the USA are meeting the daily physical activity requirements of at least 60 minutes of moderate or vigorous physical activity. A mixed-methods pilot feasibility assessment of "Move it Move it!" was conducted in the Spring of 2014 to assess the impact of a before-school physical activity…
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit to provide automated and continuous monitoring of deep space with high refresh rates, and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field of view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
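The PCA filtering idea described above might be sketched as follows on a stack of frames assumed to be registered to the celestial frame; the array shapes, the single removed component and the random stand-in data are assumptions, not the WASSS processing chain itself.

    import numpy as np

    def remove_static_background(frames, n_components=1):
        """frames: array of shape (n_frames, height, width), registered to the stars."""
        n, h, w = frames.shape
        X = frames.reshape(n, h * w)
        Xc = X - X.mean(axis=0)
        # Principal components via SVD; the leading components capture the common
        # celestial background and slow systematic variations across the stack.
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        background = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
        residual = Xc - background
        return residual.reshape(n, h, w)   # residuals contain objects moving relative to the stars

    # Toy usage with random data standing in for registered star-field frames.
    frames = np.random.rand(20, 64, 64)
    print(remove_static_background(frames).shape)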
MoveU? Assessing a Social Marketing Campaign to Promote Physical Activity
ERIC Educational Resources Information Center
Scarapicchia, Tanya M. F.; Sabiston, Catherine M. F.; Brownrigg, Michelle; Blackburn-Evans, Althea; Cressy, Jill; Robb, Janine; Faulkner, Guy E. J.
2015-01-01
Objective: MoveU is a social marketing initiative aimed at increasing moderate-to-vigorous physical activity (MVPA) among undergraduate students. Using the Hierarchy of Effects model (HOEM), this study identified awareness of MoveU and examined associations between awareness, outcome expectations, self-efficacy, intentions, and MVPA. Participants:…
Psychovisual masks and intelligent streaming RTP techniques for the MPEG-4 standard
NASA Astrophysics Data System (ADS)
Mecocci, Alessandro; Falconi, Francesco
2003-06-01
In today's multimedia audio-video communication systems, data compression plays a fundamental role by reducing bandwidth waste and the costs of infrastructure and equipment. Among the different compression standards, MPEG-4 is becoming more and more accepted and widespread. Even though one of the fundamental aspects of this standard is the possibility of separately coding video objects (i.e. to separate moving objects from the background and adapt the coding strategy to the video content), currently implemented codecs work only at the full-frame level. In this way, many advantages of the flexible MPEG-4 syntax are missed. This lack is due both to the difficulties in properly segmenting moving objects in real scenes (featuring an arbitrary motion of the objects and of the acquisition sensor), and to the current use of these codecs, which are mainly oriented towards the market of DVD backups (a full-frame approach is enough for these applications). In this paper we propose a codec for MPEG-4 real-time object streaming that codes the moving objects and the scene background separately. The proposed codec is capable of adapting its strategy during the transmission, by analysing the video currently transmitted and setting the coder parameters and modalities accordingly. For example, the background can be transmitted as a whole or by dividing it into "slightly detailed" and "highly detailed" zones that are coded in different ways to reduce the bit-rate while preserving the perceived quality. The coder can automatically switch in real time from one modality to the other during the transmission, depending on the current video content. Psychovisual masks and other video-content-based measurements have been used as inputs for a Self Learning Intelligent Controller (SLIC) that changes the parameters and the transmission modalities. The current implementation is based on the ISO 14496 standard code that allows Video Object (VO) transmission (other open source codecs such as DivX, Xvid, and Cisco's Mpeg-4IP have been analyzed but, as of today, they do not support VO). The original code has been deeply modified to integrate the SLIC and to adapt it for real-time streaming. A custom RTP (Real Time Protocol) has been defined and a Client-Server application has been developed. The viewer can decode and demultiplex the stream in real-time, while adapting to the changing modalities adopted by the Server according to the current video content. The proposed codec works as follows: the image background is separated by means of a segmentation module and is transmitted by means of a wavelet compression scheme similar to that used in JPEG2000. The VO are coded separately and multiplexed with the background stream. At the receiver the stream is demultiplexed to obtain the background and the VO, which are subsequently pasted together. The final quality depends on many factors, in particular: the quantization parameters, the Group Of Video Object (GOV) length, the GOV structure (i.e. the number of I-P-B VOP), and the search area for motion compensation. These factors are strongly related to the following measurement parameters (defined during the development): the Objects Apparent Size (OAS) in the scene, the Video Object Incidence factor (VOI), and the temporal correlation (measured through the Normalized Mean SAD, NMSAD). The SLIC module analyzes the currently transmitted video and selects the most appropriate settings by choosing from a predefined set of transmission modalities.
For example, in the case of a highly temporally correlated sequence, the number of B-VOPs is increased to improve the compression ratio. The strategy for the selection of the number of B-VOPs turns out to be very different from that reported in the literature for B-frames (adopted for MPEG-1 and MPEG-2), due to the different behaviour of the temporal correlation when limited only to moving objects. The SLIC module also decides how to transmit the background. In our implementation we adopted the Visual Brain theory, i.e. the study of what the "psychic eye" can get from a scene. According to this theory, a Psychomask Image Analysis (PIA) module has been developed to extract the visually homogeneous regions of the background. The PIA module produces two complementary masks, one for the visually low-variance zones and one for the highly variable zones; these zones are compressed with different strategies and encoded into two multiplexed streams. From practical experiments it turned out that the separate coding is advantageous only if the low-variance zones exceed 50% of the whole background area (due to the overhead given by the need to transmit the zone masks). The SLIC module takes care of deciding the appropriate transmission modality by analyzing the results produced by the PIA module. The main features of this codec are low bitrate, good image quality and coding speed. The current implementation runs in real-time on standard PC platforms, the major limitation being the fixed position of the acquisition sensor. This limitation is due to the difficulties in separating moving objects from the background when the acquisition sensor moves. Our current real-time segmentation module does not produce suitable results if the acquisition sensor moves (only slight oscillatory movements are tolerated). In any case, the system is particularly suitable for telesurveillance applications at low bit-rates, where the camera is usually fixed or alternates among some predetermined positions (our segmentation module is capable of accurately separating moving objects from the static background when the acquisition sensor stops, even if different scenes are seen as a result of the sensor displacements). Moreover, the proposed architecture is general, in the sense that when real-time, robust segmentation systems (capable of separating objects in real time from the background while the sensor itself is moving) become available, they can be easily integrated while leaving the rest of the system unchanged. Experimental results related to real sequences for traffic monitoring and for people tracking and safety control are reported and discussed in depth in the paper. The whole system has been implemented in standard ANSI C code and currently runs on standard PCs under the Microsoft Windows operating system (Windows 2000 Pro and Windows XP).
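A schematic of the two coder decisions described above (more B-VOPs when the temporal correlation is high, split background coding only when low-variance zones exceed half of the background) might look like the following; the NMSAD threshold and the returned settings are illustrative assumptions, not the SLIC module's actual rules.

    # Schematic of the coder-parameter decisions described above; threshold values
    # and returned settings are assumed for illustration only.
    def choose_coding_settings(nmsad, low_variance_fraction):
        settings = {}
        # High temporal correlation (low normalized mean SAD): use more B-VOPs.
        settings["b_vops_per_gov"] = 3 if nmsad < 0.05 else 1
        # Split background coding pays off only when low-variance zones cover >50%.
        settings["split_background"] = low_variance_fraction > 0.5
        return settings

    print(choose_coding_settings(nmsad=0.02, low_variance_fraction=0.7))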
Method and System for Object Recognition Search
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor); Duong, Vu A. (Inventor); Stubberud, Allen R. (Inventor)
2012-01-01
A method for object recognition using shape and color features of the object to be recognized. An adaptive architecture is used to recognize and adapt the shape and color features for moving objects to enable object recognition.
Motion Alters Color Appearance
Hong, Sang-Wook; Kang, Min-Suk
2016-01-01
Chromatic induction compellingly demonstrates that chromatic context as well as spectral lights reflected from an object determines its color appearance. Here, we show that when one colored object moves around an identical stationary object, the perceived saturation of the stationary object decreases dramatically whereas the saturation of the moving object increases. These color appearance shifts in the opposite directions suggest that normalization induced by the object’s motion may mediate the shift in color appearance. We ruled out other plausible alternatives such as local adaptation, attention, and transient neural responses that could explain the color shift without assuming interaction between color and motion processing. These results demonstrate that the motion of an object affects both its own color appearance and the color appearance of a nearby object, suggesting a tight coupling between color and motion processing. PMID:27824098
Direct imaging and new technologies to search for substellar companions around MGs cool dwarfs
NASA Astrophysics Data System (ADS)
Gálvez-Ortiz, M. C.; Clarke, J. R. A.; Pinfield, D. J.; Folkes, S. L.; Jenkins, J. S.; García Pérez, A. E.; Burningham, B.; Day-Jones, A. C.; Jones, H. R. A.
2011-07-01
We describe here our project based on a search for sub-stellar companions (brown dwarfs and exo-planets) around young ultra-cool dwarfs (UCDs) and on characterising their properties. We will use current and future technology (high contrast imaging, high-precision Doppler determinations) from the ground and space (VLT, ELT and JWST), to find companions to young objects. Members of young moving groups (MGs) have clear advantages in this field. We compiled a catalogue of young UCD objects and studied their membership of five known young moving groups: Local Association (Pleiades moving group, 20-150 Myr), Ursa Major group (Sirius supercluster, 300 Myr), Hyades supercluster (600 Myr), IC 2391 supercluster (35 Myr) and Castor moving group (200 Myr). To assess them as members we used different kinematic and spectroscopic criteria.
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that leave an object only partially visible in one camera while the same object is fully visible in another. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously; this is applicable, for example, in wireless sensor networks for surveillance or navigation.
Schwartze, Jonas; Prekazi, Arianit; Schrom, Harald; Marschollek, Michael
2017-01-01
Ambient assisted living (AAL) may support ageing in place but is primarily driven by technology. The aim of this work is to identify reasons for moving into assisted living institutions, their range of services, and their possible substitutability. We conducted semi-structured interviews with five experts from assisted living institutions and used the results to design and implement assistive technologies in an AAL environment using BASIS, a cross-domain bus system for smart buildings. Reasons for moving to assisted living institutions are expected benefits for chronic health problems, safety, social isolation and carefree living. We implemented six application systems for inactivity monitoring, stove shutdown, air quality monitoring, medication and appointment reminders, detection of unwanted situations before leaving and optical ringing of the doorbell. Substitution of selected assisted living services is feasible and has the potential to delay the necessity of moving into an assisted living institution if complementary social services are in place.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winnek, D.F.
A method and apparatus for making X-ray photographs which can be viewed in three dimensions with the use of a lenticular screen. The apparatus includes a linear tomograph having a moving X-ray source on one side of a support on which an object is to be placed so that X-rays can pass through the object to the opposite side of the support. A movable cassette on the opposite side of the support moves in a direction opposite to the direction of travel of the X-ray source as the source moves relative to the support. The cassette has an intensifying screen, a grating mask provided with uniformly spaced slots for passing X-rays, a lenticular member adjacent to the mask, and a photographic emulsion adjacent to the opposite side of the lenticular member. The cassette has a power device for moving the lenticular member and the emulsion relative to the mask a distance equal to the spacing between a pair of adjacent slots in the mask. The X-rays from the source, after passing through an object on the support, pass into the cassette through the slots of the mask and are focused on the photographic emulsion to result in a continuum of X-ray views of the object. When the emulsion is developed and viewed through the lenticular member, the object can be seen in three dimensions.
ERIC Educational Resources Information Center
Pumfrey, Peter
2008-01-01
Is the currently selective UK higher education (HE) system becoming more inclusive? Between 1998/99 and 2004/05, in relation to talented students with disabilities, has the UK government's HE policy implementation moved HE towards achieving two of the government's key HE objectives for 2010? These objectives are: (a) increasing HE participation…
Image registration of naval IR images
NASA Astrophysics Data System (ADS)
Rodland, Arne J.
1996-06-01
In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between sensor and reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points whose movement diverges from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the image such that the output from the algorithm could be compared with the artificially added stabilization errors.
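A hedged sketch of the point-matching idea described above is given below, using OpenCV feature tracking; the median displacement stands in for the estimated stabilization error, and the divergence threshold, feature counts and synthetic test frames are assumptions rather than the paper's implementation.

    import cv2
    import numpy as np

    def estimate_stabilization_error(prev_gray, curr_gray, divergence_px=2.0):
        # Track high-contrast points between frames.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
        good_prev = pts[status.ravel() == 1].reshape(-1, 2)
        good_next = nxt[status.ravel() == 1].reshape(-1, 2)
        disp = good_next - good_prev
        # Most points lie on static ground, so the median displacement estimates
        # the sensor-induced image motion (the stabilization error).
        shift = np.median(disp, axis=0)
        # Points whose motion diverges from the estimated shift are candidate movers.
        moving = np.linalg.norm(disp - shift, axis=1) > divergence_px
        return shift, good_prev[moving]

    # Toy synthetic frames: the second is the first shifted by a few pixels.
    prev = (np.random.rand(240, 320) * 255).astype(np.uint8)
    curr = np.roll(prev, shift=(3, 5), axis=(0, 1))
    print(estimate_stabilization_error(prev, curr)[0])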
Command Wire Sensor Measurements
2012-09-01
coupled with the extremely harsh terrain has meant that few of these techniques have proved robust enough when moved from the laboratory to the field...to image stationary objects and does not accurately image moving targets. Moving targets can be seriously distorted and displaced from their true...battlefield and for imaging of fixed targets. Moving targets can be detected with a SAR if they have a Doppler frequency shift greater than
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and, finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
Portable Test And Monitoring System For Wind-Tunnel Models
NASA Technical Reports Server (NTRS)
Poupard, Charles A.
1987-01-01
Portable system developed to test and monitor instrumentation used in wind-tunnel models. Self-contained and moves easily to model, either before or after model installed in wind tunnel. System is 44 1/2 in. high, 22 in. wide, and 17 in. deep and weighs 100 lb. Primary benefits realized with portable test and monitoring system associated with saving of time.
Congruity Effects in Time and Space: Behavioral and ERP Measures
ERIC Educational Resources Information Center
Teuscher, Ursina; McQuire, Marguerite; Collins, Jennifer; Coulson, Seana
2008-01-01
Two experiments investigated whether motion metaphors for time affected the perception of spatial motion. Participants read sentences either about literal motion through space or metaphorical motion through time written from either the ego-moving or object-moving perspective. Each sentence was followed by a cartoon clip. Smiley-moving clips showed…
Context-aware pattern discovery for moving object trajectories
NASA Astrophysics Data System (ADS)
Sharif, Mohammad; Asghar Alesheikh, Ali; Kaffash Charandabi, Neda
2018-05-01
Movements of point objects are highly sensitive to the underlying situations and conditions during the movement, which are known as contexts. Analyzing movement patterns, while accounting for the contextual information, helps to better understand how point objects behave in various contexts and how contexts affect their trajectories. One potential solution for discovering patterns of moving objects is analyzing the similarities of their trajectories. This article, therefore, contextualizes the similarity measure of trajectories by considering not only their spatial footprints but also a notion of internal and external contexts. The dynamic time warping (DTW) method is employed to assess the multi-dimensional similarities of trajectories. Then, the results of similarity searches are utilized in discovering the relative movement patterns of the moving point objects. Several experiments are conducted on real datasets that were obtained from commercial airplanes and the weather information during the flights. The results confirmed the robustness of the DTW method in quantifying the commonalities of trajectories and discovering movement patterns with 80% accuracy. Moreover, the results revealed the importance of exploiting contextual information because it can both enhance and restrict movements.
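As a minimal illustration of the role DTW plays above, the sketch below computes a multi-dimensional DTW distance between two trajectories whose points carry a contextual attribute alongside position; the Euclidean local cost, the feature layout and the toy data are assumptions rather than the article's exact formulation.

    import numpy as np

    def dtw_distance(traj_a, traj_b):
        """Dynamic time warping over multi-dimensional trajectory points."""
        a, b = np.asarray(traj_a, float), np.asarray(traj_b, float)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local (Euclidean) cost
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # Toy trajectories: columns are (x, y, headwind); the third column lets the
    # warping cost reflect a simple external context as well as the spatial footprint.
    t1 = [(0, 0, 5), (1, 1, 6), (2, 2, 7), (3, 3, 7)]
    t2 = [(0, 0, 5), (1, 1, 5), (1.5, 1.8, 6), (3, 3, 8)]
    print(dtw_distance(t1, t2))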
Tracking moving targets behind a scattering medium via speckle correlation.
Guo, Chengfei; Liu, Jietao; Wu, Tengfei; Zhu, Lei; Shao, Xiaopeng
2018-02-01
Tracking moving targets behind a scattering medium is a challenge, and it has many important applications in various fields. Owing to the multiple scattering, instead of the object image, only a random speckle pattern can be received on the camera when light is passing through highly scattering layers. Significantly, an important feature of speckle patterns has been found: it shows that target information can be derived from the speckle correlation. In this work, inspired by notions used in computer vision and deformation detection, we demonstrate through simulations and experiments a simple object tracking method in which, by using the speckle correlation, the movement of a hidden object can be tracked in both the lateral and axial directions. In addition, the rotation state of the moving target can also be recognized by utilizing the autocorrelation of the speckle pattern. This work will be beneficial for biomedical applications such as quantitative analysis of the working mechanisms of micro-objects and the acquisition of dynamic information about micro-object motion.
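The lateral-tracking part of the correlation idea can be illustrated with a simple FFT-based cross-correlation between two speckle frames, as sketched below; the synthetic frames and the wrap-around handling are assumptions, and the paper's full lateral/axial/rotation tracking is not reproduced.

    import numpy as np

    def correlation_shift(frame_a, frame_b):
        """Estimate the lateral shift of frame_a relative to frame_b from the
        peak of their FFT-based circular cross-correlation."""
        A = np.fft.fft2(frame_a - frame_a.mean())
        B = np.fft.fft2(frame_b - frame_b.mean())
        corr = np.fft.ifft2(A * np.conj(B)).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap indices above half the frame size to negative shifts.
        return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

    speckle = np.random.rand(128, 128)
    moved = np.roll(speckle, shift=(4, -7), axis=(0, 1))
    print(correlation_shift(moved, speckle))   # expected to be close to [4, -7]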
Störmer, Viola S; Winther, Gesche N; Li, Shu-Chen; Andersen, Søren K
2013-03-20
Keeping track of multiple moving objects is an essential ability of visual perception. However, the mechanisms underlying this ability are not well understood. We instructed human observers to track five or seven independent randomly moving target objects amid identical nontargets and recorded steady-state visual evoked potentials (SSVEPs) elicited by these stimuli. Visual processing of moving targets, as assessed by SSVEP amplitudes, was continuously facilitated relative to the processing of identical but irrelevant nontargets. The cortical sources of this enhancement were located to areas including early visual cortex V1-V3 and motion-sensitive area MT, suggesting that the sustained multifocal attentional enhancement during multiple object tracking already operates at hierarchically early stages of visual processing. Consistent with this interpretation, the magnitude of attentional facilitation during tracking in a single trial predicted the speed of target identification at the end of the trial. Together, these findings demonstrate that attention can flexibly and dynamically facilitate the processing of multiple independent object locations in early visual areas and thereby allow for tracking of these objects.
Bangert, M; Gil, H; Oliva, J; Delgado, C; Vega, T; DE Mateo, S; Larrauri, A
2017-03-01
The intensity of annual Spanish influenza activity is currently estimated from historical data of the Spanish Influenza Sentinel Surveillance System (SISSS) using qualitative indicators from the European Influenza Surveillance Network. However, these indicators are subjective, based on qualitative comparison with historical data of influenza-like illness rates. This pilot study assesses the implementation of the Moving Epidemic Method (MEM) intensity levels during the 2014-2015 influenza season within the 17 sentinel networks covered by SISSS, comparing them to historically reported indicators. Intensity levels reported and those obtained with MEM at the epidemic peak of the influenza wave, at both national and regional levels, did not show a statistically significant difference (P = 0·74, Wilcoxon signed-rank test), suggesting that the implementation of MEM would have limited disruptive effects on the dynamics of notification within the surveillance system. MEM allows objective influenza surveillance monitoring and standardization of criteria for comparing the intensity of influenza epidemics in regions in Spain. Following this pilot study, MEM has been adopted to harmonize the reporting of intensity levels of influenza activity in Spain, starting in the 2015-2016 season.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dmitriev, A K; Konovalov, A N; Ul'yanov, V A
2014-04-28
We report an experimental study of the self-mixing effect in a single-mode multifrequency erbium fibre laser when radiation backscattered from an external moving object arrives at its cavity. To eliminate resulting chaotic pulsations in the laser, we have proposed a technique for suppressing backscattered radiation through the use of multimode fibre for radiation delivery. The multifrequency operation of the laser has been shown to lead to strong fluctuations of the amplitude of the Doppler signal and a nonmonotonic variation of the amplitude with distance to the scattering object. In spite of these features, the self-mixing signal was detected with a high signal-to-noise ratio (above 10^2) when the radiation was scattered by a rotating disc, and the Doppler frequency shift, evaluated as the centroid of its spectrum, had high stability (0.15%) and linearity relative to the rotation rate. We conclude that the self-mixing effect in this type of fibre laser can be used for measuring the velocity of scattering objects and in Doppler spectroscopy for monitoring the laser evaporation of materials and biological tissues. (control of laser radiation parameters)
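Evaluating the Doppler shift as the centroid of the signal's spectrum, as mentioned above, can be sketched as follows; the sampling rate, the synthetic 52 kHz tone and the noise level are assumptions for illustration only.

    import numpy as np

    def spectral_centroid(signal, fs):
        """Doppler frequency estimated as the power-weighted mean frequency."""
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        return np.sum(freqs * spectrum) / np.sum(spectrum)

    fs = 1.0e6                                    # 1 MHz sampling (assumed)
    t = np.arange(0, 0.01, 1.0 / fs)
    sig = np.sin(2 * np.pi * 52e3 * t) + 0.05 * np.random.randn(t.size)
    print(spectral_centroid(sig, fs))             # close to the 52 kHz Doppler tone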
A monitoring tool for performance improvement in plastic surgery at the individual level.
Maruthappu, Mahiben; Duclos, Antoine; Orgill, Dennis; Carty, Matthew J
2013-05-01
The assessment of performance in surgery is expanding significantly. Application of relevant frameworks to plastic surgery, however, has been limited. In this article, the authors present two robust graphic tools commonly used in other industries that may serve to monitor individual surgeon operative time while factoring in patient- and surgeon-specific elements. The authors reviewed performance data from all bilateral reduction mammaplasties performed at their institution by eight surgeons between 1995 and 2010. Operative time was used as a proxy for performance. Cumulative sum charts and exponentially weighted moving average charts were generated using a train-test analytic approach, and used to monitor surgical performance. Charts mapped crude, patient case-mix-adjusted, and case-mix and surgical-experience-adjusted performance. Operative time was found to decline from 182 minutes to 118 minutes with surgical experience (p < 0.001). Cumulative sum and exponentially weighted moving average charts were generated using 1995 to 2007 data (1053 procedures) and tested on 2008 to 2010 data (246 procedures). The sensitivity and accuracy of these charts were significantly improved by adjustment for case mix and surgeon experience. The consideration of patient- and surgeon-specific factors is essential for correct interpretation of performance in plastic surgery at the individual surgeon level. Cumulative sum and exponentially weighted moving average charts represent accurate methods of monitoring operative time to control and potentially improve surgeon performance over the course of a career.
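A minimal sketch of the two chart statistics named above, computed on a series of operative times, is given below; the target, smoothing weight and slack values are illustrative assumptions, and the case-mix and experience adjustments described in the article are omitted.

    def ewma(times, lam=0.2):
        """Exponentially weighted moving average: z_t = lam*x_t + (1-lam)*z_{t-1}."""
        z, out = times[0], []
        for x in times:
            z = lam * x + (1 - lam) * z
            out.append(z)
        return out

    def cusum_upper(times, target=150.0, slack=10.0):
        """One-sided CUSUM that accumulates sustained overruns above target+slack."""
        s, out = 0.0, []
        for x in times:
            s = max(0.0, s + (x - target - slack))
            out.append(s)
        return out

    # Hypothetical operative times (minutes) declining with experience.
    times = [182, 175, 168, 160, 150, 140, 132, 125, 118]
    print(ewma(times))
    print(cusum_upper(times))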
NASA Astrophysics Data System (ADS)
Gohatre, Umakant Bhaskar; Patil, Venkat P.
2018-04-01
In computer vision, real-time detection and tracking of multiple objects is an important research field that has attracted considerable attention in recent years for finding non-stationary entities in image sequences. Object detection is the step that precedes following a moving object through a video, and object representation is the step that supports tracking; recognizing multiple objects in a video sequence remains a challenging task. Image registration has long been used as a basis for detecting multiple moving objects: registration finds correspondences between consecutive frame pairs on the basis of image appearance under rigid and affine transformations. However, image registration is not well suited to handling events that can result in potentially missed objects. To address such problems, this paper proposes a novel approach. Video frames are segmented using a region adjacency graph of visual appearance and geometric properties; matching between graph sequences is then performed using multi-graph matching, and matched regions are labeled by a proposed graph coloring algorithm that assigns a foreground label to each respective region. The proposed design is robust to unknown transformations and offers significant improvement over existing work on real-time detection of multiple moving objects.
Illusory object motion in the centre of a radial pattern: The Pursuit–Pursuing illusion
Ito, Hiroyuki
2012-01-01
A circular object placed in the centre of a radial pattern consisting of thin sectors was found to cause a robust motion illusion. During eye-movement pursuit of a moving target, the presently described stimulus produced illusory background-object motion in the same direction as that of the eye movement. In addition, the display induced illusory stationary perception of a moving object against the whole display motion. In seven experiments, the characteristics of the illusion were examined in terms of luminance relationships and figural characteristics of the radial pattern. Some potential explanations for these findings are discussed. PMID:23145267
Heterogeneous CPU-GPU moving targets detection for UAV video
NASA Astrophysics Data System (ADS)
Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan
2017-07-01
Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried by UAVs. The pixels belonging to moving targets in HD video taken by a UAV are always a small minority, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of detection algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, and the average processing time is 52.16 ms per frame, which is fast enough for real-time operation.
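A CPU-only sketch of the two processing steps named above, background registration followed by frame differencing, using OpenCV; the feature detector, RANSAC threshold, binarization threshold and file name are assumptions, and the CPU-GPU partitioning of the actual system is not reproduced here.

```python
# Register the previous frame to the current one to cancel camera motion,
# then take a frame difference to expose small moving targets.
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_clip.mp4")     # hypothetical input
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create(1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background registration: estimate a homography from previous to current frame
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(gray, None)
    if d1 is None or d2 is None:
        prev_gray = gray
        continue
    matches = bf.match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped_prev = cv2.warpPerspective(prev_gray, H, gray.shape[::-1])

    # Frame difference on the motion-compensated pair, then a fixed threshold;
    # connected-component analysis would follow to extract target candidates.
    diff = cv2.absdiff(gray, warped_prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    prev_gray = gray
```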
The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery.
Edmunds, David M; Bashforth, Sophie E; Tahavori, Fatemeh; Wells, Kevin; Donovan, Ellen M
2016-11-08
Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors was feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable which replaced the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as the object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm after distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment. © 2016 The Authors.
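For readers unfamiliar with the accuracy figures quoted, the sketch below shows how a root-mean-squared tracking error can be computed from a recorded distance trace against a reference trajectory; both traces here are simulated, not the study's measurements.

```python
# RMS error between a measured distance trace and a reference trajectory.
import numpy as np

t = np.arange(0, 30, 1 / 30)                              # 30 s at 30 fps
reference = 1000 + 20 * np.sin(2 * np.pi * 0.25 * t)      # mm, sinusoidal-like motion
measured = reference + np.random.normal(0, 1.2, t.size)   # mm, simulated sensor noise

rmse = np.sqrt(np.mean((measured - reference) ** 2))
print(f"RMS tracking error: {rmse:.2f} mm")
```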
Plug-and-play web-based visualization of mobile air monitoring data
The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data r...
EPA's mobile monitoring of source emissions and near-source impact
Real-time ambient monitoring onboard a moving vehicle is a unique data collection approach applied to characterize large-area sources, such as major roadways, and detect fugitive emissions from distributed sources, such as leaking oil wells. EPA's Office of Research and Developme...
Speed skills: measuring the visual speed analyzing properties of primate MT neurons.
Perrone, J A; Thiele, A
2001-05-01
Knowing the direction and speed of moving objects is often critical for survival. However, it is poorly understood how cortical neurons process the speed of image movement. Here we tested MT neurons using moving sine-wave gratings of different spatial and temporal frequencies, and mapped out the neurons' spatiotemporal frequency response profiles. The maps typically had oriented ridges of peak sensitivity, as expected for speed-tuned neurons. The preferred speed estimate, derived from the orientation of the maps, corresponded well to the preferred speed when moving bars were presented. Thus, our data demonstrate that MT neurons are truly sensitive to object speed. These findings indicate that MT is a key structure not only in the analysis of motion direction and depth perception, but also in the analysis of object speed.
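The link between an oriented ridge in the spatiotemporal frequency map and a preferred speed (speed = temporal frequency / spatial frequency) can be sketched as follows; the response map is synthetic and the response-weighted estimate is only one plausible way to read off a preferred speed, not the study's fitting procedure.

```python
# For a speed-tuned neuron the response map over spatial frequency (SF) and
# temporal frequency (TF) has a ridge along TF = speed * SF, so a preferred
# speed can be estimated as a response-weighted average of TF/SF.
import numpy as np

sf = np.geomspace(0.25, 8, 32)           # cycles/deg
tf = np.geomspace(0.5, 32, 32)           # Hz
SF, TF = np.meshgrid(sf, tf)

true_speed = 4.0                         # deg/s, for the synthetic neuron
# Gaussian ridge around log(TF) = log(speed) + log(SF)
resp = np.exp(-((np.log2(TF) - np.log2(true_speed * SF)) ** 2) / (2 * 0.5 ** 2))

# Response-weighted average of log2(TF/SF) gives the preferred speed estimate
w = resp / resp.sum()
pref_speed = 2 ** np.sum(w * np.log2(TF / SF))
print(f"estimated preferred speed: {pref_speed:.2f} deg/s")
```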
Motion streaks do not influence the perceived position of stationary flashed objects.
Pavan, Andrea; Bellacosa Marotti, Rosilari
2012-01-01
In the present study, we investigated whether motion streaks, produced by fast-moving dots (Geisler, 1999), distort the positional map of stationary flashed objects, producing the well-known motion-induced position shift illusion (MIPS). The illusion relies on motion-processing mechanisms that induce local distortions in the positional map of the stimulus, which is derived by shape-processing mechanisms. To measure the MIPS, two horizontally offset Gaussian blobs, placed above and below a central fixation point, were flashed over two fields of dots moving in opposite directions. Subjects judged the position of the top Gaussian blob relative to the bottom one. The results showed that neither fast (motion streaks) nor slow moving dots influenced the perceived spatial position of the stationary flashed objects, suggesting that background motion does not interact with the shape-processing mechanisms involved in MIPS.
NASA Astrophysics Data System (ADS)
Di, Si; Lin, Hui; Du, Ruxu
2011-05-01
Displacement measurement of moving objects is one of the most important issues in the field of computer vision. This paper introduces a new binocular vision system (BVS) based on micro-electro-mechanical system (MEMS) technology. The eyes of the system are two microlenses fabricated on a substrate by MEMS technology. The imaging results of two microlenses are collected by one complementary metal-oxide-semiconductor (CMOS) array. An algorithm is developed for computing the displacement. Experimental results show that as long as the object is moving in two-dimensional (2D) space, the system can effectively estimate the 2D displacement without camera calibration. It is also shown that the average error of the displacement measurement is about 3.5% at different object distances ranging from 10 cm to 35 cm. Because of its low cost, small size and simple setting, this new method is particularly suitable for 2D displacement measurement applications such as vision-based electronics assembly and biomedical cell culture.
An Open Source Low-Cost Wireless Control System for a Forced Circulation Solar Plant
Salamone, Francesco; Belussi, Lorenzo; Danza, Ludovico; Ghellere, Matteo; Meroni, Italo
2015-01-01
The article describes the design phase, development and practical application of a low-cost control system for a forced circulation solar plant in an outdoor test cell located near Milan. Such a system provides for the use of an electric pump for the circulation of heat transfer fluid connecting the solar thermal panel to the storage tank. The running plant temperatures are the fundamental parameters for evaluating system performance and proper operation, and the control and management system has to take them into account. A solar energy-powered wireless-based smart object was developed, able to monitor the running temperatures of a solar thermal system and aimed at moving beyond standard monitoring approaches to achieve a low-cost and customizable device, even in terms of installation in different environmental conditions. To this end, two types of communication were used: the first is a low-cost communication based on the ZigBee protocol used for control purposes, so that it can be customized according to specific needs, while the second is based on a Bluetooth protocol used for data display. PMID:26556356
FROM THE HISTORY OF PHYSICS: Georgii L'vovich Shnirman: designer of fast-response instruments
NASA Astrophysics Data System (ADS)
Bashilov, I. P.
1994-07-01
A biography is given of the outstanding Russian scientist Georgii L'vovich Shnirman, whose scientific life had been 'top secret'. He was an experimental physicist and instrument designer, the founder of many branches of the Soviet instrument-making industry, the originator of a theory of electric methods of integration and differentiation, a theory of astasisation of pendulums, and also of original measurement methods. He was the originator and designer of automatic systems for the control of the measuring apparatus used at nuclear test sites and of automatic seismic station systems employed in monitoring nuclear tests. He also designed the first loop oscilloscopes in the Soviet Union, high-speed photographic and cine cameras (streak cameras, etc.), and many other unique instruments, including some mounted on moving objects.
2005-12-16
KENNEDY SPACE CENTER, FLA. - In the Payload Hazardous Servicing Facility, technicians monitor New Horizons as it is lowered onto a transporter for its move to Complex 41 on Cape Canaveral Air Force Station. New Horizons carries seven scientific instruments that will characterize the global geology and geomorphology of Pluto and its moon Charon, map their surface compositions and temperatures, and examine Pluto's complex atmosphere. After that, flybys of Kuiper Belt objects from even farther in the solar system may be undertaken in an extended mission. New Horizons is the first mission in NASA's New Frontiers program of medium-class planetary missions. The spacecraft, designed for NASA by the Johns Hopkins University Applied Physics Laboratory in Laurel, Md., will launch aboard a Lockheed Martin Atlas V rocket and fly by Pluto and Charon as early as summer 2015.
Position and orientation determination system and method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harpring, Lawrence J.; Farfan, Eduardo B.; Gordon, John R.
A position determination system and method is provided that may be used for obtaining position and orientation information of a detector in a contaminated room. The system includes a detector, a sensor operably coupled to the detector, and a motor coupled to the sensor to move the sensor around the detector. A CPU controls the operation of the motor to move the sensor around the detector and determines distance and angle data from the sensor to an object. The method includes moving a sensor around the detector and measuring distance and angle data from the sensor to an object at incremental positions around the detector.
Coordination of multiple robot arms
NASA Technical Reports Server (NTRS)
Barker, L. K.; Soloway, D.
1987-01-01
Kinematic resolved-rate control from one robot arm is extended to the coordinated control of multiple robot arms in the movement of an object. The structure supports the general movement of one axis system (moving reference frame) with respect to another axis system (control reference frame) by one or more robot arms. The grippers of the robot arms do not have to be parallel or at any pre-disposed positions on the object. For multiarm control, the operator chooses the same moving and control reference frames for each of the robot arms. Consequently, each arm then moves as though it were carrying out the commanded motions by itself.
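A minimal sketch of the resolved-rate idea described above: a single commanded object twist, expressed in a shared control frame, is mapped to joint rates for each arm through that arm's Jacobian pseudoinverse. The Jacobians, grasp offsets and degrees of freedom below are placeholders, not the paper's manipulator model.

```python
# Resolved-rate coordination sketch: the same commanded object twist is
# converted to the twist required at each arm's grasp point, then to joint
# rates via the Jacobian pseudoinverse.
import numpy as np

def grasp_twist(object_twist, r):
    """Twist required at a grasp point offset r from the object frame origin."""
    v, w = object_twist[:3], object_twist[3:]
    return np.concatenate([v + np.cross(w, r), w])

object_twist = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.1])   # commanded m/s, rad/s

arms = [
    {"J": np.random.randn(6, 7), "r": np.array([0.1, 0.0, 0.0])},   # placeholder 7-DOF arm
    {"J": np.random.randn(6, 6), "r": np.array([-0.1, 0.0, 0.0])},  # placeholder 6-DOF arm
]

for i, arm in enumerate(arms):
    qdot = np.linalg.pinv(arm["J"]) @ grasp_twist(object_twist, arm["r"])
    print(f"arm {i} joint rates:", np.round(qdot, 3))
```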
1996-06-20
Engineers at one of MSFC's vacuum chambers begin testing a microthruster model. The purpose of these tests is to collect sufficient data to enable NASA to develop microthrusters that will move the Space Shuttle, a future space station, or any other space-related vehicle with the least amount of expended energy. When something is sent into outer space, the forces that try to pull it back to Earth (gravity) are very small, so only a very small force is required to move very large objects. In space, a force equivalent to the weight of a paperclip can move an object as large as a car. Microthrusters are used to produce these small forces.
NASA Astrophysics Data System (ADS)
Kaplan, M. L.; van Cleve, J. E.; Alcock, C.
2003-12-01
Detection and characterization of the small bodies of the outer solar system presents unique challenges to terrestrial based sensing systems, principally the inverse 4th power decrease of reflected and thermal signals with target distance from the Sun. These limits are surpassed by new techniques [1,2,3] employing star-object occultation event sensing, which are capable of detecting sub-kilometer objects in the Kuiper Belt and Oort cloud. This poster will present an instrument and space mission concept based on adaptations of the NASA Discovery Kepler program currently in development at Ball Aerospace and Technologies Corp. Instrument technologies to enable this space science mission are being pursued and will be described. In particular, key attributes of an optimized payload include the ability to provide: 1) Coarse spectral resolution (using an objective spectrometer approach) 2) Wide FOV, simultaneous object monitoring (up to 150,000 stars employing select data regions within a large focal plane mosaic) 3) Fast temporal frame integration and readout architectures (10 to 50 msec for each monitored object) 4) Real-time, intelligent change detection processing (to limit raw data volumes) The Minor Body Surveyor combines the focal plane and processing technology elements into a densely packaged format to support general space mission issues of mass and power consumption, as well as telemetry resources. Mode flexibility is incorporated into the real-time processing elements to allow for either temporal (Occultations) or spatial (Moving targets) change detection. In addition, a basic image capture mode is provided for general pointing and field reference measurements. The overall space mission architecture is described as well. [1] M. E. Bailey. Can 'Invisible' Bodies be Observed in the Solar System. Nature, 259:290-+, January 1976. [2] T. S. Axelrod, C. Alcock, K. H. Cook, and H.-S. Park. A Direct Census of the Oort Cloud with a Robotic Telescope. In ASP Conf. Ser. 34: Robotic Telescopes in the 1990s, pages 171-181, 1992. [3] F. Roques and M. Moncuquet. A Detection Method for Small Kuiper Belt Objects: The Search for Stellar Occultations. Icarus, 147:530-544, October 2000.
The influence of visual motion on interceptive actions and perception.
Marinovic, Welber; Plooy, Annaliese M; Arnold, Derek H
2012-05-01
Visual information is an essential guide when interacting with moving objects, yet it can also be deceiving. For instance, motion can induce illusory position shifts, such that a moving ball can seem to have bounced past its true point of contact with the ground. Some evidence suggests illusory motion-induced position shifts bias pointing tasks to a greater extent than they do perceptual judgments. This, however, appears at odds with other findings and with our success when intercepting moving objects. Here we examined the accuracy of interceptive movements and of perceptual judgments in relation to simulated bounces. Participants were asked to intercept a moving disc at its bounce location by positioning a virtual paddle, and then to report where the disc had landed. Results showed that interceptive actions were accurate whereas perceptual judgments were inaccurate, biased in the direction of motion. Successful interceptions necessitated accurate information concerning both the location and timing of the bounce, so motor planning evidently had privileged access to an accurate forward model of bounce timing and location. This would explain why people can be accurate when intercepting a moving object, but lack insight into the accurate information that had guided their actions when asked to make a perceptual judgment. Copyright © 2012 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Podgursky, Michael; Ehlert, Mark; Lindsay, Jim; Wan, Yinmei
2016-01-01
Education leaders have expressed concern about educators' moving to different schools--within the same state or in another state--because these moves create costs for the home district and have potential impacts on the equitable distribution of effective educators among schools. However, many states do not routinely monitor mobility among…
Is There a Role for Educational Psychologists in Facilitating Managed Moves?
ERIC Educational Resources Information Center
Bagley, Christopher; Hallam, Susan
2017-01-01
The current research aimed to explore the extent to which school professionals and local authority staff perceived that there was a role for educational psychologists in the processes involved in implementing, monitoring and offering support to young people for whom a managed move was being arranged. The study was conducted in one English local…
Space-based visual attention: a marker of immature selective attention in toddlers?
Rivière, James; Brisson, Julie
2014-11-01
Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.
Gaze control for an active camera system by modeling human pursuit eye movements
NASA Astrophysics Data System (ADS)
Toelg, Sebastian
1992-11-01
The ability to stabilize the image of one moving object in the presence of others by active movements of the visual sensor is an essential task for biological systems, as well as for autonomous mobile robots. An algorithm is presented that evaluates the necessary movements from acquired visual data and controls an active camera system (ACS) in a feedback loop. No a priori assumptions about the visual scene and objects are needed. The algorithm is based on functional models of human pursuit eye movements and is to a large extent influenced by structural principles of neural information processing. An intrinsic object definition based on the homogeneity of the optical flow field of relevant objects, i.e., those moving mainly fronto-parallel, is used. Velocity and spatial information are processed in separate pathways, resulting in either smooth or saccadic sensor movements. The program generates a dynamic shape model of the moving object and focuses its attention on regions where the object is expected. The system proved to behave in a stable manner under real-time conditions in complex natural environments and manages general object motion. In addition, it exhibits several interesting abilities well known from psychophysics, such as catch-up saccades, grouping due to coherent motion, and optokinetic nystagmus.
Real-time reliability measure-driven multi-hypothesis tracking using 2D and 3D features
NASA Astrophysics Data System (ADS)
Zúñiga, Marcos D.; Brémond, François; Thonnat, Monique
2011-12-01
We propose a new multi-target tracking approach, which is able to reliably track multiple objects even with poor segmentation results due to noisy environments. The approach takes advantage of a new dual object model combining 2D and 3D features through reliability measures. In order to obtain these 3D features, a new classifier associates with each moving region an object class label (e.g. person, vehicle), a parallelepiped model and visual reliability measures of its attributes. These reliability measures make it possible to properly weight the contribution of noisy, erroneous or false data in order to better maintain the integrity of the object dynamics model. Then, a new multi-target tracking algorithm uses these object descriptions to generate tracking hypotheses about the objects moving in the scene. This tracking approach is able to manage many-to-many visual target correspondences. To achieve this, the algorithm takes advantage of 3D models for merging dissociated visual evidence (moving regions) potentially corresponding to the same real object, according to previously obtained information. The tracking approach has been validated using publicly accessible video surveillance benchmarks. The approach runs in real time and the results are competitive with other tracking algorithms, with minimal (or no) reconfiguration effort between different videos.
Kim, Seung-Cheol; Dong, Xiao-Bin; Kwon, Min-Woo; Kim, Eun-Soo
2013-05-06
A novel approach for fast generation of video holograms of three-dimensional (3-D) moving objects using a motion compensation-based novel-look-up-table (MC-N-LUT) method is proposed. Motion compensation has been widely employed in compression of conventional 2-D video data because of its ability to exploit high temporal correlation between successive video frames. Here, this concept of motion compensation is applied for the first time to the N-LUT, based on its inherent property of shift-invariance. That is, motion vectors of 3-D moving objects are extracted between two consecutive video frames, and with them the motions of the 3-D objects at each frame are compensated. Through this process, the 3-D object data to be calculated for the video holograms are massively reduced, which results in a dramatic increase of the computational speed of the proposed method. Experimental results with three kinds of 3-D video scenarios reveal that the average number of calculated object points and the average calculation time for one object point of the proposed method were found to be reduced down to 86.95% and 86.53%, and 34.99% and 32.30%, respectively, compared with those of the conventional N-LUT and temporal redundancy-based N-LUT (TR-N-LUT) methods.
A landmark effect in the perceived displacement of objects.
Higgins, J Stephen; Wang, Ranxiao Frances
2010-01-01
Perceiving the displacement of an object after a visual distraction is an essential ability to interact with the world. Previous research has shown a bias to perceive the first object seen after a saccade as stable while the second one moving (landmark effect). The present study examines the generality and nature of this phenomenon. The landmark effect was observed in the absence of eye movements, when the two objects were obscured by a blank screen, a moving-pattern mask, or simply disappeared briefly before reappearing one after the other. The first reappearing object was not required to remain visible while the second object reappeared to induce the bias. The perceived direction of the displacement was mainly determined by the relative displacement of the two objects, suggesting that the landmark effect is primarily due to a landmark calibration mechanism.
2009-12-01
facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. We use the fact that all the... a moving platform, we will have to naturally and effectively handle obvious motion parallax and object occlusions in order to be able to detect... facilitating reliable stereo matching, occlusion handling, accurate 3D reconstruction and robust moving target detection. Based on the above two
Real-Time Motion Tracking for Indoor Moving Sphere Objects with a LiDAR Sensor.
Huang, Lvwen; Chen, Siyuan; Zhang, Jianfeng; Cheng, Bang; Liu, Mingqing
2017-08-23
Object tracking is a crucial research subfield in computer vision and has wide applications in navigation, robotics, military systems and other areas. In this paper, real-time visualization of 3D point cloud data from the VLP-16 3D Light Detection and Ranging (LiDAR) sensor is achieved, and on the basis of preprocessing, fast ground segmentation, Euclidean clustering segmentation of outliers, View Feature Histogram (VFH) feature extraction, object model building and searching and matching of a moving spherical target, the Kalman filter and an adaptive particle filter are used to estimate the position of the moving spherical target in real time. The experimental results show that the Kalman filter has the advantage of high efficiency while the adaptive particle filter has the advantages of high robustness and high precision when tested and validated on three kinds of scenes under conditions of partial target occlusion and interference, different moving speeds and different trajectories. The research can be applied to fruit identification and tracking in natural environments, robot navigation and control, and other fields.
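A minimal constant-velocity Kalman filter of the kind mentioned above, here applied to simulated 3D centroid measurements of a tracked sphere; the noise covariances, frame rate and measurement sequence are assumptions, not values from the paper.

```python
# Constant-velocity Kalman filter tracking the 3D centroid of a segmented
# sphere from successive (simulated) LiDAR frames.
import numpy as np

dt = 0.1                                   # frame interval, s
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)  # state: [x y z vx vy vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])
Q = 0.01 * np.eye(6)                       # process noise (assumed)
R = 0.05 ** 2 * np.eye(3)                  # measurement noise (assumed)

x = np.zeros(6)
P = np.eye(6)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured centroid z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P

# Fake centroid measurements of a sphere rolling along x
for k in range(20):
    z = np.array([0.1 * k, 0.0, 0.2]) + np.random.normal(0, 0.05, 3)
    x, P = kf_step(x, P, z)
print("estimated position:", np.round(x[:3], 3), "velocity:", np.round(x[3:], 3))
```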
2016-01-01
Particle therapy of moving targets is still a great challenge. The motion of organs situated in the thorax and abdomen strongly affects the precision of proton and carbon ion radiotherapy. The motion is responsible for not only the dislocation of the tumour but also the alterations in the internal density along the beam path, which influence the range of particle beams. Furthermore, in case of pencil beam scanning, there is an interference between the target movement and dynamic beam delivery. This review presents the strategies for tumour motion monitoring and moving target irradiation in the context of hadron therapy. Methods enabling the direct determination of tumour position (fluoroscopic imaging of implanted radio-opaque fiducial markers, electromagnetic detection of inserted transponders and ultrasonic tumour localization systems) are presented. Attention is also drawn to the techniques which use external surrogate motion for an indirect estimation of target displacement during irradiation. The role of respiratory-correlated CT [four-dimensional CT (4DCT)] in the determination of motion pattern prior to the particle treatment is also considered. An essential part of the article is the review of the main approaches to moving target irradiation in hadron therapy: gating, rescanning (repainting), gated rescanning and tumour tracking. The advantages, drawbacks and development trends of these methods are discussed. The new accelerators, called “cyclinacs”, are presented, because their application to particle therapy will allow making a breakthrough in the 4D spot scanning treatment of moving organs. PMID:27376637
Target-locking acquisition with real-time confocal (TARC) microscopy.
Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A
2007-07-09
We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Real-time Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells free to move in three dimensions, for several hours. During this time, both the colloidal clusters and live cells move distances several times the length of the imaging volume.
Hogendoorn, Hinze; Burkitt, Anthony N
2018-05-01
Due to the delays inherent in neuronal transmission, our awareness of sensory events necessarily lags behind the occurrence of those events in the world. If the visual system did not compensate for these delays, we would consistently mislocalize moving objects behind their actual position. Anticipatory mechanisms that might compensate for these delays have been reported in animals, and such mechanisms have also been hypothesized to underlie perceptual effects in humans such as the Flash-Lag Effect. However, to date no direct physiological evidence for anticipatory mechanisms has been found in humans. Here, we apply multivariate pattern classification to time-resolved EEG data to investigate anticipatory coding of object position in humans. By comparing the time-course of neural position representation for objects in both random and predictable apparent motion, we isolated anticipatory mechanisms that could compensate for neural delays when motion trajectories were predictable. As well as revealing an early neural position representation (lag 80-90 ms) that was unaffected by the predictability of the object's trajectory, we demonstrate a second neural position representation at 140-150 ms that was distinct from the first, and that was pre-activated ahead of the moving object when it moved on a predictable trajectory. The latency advantage for predictable motion was approximately 16 ± 2 ms. To our knowledge, this provides the first direct experimental neurophysiological evidence of anticipatory coding in human vision, revealing the time-course of predictive mechanisms without using a spatial proxy for time. The results are numerically consistent with earlier animal work, and suggest that current models of spatial predictive coding in visual cortex can be effectively extended into the temporal domain. Copyright © 2018 Elsevier Inc. All rights reserved.
Background: In this commentary we present the findings from an international consortium on fish toxicogenomics sponsored by the UK Natural Environment Research Council (NERC) with a remit of moving omic technologies into chemical risk assessment and environmental monitoring. Obj...
Woskov, Paul P.; Hadidi, Kamal
2003-01-01
In embodiments, a spectroscopic monitor analyzes modulated light signals to detect low levels of contaminants and other compounds in the presence of background interference. The monitor uses a spectrometer that includes a transmissive modulator capable of causing different frequency ranges to move onto and off of the detector. The different ranges can include those containing the desired signal and those selected so that background contributions can be subtracted from the desired signal. Embodiments of the system are particularly useful for monitoring metal concentrations in combustion effluent.
Moving object detection via low-rank total variation regularization
NASA Astrophysics Data System (ADS)
Wang, Pengcheng; Chen, Qian; Shao, Na
2016-09-01
Moving object detection is a challenging task in video surveillance. The recently proposed Robust Principal Component Analysis (RPCA) can recover outlier patterns from low-rank data under some mild conditions. However, the ℓ1-penalty in RPCA does not work well in moving object detection because the irrepresentable condition is often not satisfied. In this paper, a method based on a total variation (TV) regularization scheme is proposed. In our model, image sequences captured with a static camera are highly correlated and can be described using a low-rank matrix. Meanwhile, the low-rank matrix can absorb background motion, e.g. periodic and random perturbation. The foreground objects in the sequence are usually sparsely distributed and drift continuously, and can be treated as group outliers from the highly correlated background scenes. Instead of the ℓ1-penalty, we exploit the total variation of the foreground. By minimizing the total variation energy, the outliers tend to collapse and finally converge to the exact moving objects. The TV-penalty is superior to the ℓ1-penalty especially when the outliers are in the majority for some pixels, and our method can estimate the outliers explicitly with less bias but higher variance. To solve the problem, a joint optimization function is formulated and can be effectively solved through the inexact Augmented Lagrange Multiplier (ALM) method. We evaluate our method along with several state-of-the-art approaches in MATLAB. Both qualitative and quantitative results demonstrate that our proposed method works effectively on a large range of complex scenarios.
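For context, the sketch below implements the RPCA baseline that the abstract contrasts against (low-rank background plus sparse foreground, solved with a simplified inexact ALM scheme), not the proposed TV-regularized model; the synthetic sequence, fixed iteration count and parameters are illustrative only.

```python
# RPCA baseline: decompose a matrix of vectorized frames into a low-rank
# background L and a sparse foreground S via singular-value and entrywise
# soft-thresholding (simplified inexact ALM, no mu update or stopping test).
import numpy as np

def soft(X, tau):
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lam=None, mu=None, iters=100):
    m, n = D.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))
    mu = mu or 1.25 / np.linalg.norm(D, 2)
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        # Low-rank update: singular value thresholding
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt
        # Sparse (foreground) update: entrywise soft-thresholding
        S = soft(D - L + Y / mu, lam / mu)
        Y = Y + mu * (D - L - S)
    return L, S

# Synthetic "video": a static background plus a small moving bright blob
frames = []
for k in range(30):
    f = np.full((40, 40), 50.0)
    f[10 + k // 3 : 14 + k // 3, 5 + k : 9 + k] += 100.0
    frames.append(f.ravel())
D = np.array(frames).T                     # pixels x frames
L, S = rpca_ialm(D)
print("foreground energy per frame:", np.round(np.abs(S).sum(axis=0)[:5], 1))
```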
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrington, David Bradley; Monayem, A. K. M.; Mazumder, H.
2015-03-05
A three-dimensional finite element method for the numerical simulation of fluid flow in domains containing moving rigid objects or boundaries is developed. The method falls into the general category of Arbitrary Lagrangian Eulerian methods; it is based on a fixed mesh that is locally adapted in the immediate vicinity of the moving interfaces and reverts to its original shape once the moving interfaces go past the elements. The moving interfaces are defined by separate sets of marker points so that the global mesh is independent of interface movement and the possibility of mesh entanglement is eliminated. The result is a fully robust formulation capable of calculating on domains of complex geometry with moving boundaries or devices that can also have a complex geometry, without danger of the mesh becoming unsuitable due to its continuous deformation, thus eliminating the need for repeated re-meshing and interpolation. Moreover, the boundary conditions on the interfaces are imposed exactly. This work is intended to support the internal combustion engine simulator KIVA developed at Los Alamos National Laboratory. The model's capabilities are illustrated through application to incompressible flows in different geometrical settings that show the robustness and flexibility of the technique for performing simulations involving moving boundaries in a three-dimensional domain.
Yamamoto, Chisato; Furuta, Keisuke; Taki, Michihiro; Morisaka, Tadamichi
2014-01-01
Several terrestrial animals and delphinids manipulate objects in a tactile manner, using parts of their bodies such as their mouths or hands. In this paper, we report that bottlenose dolphins (Tursiops truncatus) manipulate objects not by direct bodily contact, but by spontaneously generated water flow. Three of four dolphins at Suma Aqualife Park performed object manipulation with food. The typical sequence of object manipulation consisted of a three-step procedure. First, the dolphins released the object from the sides of their mouths while assuming a head-down posture near the floor. They then manipulated the object around their mouths and caught it. Finally, they ceased their head-down posture and started to swim. When the dolphins moved the object, they used the water current in the pool or moved their heads. These results show that dolphins manipulate objects using movements that do not directly involve contact between a body part and the object. When the dolphins dropped the object on the floor, they lifted it by generating water flow with one of three methods: opening and closing their mouths repeatedly, moving their heads lengthwise, or making circular head motions. This result suggests that bottlenose dolphins spontaneously change their environment to manipulate objects. One reason why aquatic animals like dolphins manipulate objects by changing their environment, whereas terrestrial animals do not, may be that the viscosity of the aquatic environment is much higher than that of terrestrial environments. This is the first report of any non-human mammal engaging in object manipulation using several methods of changing its environment. PMID:25250625
NASA Astrophysics Data System (ADS)
Black, Christopher; McMichael, Ian; Riggs, Lloyd
2005-06-01
Electromagnetic induction (EMI) sensors and magnetometers have successfully detected surface laid, buried, and visually obscured metallic objects. Potential military activities could require detection of these objects at some distance from a moving vehicle in the presence of metallic clutter. Results show that existing EMI sensors have limited range capabilities and suffer from false alarms due to clutter. This paper presents results of an investigation of an EMI sensor designed for detecting large metallic objects on a moving platform in a high clutter environment. The sensor was developed by the U.S. Army RDECOM CERDEC NVESD in conjunction with the Johns Hopkins University Applied Physics Laboratory.
Perceiving environmental structure from optical motion
NASA Technical Reports Server (NTRS)
Lappin, Joseph S.
1991-01-01
Generally speaking, one of the most important sources of optical information about environmental structure is known to be the deforming optical patterns produced by the movements of the observer (pilot) or environmental objects. As an observer moves through a rigid environment, the projected optical patterns of environmental objects are systematically transformed according to their orientations and positions in 3D space relative to those of the observer. The detailed characteristics of these deforming optical patterns carry information about the 3D structure of the objects and about their locations and orientations relative to those of the observer. The specific geometrical properties of moving images that may constitute visually detected information about the shapes and locations of environmental objects is examined.
A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field
Gao, Xiang; Yan, Shenggang; Li, Bin
2017-01-01
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the problem of localizing moving objects with alternating magnetic fields and localization with a static magnetic field has rarely been studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing moving objects with an alternating magnetic field is transformed into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
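A simplified sketch of the static inverse problem that the alternating-field measurements are reduced to: fitting a dipole position and moment to three-component field data with a Levenberg-Marquardt solver (SciPy's least_squares with method="lm"). The sensor geometry, noise level and single-dipole model are assumptions of the sketch, not the paper's measurement setup.

```python
# Fit the position and moment of a magnetic dipole to three-component field
# measurements at known sensor locations using Levenberg-Marquardt.
import numpy as np
from scipy.optimize import least_squares

MU0_4PI = 1e-7  # mu0 / (4*pi), SI units

def dipole_field(pos, moment, sensors):
    """Field of a point dipole at `pos` with moment `moment`, at each sensor."""
    r = sensors - pos                       # (N, 3)
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    return MU0_4PI * (3 * rhat * (rhat @ moment)[:, None] - moment) / d ** 3

sensors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0.3]], float)
true_pos = np.array([0.6, 0.2, 1.5])
true_m = np.array([2.0, -1.0, 3.0])
B_meas = dipole_field(true_pos, true_m, sensors)
B_meas = B_meas + 1e-10 * np.random.randn(*B_meas.shape)   # simulated sensor noise

def residuals(p):
    return (dipole_field(p[:3], p[3:], sensors) - B_meas).ravel()

fit = least_squares(residuals, x0=np.array([0, 0, 1, 1, 1, 1], float), method="lm")
print("estimated position:", np.round(fit.x[:3], 3))
```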
Moving shadows contribute to the corridor illusion in a chimpanzee (Pan troglodytes).
Imura, Tomoko; Tomonaga, Masaki
2009-08-01
Previous studies have reported that backgrounds depicting linear perspective and texture gradients influence relative size discrimination in nonhuman animals (known as the "corridor illusion"), but research has not yet identified the other kinds of depth cues contributing to the corridor illusion. This study examined the effects of linear perspective and shadows on the responses of a chimpanzee (Pan troglodytes) to the corridor illusion. The performance of the chimpanzee was worse when a smaller object was presented at the farther position on a background reflecting a linear perspective, implying that the corridor illusion was replicated in the chimpanzee (Imura, Tomonaga, & Yagi, 2008). The extent of the illusion changed as a function of the position of the shadows cast by the objects only when the shadows were moving in synchrony with the objects. These findings suggest that moving shadows and linear perspective contributed to the corridor illusion in a chimpanzee. Copyright 2009 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Lopez, Alejandro; Noe, Miquel; Fernandez, Gabriel
2004-10-01
The GMF4iTV project (Generic Media Framework for Interactive Television) is an IST European project that consists of an end-to-end broadcasting platform providing interactivity on heterogeneous multimedia devices such as set-top boxes and PCs according to the Multimedia Home Platform (MHP) standard from DVB. This platform allows content providers to create enhanced audiovisual content with a degree of interactivity at the level of moving objects or shot changes within a video. The end user is then able to interact with moving objects from the video or with individual shots, allowing the enjoyment of additional content associated with them (MHP applications, HTML pages, JPEG, MPEG4 files...). This paper focuses on the issues related to metadata and content transmission, synchronization, signaling and bitrate allocation in the GMF4iTV project.
Evaluation of a depth sensor for weights estimation of growing and finishing pigs
USDA-ARS?s Scientific Manuscript database
A method of continuously monitoring animal weight would aid producers by ensuring all pigs are gaining weight and would increase the precision of marketing pigs. Electronically monitoring weight without moving the pigs to the scale would eliminate a source of stress. Therefore, the development of me...
Advanced Research into Imaging of Moving Targets
2009-12-01
Eye tracking a self-moved target with complex hand-target dynamics
Landelle, Caroline; Montagnini, Anna; Madelain, Laurent
2016-01-01
Previous work has shown that the ability to track with the eye a moving target is substantially improved when the target is self-moved by the subject's hand compared with when being externally moved. Here, we explored a situation in which the mapping between hand movement and target motion was perturbed by simulating an elastic relationship between the hand and target. Our objective was to determine whether the predictive mechanisms driving eye-hand coordination could be updated to accommodate this complex hand-target dynamics. To fully appreciate the behavioral effects of this perturbation, we compared eye tracking performance when self-moving a target with a rigid mapping (simple) and a spring mapping as well as when the subject tracked target trajectories that he/she had previously generated when using the rigid or spring mapping. Concerning the rigid mapping, our results confirmed that smooth pursuit was more accurate when the target was self-moved than externally moved. In contrast, with the spring mapping, eye tracking had initially similar low spatial accuracy (though shorter temporal lag) in the self versus externally moved conditions. However, within ∼5 min of practice, smooth pursuit improved in the self-moved spring condition, up to a level similar to the self-moved rigid condition. Subsequently, when the mapping unexpectedly switched from spring to rigid, the eye initially followed the expected target trajectory and not the real one, thereby suggesting that subjects used an internal representation of the new hand-target dynamics. Overall, these results emphasize the stunning adaptability of smooth pursuit when self-maneuvering objects with complex dynamics. PMID:27466129
Small Arrays for Seismic Intruder Detections: A Simulation Based Experiment
NASA Astrophysics Data System (ADS)
Pitarka, A.
2014-12-01
Seismic sensors such as geophones and fiber-optic sensors have been increasingly recognized as promising technologies for intelligence surveillance, including intruder detection and perimeter defense systems. Geophone arrays have the capability to provide cost-effective intruder detection in protecting assets with large perimeters. A seismic intruder detection system uses one or multiple arrays of geophones designed to record seismic signals from footsteps and ground vehicles. Using a series of real-time signal processing algorithms, the system detects, classifies and monitors the intruder's movement. We have carried out numerical experiments to demonstrate the capability of a seismic array to detect moving targets that generate seismic signals. The seismic source is modeled as a vertical force acting on the ground that generates continuous impulsive seismic signals with different predominant frequencies. Frequency-wavenumber analysis of the synthetic array data was used to demonstrate the array's capability to accurately determine the intruder's movement direction. The performance of the array was also analyzed for detecting two or more objects moving at the same time. One of the drawbacks of using a single-array system is its inefficiency at detecting seismic signals deflected by large underground objects. We will show simulation results of the effect of an underground concrete block on shielding the seismic signal coming from an intruder. Based on simulations, we found that multiple small arrays can greatly improve the system's detection capability in the presence of underground structures. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
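A small sketch of the kind of slowness-domain (frequency-wavenumber style) array processing mentioned above: delay-and-sum beamforming over candidate back-azimuths to estimate the arrival direction of a footstep-like signal. The array geometry, wave speed and signal are synthetic and not taken from the study.

```python
# Plane-wave delay-and-sum scan over back-azimuth for a small geophone array.
import numpy as np

fs = 500.0                                   # Hz
t = np.arange(0, 2.0, 1 / fs)
geophones = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], float)  # m

def impulses(tt):
    """Three footstep-like Gaussian impulses."""
    sig = np.zeros_like(tt)
    for t0 in (0.4, 0.9, 1.4):
        sig += np.exp(-((tt - t0) ** 2) / (2 * 0.01 ** 2))
    return sig

# Synthetic plane wave arriving from 60 degrees at an apparent speed of 300 m/s
az_true, c = np.deg2rad(60.0), 300.0
s_true = np.array([np.sin(az_true), np.cos(az_true)]) / c   # slowness vector, s/m
records = np.array([impulses(t - geophones[i] @ s_true) for i in range(len(geophones))])

# Scan back-azimuths, align and sum the traces, pick the azimuth of maximum beam power
azimuths = np.deg2rad(np.arange(0, 360, 2))
powers = []
for az in azimuths:
    s = np.array([np.sin(az), np.cos(az)]) / c
    delays = geophones @ s
    beam = sum(np.interp(t, t - delays[i], records[i]) for i in range(len(geophones)))
    powers.append(np.sum(beam ** 2))
print("estimated back-azimuth:", np.rad2deg(azimuths[int(np.argmax(powers))]), "deg")
```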
Ultrafast dark-field surface inspection with hybrid-dispersion laser scanning
NASA Astrophysics Data System (ADS)
Yazaki, Akio; Kim, Chanju; Chan, Jacky; Mahjoubfar, Ata; Goda, Keisuke; Watanabe, Masahiro; Jalali, Bahram
2014-06-01
High-speed surface inspection plays an important role in industrial manufacturing, safety monitoring, and quality control. It is desirable to go beyond the speed limitations of current technologies to reduce manufacturing costs and open a new window onto a class of applications that require high-throughput sensing. Here, we report a high-speed dark-field surface inspector for detection of micrometer-sized surface defects on objects travelling at record speeds of up to a few kilometers per second. This method is based on a modified time-stretch microscope that illuminates temporally and spatially dispersed laser pulses on the surface of a fast-moving object and detects light scattered from defects on the surface with a sensitive photodetector in a dark-field configuration. The inspector's ability to perform ultrafast dark-field surface inspection enables real-time identification of difficult-to-detect features on weakly reflecting surfaces and hence renders the method much more practical than the previously demonstrated bright-field configuration. Consequently, our inspector provides nearly 1000 times higher scanning speed than conventional inspectors. To show our method's broad utility, we demonstrate real-time inspection of the surfaces of various objects (a non-reflective black film, a transparent flexible film, and a reflective hard disk) for detection of 10 μm or smaller defects on a target moving at 20 m/s, within a scan width of 25 mm at a scan rate of 90.9 MHz. Our method holds promise for improving the cost and performance of organic light-emitting diode displays for next-generation smart phones, lithium-ion batteries for green electronics, and high-efficiency solar cells.
Vertex Movement for Mission Status Graphics: A Polar-Star Display
NASA Technical Reports Server (NTRS)
Trujillo, Anna
2002-01-01
Humans are traditionally bad monitors, especially over long periods of time on reliable systems, and they are being called upon to do this more and more as systems become further automated. Because of this, there is a need to find a way to display the monitoring information to the human operator in such a way that he can notice pertinent deviations in a timely manner. One possible solution is to use polar-star displays that will show deviations from normal in a more salient manner. A polar-star display uses a polygon's vertices to report values. An important question arises, though, of how the vertices should move. This experiment investigated two particular issues of how the vertices should move: (1) whether the movement of the vertices should be continuous or discrete and (2) whether the parameters that made up each vertex should always move in one direction regardless of parameter sign or move in both directions indicating parameter sign. The results indicate that relative movement direction is best. Subjects performed better with this movement type and they subjectively preferred it to the absolute movement direction. As for movement type, no strong preferences were shown.
A Middleware with Comprehensive Quality of Context Support for the Internet of Things Applications
Gomes, Berto de Tácio Pereira; Muniz, Luiz Carlos Melo; da Silva E Silva, Francisco José; dos Santos, Davi Viana; Lopes, Rafael Fernandes; Coutinho, Luciano Reis; Carvalho, Felipe Oliveira; Endler, Markus
2017-01-01
Context aware systems are able to adapt their behavior according to the environment in which the user is. They can be integrated into an Internet of Things (IoT) infrastructure, allowing a better perception of the user’s physical environment by collecting context data from sensors embedded in devices known as smart objects. An IoT extension called the Internet of Mobile Things (IoMT) suggests new scenarios in which smart objects and IoT gateways can move autonomously or be moved easily. In a comprehensive view, Quality of Context (QoC) is a term that can express quality requirements of context aware applications. These requirements can be those related to the quality of information provided by the sensors (e.g., accuracy, resolution, age, validity time) or those referring to the quality of the data distribution service (e.g, reliability, delay, delivery time). Some functionalities of context aware applications and/or decision-making processes of these applications and their users depend on the level of quality of context available, which tend to vary over time for various reasons. Reviewing the literature, it is possible to verify that the quality of context support provided by IoT-oriented middleware systems still has limitations in relation to at least four relevant aspects: (i) quality of context provisioning; (ii) quality of context monitoring; (iii) support for heterogeneous device and technology management; (iv) support for reliable data delivery in mobility scenarios. This paper presents two main contributions: (i) a state-of-the-art survey specifically aimed at analyzing the middleware with quality of context support and; (ii) a new middleware with comprehensive quality of context support for Internet of Things Applications. The proposed middleware was evaluated and the results are presented and discussed in this article, which also shows a case study involving the development of a mobile remote patient monitoring application that was developed using the proposed middleware. This case study highlights how middleware components were used to meet the quality of context requirements of the application. In addition, the proposed middleware was compared to other solutions in the literature. PMID:29292791
Distributed proximity sensor system having embedded light emitters and detectors
NASA Technical Reports Server (NTRS)
Lee, Sukhan (Inventor)
1990-01-01
A distributed proximity sensor system is provided with multiple photosensitive devices and light emitters embedded on the surface of a robot hand or other moving member in a geometric pattern. By distributing sensors and emitters capable of detecting distances and angles to points on the surface of an object from known points in the geometric pattern, information is obtained for achieving noncontacting shape and distance perception, i.e., for automatic determination of the object's shape, direction and distance, as well as the orientation of the object relative to the robot hand or other moving member.
Magnetic levitation system for moving objects
Post, Richard F.
1998-01-01
Repelling magnetic forces are produced by the interaction of a flux-concentrated magnetic field (produced by permanent magnets or electromagnets) with an inductively loaded closed electric circuit. When one such element moves with respect to the other, a current is induced in the circuit. This current then interacts back on the field to produce a repelling force. These repelling magnetic forces are applied to magnetically levitate a moving object such as a train car. The power required to levitate a train of such cars is drawn from the motional energy of the train itself, and typically represents only a percent or two of the several megawatts of power required to overcome aerodynamic drag at high speeds.
ERIC Educational Resources Information Center
Chihak, Benjamin J.; Plumert, Jodie M.; Ziemer, Christine J.; Babu, Sabarish; Grechkin, Timofey; Cremer, James F.; Kearney, Joseph K.
2010-01-01
Two experiments examined how 10- and 12-year-old children and adults intercept moving gaps while bicycling in an immersive virtual environment. Participants rode an actual bicycle along a virtual roadway. At 12 test intersections, participants attempted to pass through a gap between 2 moving, car-sized blocks without stopping. The blocks were…
Rhetorical Moves in Problem Statement Section of Iranian EFL Postgraduate Students' Theses
ERIC Educational Resources Information Center
Nimehchisalem, Vahid; Tarvirdizadeh, Zahra; Paidary, Sara Sayed; Binti Mat Hussin, Nur Izyan Syamimi
2016-01-01
The Problem Statement (PS) section of a thesis, usually a subsection of the first chapter, is supposed to justify the objectives of the study. Postgraduate students are often ignorant of the rhetorical moves that they are expected to make in their PS. This descriptive study aimed to explore the rhetorical moves of the PS in Iranian master's (MA)…
Sheridan, Heather; Reingold, Eyal M
2013-01-01
In a wide range of problem-solving settings, the presence of a familiar solution can block the discovery of better solutions (i.e., the Einstellung effect). To investigate this effect, we monitored the eye movements of expert and novice chess players while they solved chess problems that contained a familiar move (i.e., the Einstellung move), as well as an optimal move that was located in a different region of the board. When the Einstellung move was an advantageous (but suboptimal) move, both the expert and novice chess players who chose the Einstellung move continued to look at this move throughout the trial, whereas the subset of expert players who chose the optimal move were able to gradually disengage their attention from the Einstellung move. However, when the Einstellung move was a blunder, all of the experts and the majority of the novices were able to avoid selecting the Einstellung move, and both the experts and novices gradually disengaged their attention from the Einstellung move. These findings shed light on the boundary conditions of the Einstellung effect, and provide convergent evidence for the conclusion of Bilalić, McLeod, and Gobet (2008) that the Einstellung effect operates by biasing attention towards problem features that are associated with the familiar solution rather than the optimal solution.
SU-G-BRA-14: Dose in a Rigidly Moving Phantom with Jaw and MLC Compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chao, E; Lucas, D
Purpose: To validate dose calculation for a rigidly moving object with jaw motion and MLC shifts to compensate for the motion in a TomoTherapy™ treatment delivery. Methods: An off-line version of the TomoTherapy dose calculator was extended to perform dose calculations for rigidly moving objects. A variety of motion traces were added to treatment delivery plans, along with corresponding jaw compensation and MLC shift compensation profiles. Jaw compensation profiles were calculated by shifting the jaws such that the center of the treatment beam moved by an amount equal to the motion in the longitudinal direction. Similarly, MLC compensation profiles were calculated by shifting the MLC leaves by an amount that most closely matched the motion in the transverse direction. The same jaw and MLC compensation profiles were used during simulated treatment deliveries on a TomoTherapy system, and film measurements were obtained in a rigidly moving phantom. Results: The off-line TomoTherapy dose calculator accurately predicted dose profiles for a rigidly moving phantom along with jaw motion and MLC shifts to compensate for the motion. Calculations matched film measurements to within 2%/1 mm. Jaw and MLC compensation substantially reduced the discrepancy between the delivered dose distribution and the calculated dose with no motion. For axial motion, the compensated dose matched the no-motion dose within 2%/1 mm. For transverse motion, the dose matched within 2%/3 mm (approximately half the width of an MLC leaf). Conclusion: The off-line TomoTherapy dose calculator accurately computes dose delivered to a rigidly moving object, and accurately models the impact of moving the jaws and shifting the MLC leaf patterns to compensate for the motion. Jaw tracking and MLC leaf shifting can effectively compensate for the dosimetric impact of motion during a TomoTherapy treatment delivery.
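A much-simplified sketch of the compensation idea described in this abstract: the longitudinal component of a rigid motion trace becomes a jaw offset, and the transverse component is converted into the nearest whole-leaf MLC shift. The leaf width, the motion-trace representation, and the function name are assumptions for illustration, not TomoTherapy's actual plan data model.

```python
def compensation_profiles(motion_trace, leaf_width_cm=0.625):
    """Compute per-projection jaw offsets and MLC leaf shifts for a rigidly moving target.

    motion_trace: list of (longitudinal_cm, transverse_cm) displacements, one per projection.
    Returns (jaw_offsets_cm, leaf_shifts), where each leaf shift is the integer
    number of leaves that most closely matches the transverse motion.
    """
    jaw_offsets, leaf_shifts = [], []
    for longitudinal, transverse in motion_trace:
        jaw_offsets.append(longitudinal)                       # beam center follows the target
        leaf_shifts.append(round(transverse / leaf_width_cm))  # nearest whole-leaf shift
    return jaw_offsets, leaf_shifts

print(compensation_profiles([(0.3, 0.4), (-0.2, 1.0)]))  # ([0.3, -0.2], [1, 2])
```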
Activation of the Human MT Complex by Motion in Depth Induced by a Moving Cast Shadow
Katsuyama, Narumi; Usui, Nobuo; Taira, Masato
2016-01-01
A moving cast shadow is a powerful monocular depth cue for motion perception in depth. For example, when a cast shadow moves away from or toward an object in a two-dimensional plane, the object appears to move toward or away from the observer in depth, respectively, whereas the size and position of the object are constant. Although the cortical mechanisms underlying motion perception in depth by cast shadow are unknown, the human MT complex (hMT+) is likely involved in the process, as it is sensitive to motion in depth represented by binocular depth cues. In the present study, we examined this possibility by using a functional magnetic resonance imaging (fMRI) technique. First, we identified the cortical regions sensitive to the motion of a square in depth represented via binocular disparity. Consistent with previous studies, we observed significant activation in the bilateral hMT+, and defined functional regions of interest (ROIs) there. We then investigated the activity of the ROIs during observation of the following stimuli: 1) a central square that appeared to move back and forth via a moving cast shadow (mCS); 2) a segmented and scrambled cast shadow presented beside the square (sCS); and 3) no cast shadow (nCS). Participants perceived motion of the square in depth in the mCS condition only. The activity of the hMT+ was significantly higher in the mCS compared with the sCS and nCS conditions. Moreover, the hMT+ was activated equally in both hemispheres in the mCS condition, despite presentation of the cast shadow in the bottom-right quadrant of the stimulus. Perception of the square moving in depth across visual hemifields may be reflected in the bilateral activation of the hMT+. We concluded that the hMT+ is involved in motion perception in depth induced by moving cast shadow and by binocular disparity. PMID:27597999
[Management by objectives: an experience by transfusion and immunology service in Rabat].
Essakalli, M; Atouf, O; Ouadghiri, S; Bouayad, A; Drissi, A; Sbain, K; Sakri, L; Benseffaj, N; Brick, C
2013-09-01
The management by objectives method has become widely used in health management. In this context, the blood transfusion and haemovigilance service was chosen for a pilot study by the head department of the Ibn Sina Hospital in Rabat. The study was conducted from 2009 to 2011 in four steps. The first consisted of preparing human resources (information and training), identifying the strengths and weaknesses of the service, and identifying and classifying the service's users. The second step was the elaboration of the terms of the contract, which helped to determine two main strategic objectives: to strengthen the activities of the service and to move towards the "status of reference." Each strategic objective was broken down into operational objectives, then into actions and the means required to implement each action. The third step was the implementation of each action (service, head department) so as to comply with the terms of the contract and meet the deadlines. The last step, carried out by assessment committees, was the evaluation process. This evaluation was performed using monitoring indicators and showed that management by objectives enabled the service to reach the "clinical governance level," to optimize its human and financial resources, and to reach the level of "national laboratory of reference in histocompatibility." The scope of this paper is to describe the four steps of this pilot study and to explain the usefulness of the management by objectives method in health management.
Multidimensional Circadian Monitoring by Wearable Biosensors in Parkinson’s Disease
Madrid-Navarro, Carlos J.; Escamilla-Sevilla, Francisco; Mínguez-Castellanos, Adolfo; Campos, Manuel; Ruiz-Abellán, Fernando; Madrid, Juan A.; Rol, M. A.
2018-01-01
Parkinson’s disease (PD) is associated with several non-motor symptoms that may precede the diagnosis and constitute a major source of frailty in this population. The digital era in health care has opened up new prospects to move forward from the qualitative and subjective scoring for PD with the use of new wearable biosensors that enable frequent quantitative, reliable, repeatable, and multidimensional measurements to be made with minimal discomfort and inconvenience for patients. A cross-sectional study was conducted to test a wrist-worn device combined with machine-learning processing to detect circadian rhythms of sleep, motor, and autonomic disruption, which can be suitable for the objective and non-invasive evaluation of PD patients. Wrist skin temperature, motor acceleration, time in movement, hand position, light exposure, and sleep rhythms were continuously measured in 12 PD patients and 12 age-matched healthy controls for seven consecutive days using an ambulatory circadian monitoring device (ACM). Our study demonstrates that a multichannel ACM device collects reliable and complementary information from motor (acceleration and time in movement) and common non-motor (sleep and skin temperature rhythms) features frequently disrupted in PD. Acceleration during the daytime (as indicative of motor impairment), time in movement during sleep (representative of fragmented sleep) and their ratio (A/T) are the best indexes to objectively characterize the most common symptoms of PD, allowing for a reliable and easy scoring method to evaluate patients. The chronodisruption score, measured by the integrative algorithm known as the circadian function index, is directly linked to a low A/T score. Our work attempts to implement innovative technologies based on wearable, multisensor, objective, and easy-to-use devices, to quantify PD circadian rhythms in large populations over extended periods of time, while controlling at the same time exposure to exogenous circadian synchronizers. PMID:29632508
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: first, the ego-motion of the camera, and second, object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are adversely blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure an improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
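A toy sketch of the general idea, using NumPy and invented array shapes: particles are diffused by a random-walk motion model, weighted by the normalized intensity of an ego-motion-compensated difference image, and resampled. It deliberately omits the rival-penalization competition scheme and everything else specific to the paper's filter design.

```python
import numpy as np

def particle_filter_step(particles, diff_image, motion_std=2.0, rng=None):
    """One predict/weight/resample cycle of a simple image-based particle filter.

    particles : (N, 2) array of (row, col) hypotheses for a moving object
    diff_image: ego-motion-compensated difference image with values in [0, 1],
                interpreted as the probability of independent motion
    """
    rng = np.random.default_rng() if rng is None else rng
    # Predict: diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles = np.clip(particles, 0, np.array(diff_image.shape) - 1)
    # Weight: likelihood taken from the difference image at each particle.
    rows, cols = particles.astype(int).T
    weights = diff_image[rows, cols] + 1e-9
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

diff = np.zeros((100, 100)); diff[40:60, 40:60] = 1.0   # one "moving object" region
p = np.random.default_rng(0).uniform(0, 99, (500, 2))
for _ in range(10):
    p = particle_filter_step(p, diff)
print(p.mean(axis=0))   # particles concentrate near the moving region
```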
Design of a Covert RFID Tag Network for Target Discovery and Target Information Routing
Pan, Qihe; Narayanan, Ram M.
2011-01-01
Radio frequency identification (RFID) tags are small electronic devices working in the radio frequency range. They use wireless radio communications to automatically identify objects or people without the need for line-of-sight or contact, and are widely used in inventory tracking, object location, and environmental monitoring. This paper presents a design of a covert RFID tag network for target discovery and target information routing. In the design, a static or very slowly moving target in the field of RFID tags transmits a distinct pseudo-noise signal, and the RFID tags in the network collect the target information and route it to the command center. A map of each RFID tag’s location is saved at the command center, which can determine where an RFID tag is located based on its ID. We propose a target information collection method with target association and clustering, and we also propose an information routing algorithm within the RFID tag network. The design and operation of the proposed algorithms are illustrated through examples. Simulation results demonstrate the effectiveness of the design. PMID:22163693
James, S. R.; Knox, H. A.; Abbott, R. E.; ...
2017-04-13
Cross correlations of seismic noise can potentially record large changes in subsurface velocity due to permafrost dynamics and be valuable for long-term Arctic monitoring. We applied seismic interferometry, using moving window cross-spectral analysis (MWCS), to 2 years of ambient noise data recorded in central Alaska to investigate whether seismic noise could be used to quantify relative velocity changes due to seasonal active-layer dynamics. The large velocity changes (>75%) between frozen and thawed soil caused prevalent cycle-skipping, which made the method unusable in this setting. We developed an improved MWCS procedure which uses a moving reference to measure daily velocity variations that are then accumulated to recover the full seasonal change. This approach reduced cycle-skipping and recovered a seasonal trend that corresponded well with the timing of active-layer freeze and thaw. Lastly, this improvement opens the possibility of measuring large velocity changes by using MWCS and permafrost monitoring by using ambient noise.
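A sketch of the moving-reference bookkeeping described above: day-to-day relative velocity changes, however they are measured, are chained multiplicatively to recover the cumulative change relative to the first day. The daily dv/v values below are invented and the function is illustrative, not the authors' MWCS code.

```python
import numpy as np

def accumulate_dvv(daily_dvv):
    """Chain day-to-day relative velocity changes into a cumulative series.

    daily_dvv[i] is dv/v between day i and day i+1 (e.g., from MWCS measured
    against a moving reference). Returns the fractional change relative to day 0.
    """
    # (1 + dv/v) factors multiply; subtract 1 to express the result as a fraction again.
    return np.cumprod(1.0 + np.asarray(daily_dvv)) - 1.0

# Small daily changes that would cycle-skip against a fixed reference
# can still accumulate into a large seasonal change.
daily = np.full(90, 0.01)          # +1% per day for 90 days
print(accumulate_dvv(daily)[-1])   # ~1.45, i.e. roughly a 145% cumulative change
```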
Increase in Efficiency of Use of Pedestrian Radiation Portal Monitors
NASA Astrophysics Data System (ADS)
Solovev, D. B.; Merkusheva, A. E.
2017-11-01
Most international airports in the world use radiation portal monitors (RPM) to organize primary radiation control. During operation, the operators of pedestrian radiation portal monitors (in the Russian Federation, a special subdivision of customs officials) face certain problems related to locating the ionizing radiation source that triggers the alarm signal of a radiation monitor. Radiation portal monitors at standard (factory) settings must detect the illegal movement of radioisotopes carried by persons passing through a controlled zone and emitting steady radiation in the gamma or neutron channel. The problem is that the number of persons who have recently undergone treatment or medical diagnostics using radiopharmaceuticals has increased considerably, and such persons themselves represent an ionizing radiation source. The operator of the radiation portal monitor must determine very quickly whether a person is a violator (illegally carrying unauthorized radioisotopes) or simply a clinic patient who has undergone treatment or diagnostics with radiopharmaceuticals. The article identifies the radioisotopes most often used for medical purposes and proposes new software, developed by the authors, that allows the operator of the radiation portal monitor to locate a person carrying such an ionizing radiation source whose activity is similar to the radiation from radiopharmaceuticals.
Gerber, Brian D.; Kendall, William L.
2017-01-01
Monitoring animal populations can be difficult. Limited resources often force monitoring programs to rely on unadjusted or smoothed counts as an index of abundance. Smoothing counts is commonly done using a moving-average estimator to dampen sampling variation. These indices are commonly used to inform management decisions, although their reliability is often unknown. We outline a process to evaluate the biological plausibility of annual changes in population counts and indices from a typical monitoring scenario and compare results with a hierarchical Bayesian time series (HBTS) model. We evaluated spring and fall counts, fall indices, and model-based predictions for the Rocky Mountain population (RMP) of Sandhill Cranes (Antigone canadensis) by integrating juvenile recruitment, harvest, and survival into a stochastic stage-based population model. We used simulation to evaluate population indices from the HBTS model and the commonly used 3-yr moving average estimator. We found counts of the RMP to exhibit biologically unrealistic annual change, while the fall population index was largely biologically realistic. HBTS model predictions suggested that the RMP changed little over 31 yr of monitoring, but the pattern depended on assumptions about the observational process. The HBTS model fall population predictions were biologically plausible if observed crane harvest mortality was compensatory up to natural mortality, as empirical evidence suggests. Simulations indicated that the predicted mean of the HBTS model was generally a more reliable estimate of the true population than population indices derived using a moving 3-yr average estimator. Practitioners could gain considerable advantages from modeling population counts using a hierarchical Bayesian autoregressive approach. Advantages would include: (1) obtaining measures of uncertainty; (2) incorporating direct knowledge of the observational and population processes; (3) accommodating missing years of data; and (4) forecasting population size.
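For reference, a minimal version of the 3-yr moving-average index that the abstract compares against; the annual counts are made up, and this is not the authors' hierarchical Bayesian time series model.

```python
import numpy as np

def moving_average_index(counts, window=3):
    """Smooth annual counts with a centered moving average, a common
    population index when raw counts are too noisy to use directly."""
    counts = np.asarray(counts, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(counts, kernel, mode="valid")

annual_counts = [18000, 21500, 17200, 20400, 19800, 23100]
print(moving_average_index(annual_counts))   # one index value per 3-yr window
```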
Portegijs, Erja; Rantakokko, Merja; Viljanen, Anne; Rantanen, Taina; Iwarsson, Susanne
We studied whether entrance-related environmental barriers, perceived and objectively recorded, were associated with moving out-of-home daily in older people with and without limitations in lower extremity performance. Cross-sectional analyses were conducted of the "Life-space mobility in old age" cohort, including 848 community-dwelling 75- to 90-year-olds in central Finland. Participants reported their frequency of moving out-of-home (daily vs. 0-6 times/week) and perceived entrance-related environmental barriers (yes/no). Lower extremity performance was assessed (Short Physical Performance Battery) and categorized as poorer (score 0-9) or good (score 10-12). Environmental barriers at entrances and in exterior surroundings were objectively registered (Housing Enabler screening tool) and divided into tertiles. Logistic regression analyses were adjusted for age, sex, number of chronic diseases, cognitive function, month of assessment, type of neighborhood, and years lived in the current home. A median of 6 environmental barriers at home entrances and 5 in the exterior surroundings were objectively recorded, and 20% of the participants perceived entrance-related barriers. The odds for moving out-of-home less than daily increased when participants perceived entrance-related barrier(s) or when they lived in homes with higher numbers of objectively recorded environmental barriers at entrances. Participants with limitations in lower extremity performance were more susceptible to these environmental barriers. Objectively recorded environmental barriers in the exterior surroundings did not compromise out-of-home mobility. Entrance-related environmental barriers may hinder community-dwelling older people from moving out-of-home daily, especially when their functional capacity is compromised. Potentially, reducing entrance-related barriers may help to prevent confinement to the home.
Binding Objects to Locations: The Relationship between Object Files and Visual Working Memory
ERIC Educational Resources Information Center
Hollingworth, Andrew; Rasmussen, Ian P.
2010-01-01
The relationship between object files and visual working memory (VWM) was investigated in a new paradigm combining features of traditional VWM experiments (color change detection) and object-file experiments (memory for the properties of moving objects). Object-file theory was found to account for a key component of object-position binding in VWM:…
Method for accurately positioning a device at a desired area of interest
Jones, Gary D.; Houston, Jack E.; Gillen, Kenneth T.
2000-01-01
A method for positioning a first device utilizing a surface having a viewing translation stage, the surface being movable between a first position where the viewing stage is in operational alignment with a first device and a second position where the viewing stage is in operational alignment with a second device. The movable surface is placed in the first position and an image is produced with the first device of an identifiable characteristic of a calibration object on the viewing stage. The moveable surface is then placed in the second position and only the second device is moved until an image of the identifiable characteristic in the second device matches the image from the first device. The calibration object is then replaced on the stage of the surface with a test object, and the viewing translation stage is adjusted until the second device images the area of interest. The surface is then moved to the first position where the test object is scanned with the first device to image the area of interest. An alternative embodiment where the devices move is also disclosed.
Simultaneous 3D-vibration measurement using a single laser beam device
NASA Astrophysics Data System (ADS)
Brecher, Christian; Guralnik, Alexander; Baümler, Stephan
2012-06-01
Today's commercial solutions for vibration measurement and modal analysis are 3D-scanning laser Doppler vibrometers, mainly used for open surfaces in the automotive and aerospace industries, and classic three-axial accelerometers, used in civil engineering, for most industrial applications in manufacturing environments, and particularly for partially closed structures. This paper presents a novel measurement approach using a single laser beam device and optical reflectors to simultaneously perform 3D-dynamic measurement as well as geometry measurement of the investigated object. We show the application of this so-called laser tracker for modal testing of structures on a mechanical manufacturing shop floor. A holistic measurement method is developed containing manual reflector placement, semi-automated geometric modeling of investigated objects, and fully automated vibration measurement up to 1000 Hz and down to amplitudes of a few microns. Additionally, a quickly set-up dynamic measurement of moving objects using a tracking technique is presented that uses only the device's own functionalities and requires neither a predefined moving path of the target nor electronic synchronization with the moving object.
Chen, Chunyan; Wang, Jie; Loch, Cheryl L; Ahn, Dongchan; Chen, Zhan
2004-02-04
In this paper, the feasibility of monitoring molecular structures at a moving polymer/liquid interface by sum frequency generation (SFG) vibrational spectroscopy has been demonstrated. N-(2-Aminoethyl)-3-aminopropyltrimethoxysilane (AATM, NH2(CH2)2NH(CH2)3Si(OCH3)3) has been brought into contact with a deuterated poly(methyl methacrylate) (d-PMMA) film, and the interfacial silane structure has been monitored using SFG. Upon initial contact, the SFG spectra can be detected, but as time progresses, the spectral intensity changes and finally disappears. Additional experiments indicate that these silane molecules can diffuse into the polymer film and the detected SFG signals are actually from the moving polymer/silane interface. Our results show that the molecular order of the polymer/silane interface exists during the entire diffusion process and is lost when the silane molecules traverse through the thickness of the d-PMMA film. The loss of the SFG signal is due to the formation of a new disordered substrate/silane interface, which contributes no detectable SFG signal. The kinetics of the diffusion of the silane into the polymer have been deduced from the time-dependent SFG signals detected from the AATM molecules as they diffuse through polymer films of different thickness.
Goldstein, J.N.; Woodward, D.F.; Farag, A.M.
1999-01-01
Spawning migration of adult male chinook salmon Oncorhynchus tshawytscha was monitored by radio telemetry to determine their response to the presence of metals contamination in the South Fork of the Coeur d'Alene River, Idaho. The North Fork of the Coeur d'Alene River is relatively free of metals contamination and was used as a control. In all, 45 chinook salmon were transported from their natal stream, Wolf Lodge Creek, tagged with radio transmitters, and released in the Coeur d'Alene River 2 km downstream of the confluence of the South Fork and the North Fork of the Coeur d'Alene River. Fixed telemetry receivers were used to monitor the upstream movement of the tagged chinook salmon through the confluence area for 3 weeks after release. During this period, general water quality and metals concentrations were monitored in the study area. Of the 23 chinook salmon observed to move upstream from the release site and through the confluence area, the majority (16 fish, 70%) moved up the North Fork, and only 7 fish (30%) moved up the South Fork, where greater metals concentrations were observed. Our results agree with laboratory findings and suggest that natural fish populations will avoid tributaries with high metals contamination.
Moving Base Simulation of an ASTOVL Lift-Fan Aircraft
DOT National Transportation Integrated Search
1995-08-01
Using a generalized simulation model, a moving-base simulation of a lift-fan short takeoff/vertical landing fighter aircraft was conducted on the Vertical Motion Simulator at Ames Research Center. Objectives of the experiment were to: (1) assess ...
The embodied dynamics of perceptual causality: a slippery slope?
Amorim, Michel-Ange; Siegler, Isabelle A.; Baurès, Robin; Oliveira, Armando M.
2015-01-01
In Michotte's launching displays, while the launcher (object A) seems to move autonomously, the target (object B) seems to be displaced passively. However, the impression of A actively launching B does not persist beyond a certain distance identified as the “radius of action” of A over B. If the target keeps moving beyond the radius of action, it loses its passivity and seems to move autonomously. Here, we manipulated implied friction by drawing (or not) a surface upon which A and B are traveling, and by varying the inclination of this surface in screen- and earth-centered reference frames. Among 72 participants (n = 52 in Experiment 1; n = 20 in Experiment 2), we show that both physical embodiment of the event (looking straight ahead at a screen displaying the event on a vertical plane vs. looking downwards at the event displayed on a horizontal plane) and contextual information (objects moving along a depicted surface or in isolation) affect interpretation of the event and modulate the radius of action of the launcher. Using classical mechanics equations, we show that representational consistency of friction from radius of action responses emphasizes the embodied nature of frictional force in our cognitive architecture. PMID:25954235
Amodal completion of moving objects by pigeons.
Nagasaka, Yasuo; Wasserman, Edward A
2008-01-01
In a series of four experiments, we explored whether pigeons complete partially occluded moving shapes. Four pigeons were trained to discriminate between a complete moving shape and an incomplete moving shape in a two-alternative forced-choice task. In testing, the birds were presented with a partially occluded moving shape. In experiment 1, none of the pigeons appeared to complete the testing stimulus; instead, they appeared to perceive the testing stimulus as incomplete fragments. However, in experiments 2, 3, and 4, three of the birds appeared to complete the partially occluded moving shapes. These rare positive results suggest that motion may facilitate amodal completion by pigeons, perhaps by enhancing the figure-ground segregation process.
NASA Astrophysics Data System (ADS)
Puckett, Andrew W.
2007-08-01
I have compiled the Slow-Moving Object Catalog of Known minor planets and comets ("the SMOCK") by comparing the predicted positions of known bodies with those of sources detected by the Sloan Digital Sky Survey (SDSS) that lack positional counterparts at other survey epochs. For the ~50% of the SDSS footprint that has been imaged only once, I have used the Astrophysical Research Consortium's 3.5-meter telescope to obtain reference images for confirmation of Solar System membership. The SMOCK search effort includes all known objects with orbital semimajor axes a > 4.7 AU, as well as a comparison sample of inherently bright Main Belt asteroids. In fact, objects of all proper motions are included, resulting in substantial overlap with the SDSS Moving Object Catalog (MOC) and providing an important check on the inclusion criteria of both catalogs. The MOC does not contain any correctly-identified known objects with a > 12 AU, and also excludes a number of detections of Main Belt and Trojan asteroids that happen to be moving slowly as they enter or leave retrograde motion. The SMOCK catalog is a publicly-available product of this investigation. Having created this new database, I demonstrate some of its applications. The broad dispersion of color indices for transneptunian objects (TNOs) and Centaurs is confirmed, and their tight correlation in ( g - r ) vs ( r - i ) is explored. Repeat observations for more than 30 of these objects allow me to reject the collisional resurfacing scenario as the primary explanation for this broad variety of colors. Trojans with large orbital inclinations are found to have systematically redder colors than their low-inclination counterparts, but an excess of reddish low-inclination objects at L5 is identified. Next, I confirm that non-Plutino TNOs are redder with increasing perihelion distance, and that this effect is even more pronounced among the Classical TNOs. Finally, I take advantage of the byproducts of my search technique and attempt to recover objects with poorly-known orbits. I have drastically improved the current and future ephemeris uncertainties of 3 Trojan asteroids, and have increased by 20%-450% the observed arcs of 10 additional bodies.
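A much-simplified sketch of the catalog-building step described above: predicted positions of known objects are matched against survey detections within a small angular tolerance, and unmatched detections can then be flagged for follow-up. The coordinates, object names, tolerance, and small-angle approximation are illustrative assumptions; a real implementation would use proper spherical geometry and the SDSS pipeline outputs.

```python
import math

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Approximate small-angle separation in degrees (adequate for arcsecond-level matching)."""
    dra = (ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    return math.hypot(dra, dec1 - dec2)

def match_known_objects(detections, ephemerides, tol_arcsec=2.0):
    """Pair survey detections with predicted positions of known objects."""
    tol = tol_arcsec / 3600.0
    matches = []
    for det_id, (ra_d, dec_d) in detections.items():
        for obj, (ra_p, dec_p) in ephemerides.items():
            if angular_sep_deg(ra_d, dec_d, ra_p, dec_p) < tol:
                matches.append((det_id, obj))
    return matches

dets = {"src1": (150.00010, 2.00005), "src2": (151.2, 2.3)}
ephem = {"known_obj_1": (150.00000, 2.00000)}
print(match_known_objects(dets, ephem))   # [('src1', 'known_obj_1')]
```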
Opportunities to improve monitoring of temporal trends with FIA panel data
Raymond Czaplewski; Michael Thompson
2009-01-01
The Forest Inventory and Analysis (FIA) Program of the Forest Service, Department of Agriculture, is an annual monitoring system for the entire United States. Each year, an independent "panel" of FIA field plots is measured. To improve accuracy, FIA uses the "Moving Average" or "Temporally Indifferent" method to combine estimates from...
[Cognitive impairments accompanying the burnout syndrome - a review].
Riedrich, Karin; Weiss, Elisabeth M; Dalkner, Nina; Reininghaus, Eva; Papousek, Ilona; Schwerdtfeger, Andreas; Lackner, Helmut K; Reininghaus, Bernd
2017-03-01
The rising prevalence of the burnout syndrome has increasingly moved it into the focus of scientific interest. In addition to emotional exhaustion and depersonalization, reduced personal accomplishment in particular has strong societal and economic effects. In recent years, reduced personal accomplishment has increasingly been linked to cognitive impairment. However, up to now only a few studies have objectively assessed cognitive deficits in burnout patients. This article gives an overview of 16 studies which examined cognitive abilities in burnout patients. The findings are partly contradictory, probably due to methodological differences. Consensus has emerged concerning impairments of executive functions, among them vigilance and memory updating and monitoring. Multifactorial causation may underlie the cognitive impairments. Targeted longitudinal studies are necessary in order to identify the affected cognitive functions and be able to make causal inferences on links between the burnout syndrome and specific cognitive impairments.
Fermi Gamma-Ray Space Telescope - Science Highlights for the First Two Years on Orbit
NASA Technical Reports Server (NTRS)
Moiseev, Alexander
2011-01-01
Fermi science objectives cover probably everything in high energy astrophysics: How do supermassive black holes in Active Galactic Nuclei create powerful jets of material moving at nearly light speed? What are the jets made of? What are the mechanisms that produce Gamma-Ray Burst (GRB) explosions? What is the energy budget? How does the Sun generate high-energy gamma-rays in flares? How do pulsars operate? How many of them are around and how different are they? What are the unidentified gamma-ray sources found by EGRET? What is the origin of the cosmic rays that pervade the Galaxy? What is the nature of dark matter? The Fermi LAT has operated successfully on orbit for more than 2 years and demonstrates excellent performance, which is continuously monitored and calibrated. The LAT has collected more than 100 billion on-orbit triggers.
Dynamic Metasurface Aperture as Smart Around-the-Corner Motion Detector.
Del Hougne, Philipp; F Imani, Mohammadreza; Sleasman, Timothy; Gollub, Jonah N; Fink, Mathias; Lerosey, Geoffroy; Smith, David R
2018-04-25
Detecting and analysing motion is a key feature of Smart Homes and the connected sensor vision they embrace. At present, most motion sensors operate in line-of-sight Doppler shift schemes. Here, we propose an alternative approach suitable for indoor environments, which effectively constitute disordered cavities for radio frequency (RF) waves; we exploit the fundamental sensitivity of modes of such cavities to perturbations, caused here by moving objects. We establish experimentally three key features of our proposed system: (i) ability to capture the temporal variations of motion and discern information such as periodicity ("smart"), (ii) non line-of-sight motion detection, and (iii) single-frequency operation. Moreover, we explain theoretically and demonstrate experimentally that the use of dynamic metasurface apertures can substantially enhance the performance of RF motion detection. Potential applications include accurately detecting human presence and monitoring inhabitants' vital signs.
Pullen, Tanya; Bottorff, Joan L.; Sabiston, Catherine M.; Campbell, Kristin L.; Ellard, Susan L.; Gotay, Carolyn; Fitzpatrick, Kayla; Caperchione, Cristina M.
2018-01-01
Objective: Despite the physical and psychological health benefits associated with physical activity (PA) for breast cancer (BC) survivors, up to 70% of female BC survivors are not meeting minimum recommended PA guidelines. The objective of this study was to evaluate acceptability and satisfaction with Project MOVE, an innovative approach to increase PA among BC survivors through the combination of microgrants and financial incentives. Methods: A mixed-methods design was used. Participants were BC survivors and support individuals with a mean age of 58.5 years. At 6-month follow-up, participants completed a program evaluation questionnaire (n = 72) and participated in focus groups (n = 52) to explore their experience with Project MOVE. Results: Participants reported that they were satisfied with Project MOVE (86.6%) and that the program was appropriate for BC survivors (96.3%). Four main themes emerged from focus groups: (1) acceptability and satisfaction of Project MOVE, detailing the value of the model in developing tailored group-based PA programs; (2) the importance of Project MOVE leaders, highlighting the value of a leader that was organized and a good communicator; (3) breaking down barriers with Project MOVE, describing how the program helped to address common BC-related barriers; and (4) motivation to MOVE, outlining how the microgrants enabled survivors to be active, while the financial incentive motivated them to increase and maintain their PA. Conclusion: The findings provide support for the acceptability of Project MOVE as a strategy for increasing PA among BC survivors. PMID:29409128
Orientation Control Method and System for Object in Motion
NASA Technical Reports Server (NTRS)
Whorton, Mark Stephen (Inventor); Redmon, Jr., John W. (Inventor); Cox, Mark D. (Inventor)
2012-01-01
An object in motion has a force applied thereto at a point of application. By moving the point of application such that the distance between the object's center-of-mass and the point of application is changed, the object's orientation can be changed/adjusted.
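The underlying relation is simply the torque produced by a force applied away from the center of mass; a toy calculation with made-up numbers (not the patented system's control law) is sketched below.

```python
import numpy as np

def torque_about_com(r_application, r_com, force):
    """Torque on a body from a force applied at r_application, taken about the
    center of mass at r_com: tau = (r_application - r_com) x F."""
    return np.cross(np.asarray(r_application) - np.asarray(r_com), np.asarray(force))

com = [0.0, 0.0, 0.0]
thrust = [0.0, 0.0, 10.0]                                 # N, along +z
print(torque_about_com([0.0, 0.0, 0.0], com, thrust))     # force through the CoM: zero torque
print(torque_about_com([0.1, 0.0, 0.0], com, thrust))     # 10 cm offset: [0, -1, 0] N·m, so the body rotates
```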
Saneyoshi, Ayako; Michimata, Chikashi
2009-12-01
Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.
Object permanence and working memory in cats (Felis catus).
Goulet, S; Doré, F Y; Rousseau, R
1994-10-01
Cats (Felis catus) find an object when it is visibly moved behind a succession of screens. However, when the object is moved behind a container and is invisibly transferred from the container to the back of a screen, cats try to find the object at or near the container rather than at the true hiding place. Four experiments were conducted to study search behavior and working memory in visible and invisible displacement tests of object permanence. Experiment 1 compared performance in single and in double visible displacement trials. Experiment 2 analyzed search behavior in invisible displacement tests and in analogs using a transparent container. Experiments 3 and 4 tested predictions made from Experiment 1 and 2 in a new situation of object permanence. Results showed that only the position changes that cats have directly perceived are encoded and activated in working memory, because they are unable to represent or infer invisible movements.
Motor effects from visually induced disorientation in man.
DOT National Transportation Integrated Search
1969-11-01
The problem of disorientation in a moving optical environment was examined. Egocentric disorientation can be experienced by a pilot if the entire visual environment moves relative to his body without a clue of the objective position of the airplane i...
NASA Astrophysics Data System (ADS)
Yu, Guoqiang; Durduran, Turgut; Furuya, D.; Lech, G.; Zhou, Chao; Chance, Britten; Greenberg, J. H.; Yodh, Arjun G.
2003-07-01
Measurement of concentration, oxygenation, and flow characteristics of blood cells can reveal information about tissue metabolism and functional heterogeneity. An improved multifunctional hybrid system has been built on the basis of our previous hybrid instrument that combines two near-infrared diffuse optical techniques to simultaneously monitor the changes of blood flow, total hemoglobin concentration (THC) and blood oxygen saturation (StO2). Diffuse correlation spectroscopy (DCS) monitors blood flow (BF) by measuring the optical phase shifts caused by moving blood cells, while diffuse photon density wave spectroscopy (DPDW) measures tissue absorption and scattering. Higher spatial resolution, higher data acquisition rate and higher dynamic range of the improved system allow us to monitor rapid hemodynamic changes in rat brain and human muscles. We have designed two probes with different source-detector pairs and different separations for the two types of experiments. A unique non-contact probe mounted on the back of a camera, which allows continuous measurements without altering the blood flow, was employed to in vivo monitor the metabolic responses in rat brain during KCl induced cortical spreading depression (CSD). A contact probe was used to measure changes of blood flow and oxygenation in human muscle during and after cuff occlusion or exercise, where the non-contact probe is not appropriate for monitoring the moving target. The experimental results indicate that our multifunctional hybrid system is capable of in vivo and non-invasive monitoring of the hemodynamic changes in different tissues (smaller tissues in rat brain, larger tissues in human muscle) under different conditions (static versus moving). The time series images of flow during CSD obtained by our technique revealed spatial and temporal hemodynamic changes in rat brain. Two to three fold longer recovery times of flow and oxygenation after cuff occlusion or exercise from calf flexors in a patient with peripheral vascular disease (PVD) were found.
Judder-Induced Edge Flicker at Zero Spatial Contrast
NASA Technical Reports Server (NTRS)
Larimer, James; Feng, Christine; Gille, Jennifer; Cheung, Victor
2004-01-01
Judder is a motion artifact that degrades the quality of video imagery. Smooth motion appears jerky and can appear to flicker along the leading and trailing edge of the moving object. In a previous paper, we demonstrated that the strength of the edge flicker signal depended upon the brightness of the scene and the contrast of the moving object relative to the background. Reducing the contrast between foreground and background reduced the flicker signal. In this report, we show that the contrast signal required for judder-induced edge flicker is due to temporal contrast and not simply to spatial contrast. Bars made of random dots of the same dot density as the background exhibit edge flicker when moved at sufficient rate.
Orion EM-1 Crew Module Structural Test Article Move to Birdcage
2016-11-16
Inside the Neil Armstrong Operations and Checkout Building at NASA’s Kennedy Space Center in Florida, Lockheed Martin technicians monitor the progress as a crane moves the Orion crew module structural test article (STA) along the center aisle of the high bay. The STA arrived aboard NASA's Super Guppy aircraft at the Shuttle Landing Facility operated by Space Florida. The test article will be moved to a test tool called the birdcage for further testing. The Orion spacecraft will launch atop NASA’s Space Launch System rocket on EM-1, its first deep space mission, in late 2018.
A mathematical model for computer image tracking.
Legters, G R; Young, T Y
1982-06-01
A mathematical model using an operator formulation for a moving object in a sequence of images is presented. Time-varying translation and rotation operators are derived to describe the motion. A variational estimation algorithm is developed to track the dynamic parameters of the operators. The occlusion problem is alleviated by using a predictive Kalman filter to keep the tracking on course during severe occlusion. The tracking algorithm (variational estimation in conjunction with Kalman filter) is implemented to track moving objects with occasional occlusion in computer-simulated binary images.
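A minimal constant-velocity Kalman filter in one dimension, sketching how prediction alone can carry a track through frames where the object is occluded and no measurement is available. The noise values and the constant-velocity model are assumptions for illustration, not the paper's operator formulation or its variational estimator.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-2, r=1.0):
    """Track position with a constant-velocity model; None marks an occluded frame."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition for (position, velocity)
    H = np.array([[1.0, 0.0]])              # only position is measured
    Q = q * np.eye(2)                        # process noise
    R = np.array([[r]])                      # measurement noise
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    track = []
    for z in measurements:
        x, P = F @ x, F @ P @ F.T + Q        # predict
        if z is not None:                    # update only when the object is visible
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        track.append(float(x[0, 0]))
    return track

# Object moves about 1 unit/frame; frames 5-7 are occluded, but the track coasts through.
print(kalman_track([0, 1, 2, 3, None, None, None, 7, 8]))
```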
Cost considerations for long-term ecological monitoring
Caughlan, L.; Oakley, K.L.
2001-01-01
For an ecological monitoring program to be successful over the long term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program's focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization. We present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs--what dollars are spent on--and economic costs, which include opportunity costs. Often, the largest portion of a monitoring program budget is spent on data collection, and other, critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program's longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoints occur during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail. The costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.
ERIC Educational Resources Information Center
Flombaum, Jonathan I.; Scholl, Brian J.
2006-01-01
Meaningful visual experience requires computations that identify objects as the same persisting individuals over time, motion, occlusion, and featural change. This article explores these computations in the tunnel effect: When an object moves behind an occluder, and then an object later emerges following a consistent trajectory, observers…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edmunds, D; Donovan, E
Purpose: To determine whether the Microsoft Kinect Version 2 (Kinect v2), a commercial off-the-shelf (COTS) depth sensor designed for entertainment purposes, was robust to the radiotherapy treatment environment and could be suitable for monitoring of voluntary breath-hold compliance. This could complement current visual monitoring techniques, and be useful for heart-sparing left breast radiotherapy. Methods: In-house software to control Kinect v2 sensors, and capture output information, was developed using the free Microsoft software development kit, and the Cinder creative coding C++ library. Each sensor was used with a 12 m USB 3.0 active cable. A solid water block was used as the object. The depth accuracy and precision of the sensors was evaluated by comparing Kinect-reported distance to the object with a precision laser measurement across a distance range of 0.6 m to 2.0 m. The object was positioned on a high-precision programmable motion platform and moved in two programmed motion patterns, and the Kinect-reported distance was logged. Robustness to the radiation environment was tested by repeating all measurements with a linear accelerator operating over a range of pulse repetition frequencies (6 Hz to 400 Hz) and dose rates of 50 to 1500 monitor units (MU) per minute. Results: The complex, consistent relationship between true and measured distance was unaffected by the radiation environment, as was the ability to detect motion. Sensor precision was < 1 mm and the accuracy between 1.3 mm and 1.8 mm when a distance correction was applied. Both motion patterns were tracked successfully with root mean squared errors (RMSE) of 1.4 mm and 1.1 mm, respectively. Conclusion: Kinect v2 sensors are capable of tracking pre-programmed motion patterns with an accuracy <2 mm and appear robust to the radiotherapy treatment environment. A clinical trial using the Kinect v2 sensor for monitoring voluntary breath hold has ethical approval and is open to recruitment. The authors are supported by a National Institute of Health Research (NIHR) Career Development Fellowship (CDF-2013-06-005). Microsoft Corporation donated three sensors. The views expressed in this publication are those of the author(s) and not necessarily those of the NHS, the National Institute for Health Research or the Department of Health.
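A sketch of the accuracy bookkeeping reported above: the sensor-reported positions are compared with the programmed (or laser-measured) motion and summarized as a root mean squared error. The data values below are invented.

```python
import numpy as np

def rmse_mm(reported_mm, reference_mm):
    """Root mean squared error between sensor-reported and reference positions."""
    reported = np.asarray(reported_mm, dtype=float)
    reference = np.asarray(reference_mm, dtype=float)
    return float(np.sqrt(np.mean((reported - reference) ** 2)))

programmed = [0, 5, 10, 15, 10, 5, 0]                   # mm, one breathing-like cycle
measured   = [0.4, 5.9, 11.2, 14.1, 9.2, 5.6, 0.8]      # mm, sensor readings
print(rmse_mm(measured, programmed))                     # well under 2 mm for this example
```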
Improved Scanners for Microscopic Hyperspectral Imaging
NASA Technical Reports Server (NTRS)
Mao, Chengye
2009-01-01
Improved scanners to be incorporated into hyperspectral microscope-based imaging systems have been invented. Heretofore, in microscopic imaging, including spectral imaging, it has been customary to either move the specimen relative to the optical assembly that includes the microscope or else move the entire assembly relative to the specimen. It becomes extremely difficult to control such scanning when submicron translation increments are required, because the high magnification of the microscope enlarges all movements in the specimen image on the focal plane. To overcome this difficulty, in a system based on this invention, no attempt would be made to move either the specimen or the optical assembly. Instead, an objective lens would be moved within the assembly so as to cause translation of the image at the focal plane: the effect would be equivalent to scanning in the focal plane. The upper part of the figure depicts a generic proposed microscope-based hyperspectral imaging system incorporating the invention. The optical assembly of this system would include an objective lens (normally, a microscope objective lens) and a charge-coupled-device (CCD) camera. The objective lens would be mounted on a servomotor-driven translation stage, which would be capable of moving the lens in precisely controlled increments, relative to the camera, parallel to the focal-plane scan axis. The output of the CCD camera would be digitized and fed to a frame grabber in a computer. The computer would store the frame-grabber output for subsequent viewing and/or processing of images. The computer would contain a position-control interface board, through which it would control the servomotor. There are several versions of the invention. An essential feature common to all versions is that the stationary optical subassembly containing the camera would also contain a spatial window, at the focal plane of the objective lens, that would pass only a selected portion of the image. In one version, the window would be a slit, the CCD would contain a one-dimensional array of pixels, and the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion. The image built up by scanning in this case would be an ordinary (non-spectral) image. In another version, the optics of which are depicted in the lower part of the figure, the spatial window would be a slit, the CCD would contain a two-dimensional array of pixels, the slit image would be refocused onto the CCD by a relay-lens pair consisting of a collimating and a focusing lens, and a prism-grating-prism optical spectrometer would be placed between the collimating and focusing lenses. Consequently, the image on the CCD would be spatially resolved along the slit axis and spectrally resolved along the axis perpendicular to the slit. As in the first-mentioned version, the objective lens would be moved along an axis perpendicular to the slit to spatially scan the image of the specimen in pushbroom fashion.
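To make the pushbroom geometry concrete, the sketch below shows how per-position slit frames could be stacked into a (scan position, slit pixel, wavelength) hypercube; this is an assumed post-processing step for illustration, not part of the invention itself.

```python
# Minimal sketch (assumption, not the inventor's software): assembling a
# hyperspectral cube from pushbroom slit frames. Each CCD frame is assumed to
# be spatially resolved along the slit axis and spectrally resolved along the
# perpendicular axis, as described for the second version of the scanner.
import numpy as np

def assemble_cube(frames):
    """Stack per-position slit frames into a (scan, slit, wavelength) cube.

    frames -- iterable of 2-D arrays of shape (slit_pixels, spectral_bands),
              one per objective-lens position along the scan axis.
    """
    return np.stack(list(frames), axis=0)

# Example: 200 scan positions, 512 pixels along the slit, 128 spectral bands.
rng = np.random.default_rng(0)
frames = (rng.random((512, 128)) for _ in range(200))
cube = assemble_cube(frames)
print(cube.shape)   # (200, 512, 128)
```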
Transport of europium colloids in vadose zone lysimeters at the semiarid Hanford site.
Liu, Ziru; Flury, Markus; Zhang, Z Fred; Harsh, James B; Gee, Glendon W; Strickland, Chris E; Clayton, Ray E
2013-03-05
The objective of this study was to quantify transport of Eu colloids in the vadose zone at the semiarid Hanford site. Eu-hydroxy-carbonate colloids, Eu(OH)(CO3), were applied to the surface of field lysimeters, and migration of the colloids through the sediments was monitored using wick samplers. The lysimeters were exposed to natural precipitation (145-231 mm/year) or artificial irrigation (124-348 mm/year). Wick outflow was analyzed for Eu concentrations, supplemented by electron microscopy and energy-dispersive X-ray analysis. Small amounts of Eu colloids (<1%) were detected in the deepest wick sampler (2.14 m depth) 2.5 months after application and cumulative precipitation of only 20 mm. We observed rapid transport of Eu colloids under both natural precipitation and artificial irrigation; that is, the leading edge of the Eu colloids moved at a velocity of 3 cm/day within the first 2 months after application. Episodic infiltration (e.g., Chinook snowmelt events) caused peaks of Eu in the wick outflow. While a fraction of Eu moved consistent with long-term recharge estimates at the site, the main mass of Eu remained in the top 30 cm of the sediments. This study illustrates that, under field conditions, near-surface colloid mobilization and transport occurred in Hanford sediments.
Execution of saccadic eye movements affects speed perception
Goettker, Alexander; Braun, Doris I.; Schütz, Alexander C.; Gegenfurtner, Karl R.
2018-01-01
Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades. PMID:29440494
David W. Williams; Guohong Li; Ruitong Gao
2004-01-01
Movements of 55 Anoplophora glabripennis (Motschulsky) adults were monitored on 200 willow trees, Salix babylonica L., at a site approximately 80 km southeast of Beijing, China, for 9-14 d in an individual mark-recapture study using harmonic radar. The average movement distance was approximately 14 m, with many beetles not moving at all and others moving >90 m. The rate of movement...
Evidence-Based Design Features Improve Sleep Quality Among Psychiatric Inpatients.
Pyrke, Ryan J L; McKinnon, Margaret C; McNeely, Heather E; Ahern, Catherine; Langstaff, Karen L; Bieling, Peter J
2017-10-01
The primary aim of the present study was to compare sleep characteristics pre- and post-move into a state-of-the-art mental health facility, which offered private sleeping quarters. Significant evidence points toward sleep disruption among psychiatric inpatients. It is unclear, however, how environmental factors (e.g., dorm-style rooms) impact sleep quality in this population. To assess sleep quality, a novel objective technology, actigraphy, was used before and after a facility move. Subjective daily interviews were also administered, along with the Horne-Ostberg Morningness-Eveningness Questionnaire and the Pittsburgh Sleep Quality Index. Actigraphy revealed significant improvements in objective sleep quality following the facility move. Interestingly, subjective report of sleep quality did not correlate with the objective measures. Circadian sleep type appeared to play a role in influencing subjective attitudes toward sleep quality. Built environment has a significant effect on the sleep quality of psychiatric inpatients. Given well-documented disruptions in sleep quality present among psychiatric patients undergoing hospitalization, design elements like single patient bedrooms are highly desirable.
Improved segmentation of occluded and adjoining vehicles in traffic surveillance videos
NASA Astrophysics Data System (ADS)
Juneja, Medha; Grover, Priyanka
2013-12-01
Occlusion in image processing refers to concealment of any part of an object, or of the whole object, from the view of an observer. Real-time videos captured by static cameras on roads often encounter overlapping and, hence, occlusion of vehicles. Occlusion in traffic surveillance videos usually occurs when an object being tracked is hidden by another object. This makes it difficult for object detection algorithms to distinguish all the vehicles efficiently. Morphological operations also tend to join vehicles in close proximity, resulting in the formation of a single bounding box around more than one vehicle. Such problems lead to errors in further video processing, such as counting vehicles in a video. The proposed system brings forward an efficient moving object detection and tracking approach to reduce such errors. The paper uses a successive frame subtraction technique for detection of moving objects. Further, it implements the watershed algorithm to segment overlapped and adjoining vehicles. The segmentation results are improved by the use of noise removal and morphological operations.
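A minimal OpenCV sketch of the kind of pipeline the paper describes is given below: successive frame subtraction, morphological cleaning, and marker-based watershed to split adjoining blobs. Thresholds, kernel sizes, and function structure are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch (not the paper's exact pipeline): successive frame subtraction
# followed by marker-based watershed segmentation with OpenCV, to split
# adjoining vehicle blobs. Thresholds and kernel sizes are illustrative.
import cv2
import numpy as np

def segment_moving_vehicles(prev_gray, curr_gray, curr_bgr):
    """prev_gray/curr_gray: consecutive grayscale frames; curr_bgr: colour frame."""
    # 1. Successive frame subtraction + threshold to get a moving-object mask.
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # 2. Morphological cleaning (this is also where nearby blobs tend to merge).
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    # 3. Markers for watershed: sure foreground from the distance transform.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(mask, sure_fg)

    n_markers, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # background becomes 1, objects 2..n
    markers[unknown == 255] = 0      # unknown region left for watershed to decide

    # 4. Watershed splits overlapped / adjoining vehicles into separate labels.
    markers = cv2.watershed(curr_bgr, markers)
    return markers                   # label image; boundary pixels are -1
```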
Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian
2011-01-01
Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge--a sudden change of the tilted object. By using 3-D animated computer graphics we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s(-1) (1.5 rev. s(-1)) sufficed to inhibit the perception of the rotary movement with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s(-1) (3.125 rev. s(-1)). Further analysis by multiple linear regression revealed major influences on the effectiveness of the magic trick of object height and object area, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.
Open source data logger for low-cost environmental monitoring
2014-01-01
Abstract The increasing transformation of biodiversity into a data-intensive science has seen numerous independent systems linked and aggregated into the current landscape of biodiversity informatics. This paper outlines how we can move forward with this programme, incorporating real time environmental monitoring into our methodology using low-power and low-cost computing platforms. PMID:24855446
NASA Technical Reports Server (NTRS)
Shepherd, C. K.
1989-01-01
Compact transmitters eliminate need for wires to monitors. Biomedical telectrode is small electronic package that attaches to patient in manner similar to small adhesive bandage. Patient wearing biomedical telectrodes moves freely, without risk of breaking or entangling wire connections. Especially beneficial to patients undergoing electrocardiographic monitoring in intensive-care units in hospitals. Eliminates nuisance of coping with wire connections while dressing and going to toilet.
As the lower Saint Louis River moves closer and closer to delisting as an Area of Concern, it is incumbent on us to measure, assess, and report on our success. Going forward, it's equally important that we continue monitoring to protect and sustain the healthy ecosystems we...
Spies, Surveillance and Stakeouts: Monitoring Muslim Moves in British State Schools
ERIC Educational Resources Information Center
Sian, Katy Pal
2015-01-01
This article will provide a critique of the PVE initiative and its implementation within the context of primary education following the events of 9/11, the 2001 riots and 7/7. Drawing upon empirical data I will argue that the monitoring of young Muslims and "extremism" is problematic and reinforces the logics of Islamophobia through…
Landsat 7 ETM+ provides an opportunity to extend the area and frequency with which we are able to monitor the Earth's surface with fine spatial resolution data. To take advantage of this opportunity it is necessary to move beyond the traditional image-by-image approac...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses. The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field.
Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market. Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
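As one way to picture the distance-sampling analysis mentioned above, the sketch below fits a half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)) to detected/missed outcomes by distance. It is an assumed analysis in the spirit of what is described, not the project's code.

```python
# Minimal sketch (an assumption about the analysis, not the project's code):
# fitting a half-normal detection function g(d) = exp(-d^2 / (2 sigma^2)) to
# detected/missed outcomes as a function of distance from the camera system.
import numpy as np
from scipy.optimize import minimize_scalar

def fit_half_normal(distances_m, detected):
    """Maximum-likelihood estimate of sigma for a half-normal detection curve.

    distances_m -- distance of each trial target from the camera (metres)
    detected    -- 1 if the target was detected/identified, else 0
    """
    d = np.asarray(distances_m, dtype=float)
    y = np.asarray(detected, dtype=float)

    def neg_log_lik(sigma):
        p = np.exp(-d ** 2 / (2.0 * sigma ** 2))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    res = minimize_scalar(neg_log_lik, bounds=(1.0, 2000.0), method="bounded")
    return res.x

# Example: detection probability dropping off toward ~500 m.
rng = np.random.default_rng(1)
dist = rng.uniform(0, 600, 400)
p_true = np.exp(-dist ** 2 / (2 * 250.0 ** 2))
obs = rng.binomial(1, p_true)
print(fit_half_normal(dist, obs))   # should recover roughly sigma ~ 250
```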
Schlieren System and method for moving objects
NASA Technical Reports Server (NTRS)
Weinstein, Leonard M. (Inventor)
1995-01-01
A system and method are provided for recording density changes in a flow field surrounding a moving object. A mask having an aperture for regulating the passage of images is placed in front of an image recording medium. An optical system is placed in front of the mask. A transition having a light field-of-view and a dark field-of-view is located beyond the test object. The optical system focuses an image of the transition at the mask such that the aperture causes a band of light to be defined on the image recording medium. The optical system further focuses an image of the object through the aperture of the mask so that the image of the object appears on the image recording medium. Relative motion is minimized between the mask and the transition. Relative motion is also minimized between the image recording medium and the image of the object. In this way, the image of the object and density changes in a flow field surrounding the object are recorded on the image recording medium when the object crosses the transition in front of the optical system.
Vection: the contributions of absolute and relative visual motion.
Howard, I P; Howard, A
1994-01-01
Inspection of a visual scene rotating about the vertical body axis induces a compelling sense of self rotation, or circular vection. Circular vection is suppressed by stationary objects seen beyond the moving display but not by stationary objects in the foreground. We hypothesised that stationary objects in the foreground facilitate vection because they introduce a relative-motion signal into what would otherwise be an absolute-motion signal. Vection latency and magnitude were measured with a full-field moving display and with stationary objects of various sizes and at various positions in the visual field. The results confirmed the hypothesis. Vection latency was longer when there were no stationary objects in view than when stationary objects were in view. The effect of stationary objects was particularly evident at low stimulus velocities. At low velocities a small stationary point significantly increased vection magnitude in spite of the fact that, at higher stimulus velocities and with other stationary objects in view, fixation on a stationary point, if anything, reduced vection. Changing the position of the stationary objects in the field of view did not affect vection latencies or magnitudes.
Source monitoring in Korsakoff's syndrome: "Did I touch the toothbrush or did I imagine doing so?"
El Haj, Mohamad; Nandrino, Jean Louis; Coello, Yann; Miller, Ralph; Antoine, Pascal
2017-06-01
There is a body of research suggesting compromised ability to distinguish between different external sources of information (i.e., external monitoring) in Korsakoff's syndrome. Here we replicate and extend this literature by assessing the ability of patients with Korsakoff's syndrome to distinguish between different external sources of information (i.e., external monitoring), between internal and external sources of information (i.e., reality monitoring), and between different internal sources of information (i.e., internal monitoring). On the external monitoring assessment, patients with Korsakoff's syndrome and controls watched the experimenter place objects (e.g., a toothbrush) in either a black or white box; afterward, they were asked to remember where the objects had been placed. On the reality monitoring assessment, participants had to either place objects or watch the experimenter place objects in a black box; afterward, they were asked to remember whether the objects had been placed in the box by themselves or by the experimenter. On the internal monitoring assessment, participants had to either place objects or imagine themselves placing objects in a black box; afterward, they were asked to remember whether they had previously placed the objects in the box or imagined doing so. Analyses demonstrated lower external and internal monitoring in patients with Korsakoff's syndrome than in controls, but no significant difference was observed between the two populations on the reality monitoring condition. Our data provide preliminary evidence that the ability to recognize oneself as the author of one's own actions may be relatively preserved in Korsakoff's syndrome. Copyright © 2017 Elsevier Ltd. All rights reserved.
To Pass or Not to Pass: Modeling the Movement and Affordance Dynamics of a Pick and Place Task
Lamb, Maurice; Kallen, Rachel W.; Harrison, Steven J.; Di Bernardo, Mario; Minai, Ali; Richardson, Michael J.
2017-01-01
Humans commonly engage in tasks that require or are made more efficient by coordinating with other humans. In this paper we introduce a task dynamics approach for modeling multi-agent interaction and decision making in a pick and place task where an agent must move an object from one location to another and decide whether to act alone or with a partner. Our aims were to identify and model (1) the affordance related dynamics that define an actor's choice to move an object alone or to pass it to their co-actor and (2) the trajectory dynamics of an actor's hand movements when moving to grasp, relocate, or pass the object. Using a virtual reality pick and place task, we demonstrate that both the decision to pass or not pass an object and the movement trajectories of the participants can be characterized in terms of a behavioral dynamics model. Simulations suggest that the proposed behavioral dynamics model exhibits features observed in human participants including hysteresis in decision making, non-straight line trajectories, and non-constant velocity profiles. The proposed model highlights how the same low-dimensional behavioral dynamics can operate to constrain multiple (and often nested) levels of human activity and suggests that knowledge of what, when, where and how to move or act during pick and place behavior may be defined by these low dimensional task dynamics and, thus, can emerge spontaneously and in real-time with little a priori planning. PMID:28701975
Micronutrient Fortification of Food in Southeast Asia: Recommendations from an Expert Workshop
Gayer, Justine; Smith, Geoffry
2015-01-01
Micronutrient deficiencies remain a significant public health issue in Southeast Asia, particularly in vulnerable populations, such as women of reproductive age and young children. An important nutrition-specific intervention to address micronutrient malnutrition is fortification of staple foods and condiments. In October 2013, the International Life Sciences Institute (ILSI) Southeast Asia Region held a workshop on micronutrient fortification of food in Bangkok, Thailand. The objective was to engage multiple stakeholders in a discussion on food fortification and its importance as a public health intervention in Southeast Asia, and to identify and address key challenges/gaps in and potential opportunities for fortification of foods in ASEAN countries. Key challenges that were identified include: “scaling up” and mobilizing sustainable support for fortification programs in the form of multi-stakeholder partnerships, effecting policy change to support mandatory fortification, long-term monitoring of the programs’ compliance and efficacy in light of limited resources, and increasing awareness and uptake of fortified products through social marketing campaigns. Future actions recommended include the development of terms of engagement and governance for multi-stakeholder partnerships, moving towards a sustainable business model and more extensive monitoring, both for effectiveness and efficacy and for enforcement of fortification legislation. PMID:25608937
Preparing images for publication: part 1.
Devigus, Alessandro; Paul, Stefan
2006-04-01
Images play a vital role in the publication and presentation of clinical and scientific work. Within clinical photography, color reproduction has always been a contentious issue. With the development of new technologies, the variables affecting color reproduction have changed, and photographers have moved away from film-based to digital photographic imaging systems. To develop an understanding of color, knowledge about the basic principles of light and vision is important. An object's color is determined by which wavelengths of light it reflects. Colors of light and colors of pigment behave differently. Due to technical limitations, monitors and printers are unable to reproduce all the colors we can see with our eyes, also called the LAB color space. In order to optimize the output of digital clinical images, color management solutions need to be integrated in the photographic workflow; however, their use is still limited in the medical field. As described in part 2 of this article, calibrating your computer monitor and using an 18% gray background card are easy ways to enable more consistent color reproduction for publication. In addition, some basic information about the various camera settings is given to facilitate the use of this new digital equipment in daily practice.
The Reach and Impact of Direct Marketing via Brand Websites of Moist Snuff.
Timberlake, David S; Bruckner, Tim A; Ngo, Vyvian; Nikitin, Dmitriy
2016-04-01
Restricting tobacco marketing is a key element in the US Food and Drug Administration's (FDA) public health framework for regulating tobacco. Given the dearth of empirical data on direct marketing, the objective of this study was to assess the reach and impact of promotions on sales through snuff websites. Nine brands of snuff, representing more than 90% of market share, were monitored for content of coupons, sweepstakes, contests, and other promotions on their respective websites. Monthly sales data and website traffic for the 9 brands, corresponding to the 48-month period of January 2011 through December 2014, were obtained from proprietary sources. A time-series analysis, based on the autoregressive, integrated, moving average (ARIMA) method, was employed for testing the relationships among sales, website visits, and promotions. Website traffic increased substantially during the promotion periods for most brands. Time-series analyses, however, revealed that promotion periods for 5 of 7 brands did not significantly correlate with monthly snuff sales. The success in attracting tobacco consumers to website promotions demonstrates the marketing reach of snuff manufacturers. This form of direct marketing should be monitored by the FDA given evidence of adolescents' exposure to cigarette brand websites.
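The ARIMA analysis described above can be sketched as an ARIMA model with an exogenous promotion indicator. The (1, 1, 1) order, the variable names, and the synthetic data are assumptions for illustration, not the study's actual specification.

```python
# Minimal sketch (assumed workflow, not the study's code): testing whether
# promotion months correlate with monthly snuff sales using an ARIMA model
# with an exogenous promotion indicator.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# 48 months of data, January 2011 through December 2014.
idx = pd.date_range("2011-01-01", periods=48, freq="MS")
rng = np.random.default_rng(2)
promo = rng.integers(0, 2, 48)                          # 1 = promotion active
sales = 1000 + np.cumsum(rng.normal(0, 20, 48)) + 15 * promo

model = SARIMAX(pd.Series(sales, index=idx),
                exog=pd.Series(promo, index=idx),
                order=(1, 1, 1))
fit = model.fit(disp=False)

# The coefficient (and p-value) on the promotion indicator is the quantity of
# interest: a non-significant value mirrors the finding for 5 of 7 brands.
print(fit.params)
print(fit.pvalues)
```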
Monitoring Sand Sheets and Dunes
2017-06-12
NASA's Mars Reconnaissance Orbiter (MRO) captured this crater featuring sand dunes and sand sheets on its floor. What are sand sheets? Snowfall on Earth is a good analogy for sand sheets: when it snows, the ground gets blanketed with up to a few meters of snow. The snow mantles the ground and "mimics" the underlying topography. Sand sheets likewise mantle the ground as a relatively thin deposit. This kind of environment has been monitored by HiRISE since 2007 to look for movement in the ripples covering the dunes and sheets. This is how scientists who study wind-blown sand can track the amount of sand moving through the area and possibly where the sand came from. Using the present environment is crucial to understanding the past: sand dunes, sheets, and ripples sometimes become preserved as sandstone and contain clues as to how they were deposited. The map is projected here at a scale of 25 centimeters (9.8 inches) per pixel. [The original image scale is 25 centimeters (9.8 inches) per pixel (with 1 x 1 binning); objects on the order of 75 centimeters (29.5 inches) across are resolved.] North is up. https://photojournal.jpl.nasa.gov/catalog/PIA21757
An Approach to Extract Moving Objects from Mls Data Using a Volumetric Background Representation
NASA Astrophysics Data System (ADS)
Gehrung, J.; Hebel, M.; Arens, M.; Stilla, U.
2017-05-01
Data recorded by mobile LiDAR systems (MLS) can be used for the generation and refinement of city models or for the automatic detection of long-term changes in the public road space. Since for this task only static structures are of interest, all mobile objects need to be removed. This work presents a straightforward but powerful approach to remove the subclass of moving objects. A probabilistic volumetric representation is utilized to separate MLS measurements recorded by a Velodyne HDL-64E into mobile objects and static background. The method was subjected to a quantitative and a qualitative examination using multiple datasets recorded by a mobile mapping platform. The results show that depending on the chosen octree resolution 87-95% of the measurements are labeled correctly.
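A simplified stand-in for the probabilistic volumetric representation is sketched below as a dense log-odds occupancy grid. The actual method uses an octree and ray casting, so this is only meant to illustrate how repeated observations separate static background from mobile objects; all parameter values are assumptions.

```python
# Minimal sketch (a simplified stand-in for the paper's octree method): a dense
# per-voxel log-odds occupancy grid. Voxels that are repeatedly observed as
# occupied converge to "static background"; measurements falling into voxels
# whose occupancy stays low can be labelled as moving objects.
import numpy as np

L_HIT, L_MISS = 0.85, -0.4          # log-odds increments (tuning assumptions)
L_MIN, L_MAX = -2.0, 3.5            # clamping limits

class VoxelGrid:
    def __init__(self, shape, resolution, origin):
        self.log_odds = np.zeros(shape, dtype=np.float32)
        self.resolution = resolution           # voxel edge length in metres
        self.origin = np.asarray(origin)       # world coordinates of voxel (0,0,0)

    def _index(self, points):
        return np.floor((points - self.origin) / self.resolution).astype(int)

    def integrate_scan(self, hit_points):
        """Raise occupancy for voxels containing LiDAR returns of one scan."""
        ix = self._index(hit_points)
        self.log_odds[ix[:, 0], ix[:, 1], ix[:, 2]] += L_HIT
        np.clip(self.log_odds, L_MIN, L_MAX, out=self.log_odds)

    def decay(self):
        """Apply a uniform miss update so transient occupancy fades away
        (a crude replacement for per-ray free-space updates)."""
        self.log_odds += L_MISS
        np.clip(self.log_odds, L_MIN, L_MAX, out=self.log_odds)

    def is_static(self, points, threshold=1.5):
        """True where a measurement falls into a confidently occupied voxel."""
        ix = self._index(points)
        return self.log_odds[ix[:, 0], ix[:, 1], ix[:, 2]] > threshold
```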
NASA Astrophysics Data System (ADS)
Kenworthy, Matthew
2017-04-01
It's not often that an astronomical object gets its own dedicated observatory, but as the planet Beta Pictoris b moves in front of its host star, its every move will be watched by bRing, eager to discover more about the planet's Hill sphere, explains Matthew Kenworthy.
Dynamical evolution of motion perception.
Kanai, Ryota; Sheth, Bhavin R; Shimojo, Shinsuke
2007-03-01
Motion is defined as a sequence of positional changes over time. However, in perception, spatial position and motion dynamically interact with each other. This reciprocal interaction suggests that the perception of a moving object itself may dynamically evolve following the onset of motion. Here, we show evidence that the percept of a moving object systematically changes over time. In experiments, we introduced a transient gap in the motion sequence or a brief change in some feature (e.g., color or shape) of an otherwise smoothly moving target stimulus. Observers were highly sensitive to the gap or transient change if it occurred soon after motion onset (< or =200 ms), but significantly less so if it occurred later (> or = 300 ms). Our findings suggest that the moving stimulus is initially perceived as a time series of discrete potentially isolatable frames; later failures to perceive change suggests that over time, the stimulus begins to be perceived as a single, indivisible gestalt integrated over space as well as time, which could well be the signature of an emergent stable motion percept.
The Role of Visual Working Memory in Attentive Tracking of Unique Objects
ERIC Educational Resources Information Center
Makovski, Tal; Jiang, Yuhong V.
2009-01-01
When tracking moving objects in space humans usually attend to the objects' spatial locations and update this information over time. To what extent do surface features assist attentive tracking? In this study we asked participants to track identical or uniquely colored objects. Tracking was enhanced when objects were unique in color. The benefit…
Monitoring beach changes using GPS surveying techniques
Morton, Robert; Leach, Mark P.; Paine, Jeffrey G.; Cardoza, Michael A.
1993-01-01
The adaptation of Global Positioning System (GPS) surveying techniques to beach monitoring activities is a promising response to this challenge. An experiment that employed both GPS and conventional beach surveying was conducted, and a new beach monitoring method employing kinematic GPS surveys was devised. This new method involves the collection of precise shore-parallel and shore-normal GPS positions from a moving vehicle so that an accurate two-dimensional beach surface can be generated. Results show that the GPS measurements agree with conventional shore-normal surveys at the 1 cm level, and repeated GPS measurements employing the moving vehicle demonstrate a precision of better than 1 cm. In addition, the nearly continuous sampling and increased resolution provided by the GPS surveying technique reveals alongshore changes in beach morphology that are undetected by conventional shore-normal profiles. The application of GPS surveying techniques combined with the refinement of appropriate methods for data collection and analysis provides a better understanding of beach changes, sediment transport, and storm impacts.
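A minimal sketch of how the scattered kinematic GPS fixes could be gridded into a two-dimensional beach surface, with two surveys differenced to map beach change, is given below. The gridding choices are assumptions, not the authors' processing.

```python
# Minimal sketch (an assumption about the processing, not the authors' code):
# interpolating scattered kinematic GPS fixes (easting, northing, elevation)
# onto a regular grid to build a two-dimensional beach surface, and
# differencing two surveys to map beach change.
import numpy as np
from scipy.interpolate import griddata

def beach_surface(easting, northing, elevation, cell=1.0):
    """Grid scattered GPS points onto a regular surface (cell size in metres)."""
    xi = np.arange(easting.min(), easting.max(), cell)
    yi = np.arange(northing.min(), northing.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    gz = griddata((easting, northing), elevation, (gx, gy), method="linear")
    return gx, gy, gz

def beach_change(z_before, z_after):
    """Elevation difference between two surveys gridded onto the same cells."""
    return z_after - z_before
```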
Monitoring pulse oximetry via radiotelemetry in freely-moving lambs.
Reix, Philippe; Dumont, Sylvain; Duvareille, Charles; Cyr, Jonathan; Moreau-Bussière, François; Arsenault, Julie; Praud, Jean-Paul
2005-05-12
This study was aimed at validating the use of a custom-made wireless pulse oximeter in freely moving lambs, using radiotelemetry transmission. First, measurements obtained simultaneously using the new, wireless oximeter and a standard commercially-available pulse oximeter (Nonin 8500) were compared in five lambs during 5min episodes of normoxia, hypoxia and hyperoxia. Correlation between the two oximeters for both SpO(2) and heart rate was very good, regardless of oxygenation conditions. Secondly, the capabilities of our device were assessed during more than 45h of polysomnographic recordings in seven lambs. According to the plethysmographic pulse waveform, reliable SpO(2) values were obtained in more than 85% of recording time. Multiple decreases in SpO(2) were readily observed after spontaneous apneas in preterm lambs. It is concluded that our wireless pulse oximeter performs as reliably as a standard pulse oximeter for monitoring SpO(2) variations in lambs, and offers new perspectives for researchers interested in continuous monitoring of oxygenation throughout sleep stages and wakefulness.
Zielinski, Ingar Marie; Steenbergen, Bert; Schmidt, Anna; Klingels, Katrijn; Simon Martinez, Cristina; de Water, Pascal; Hoare, Brian
2018-03-23
Objective: To introduce the Windmill-task, a new objective assessment tool to quantify the presence of mirror movements (MMs) in children with unilateral cerebral palsy (UCP), which are typically assessed with the observation-based Woods and Teuber scale (W&T). Design: Prospective, observational cohort pilot study. Setting: Children's hospital. Participants: Prospective cohort of children (N=23) with UCP (age range, 6-15y; mean age, 10.5±2.7y). Interventions: Not applicable. Main outcome measures: The concurrent validity of the Windmill-task is assessed, and the sensitivity and specificity for MM detection are compared between both assessments. To assess the concurrent validity, Windmill-task data are compared with W&T data using Spearman rank correlations (ρ) for 2 conditions: affected hand moving vs less affected hand moving. Sensitivity and specificity are compared by measuring the mean percentage of children assessed inconsistently across both assessments. Results: Outcomes of both assessments correlated significantly (affected hand moving: ρ=.520, P=.005; less affected hand moving: ρ=.488, P=.009). However, many children displayed MMs on the Windmill-task but not on the W&T (sensitivity: affected hand moving, 27.5%; less affected hand moving, 40.6%). Only 2 children displayed MMs on the W&T but not on the Windmill-task (specificity: affected hand moving, 2.9%; less affected hand moving, 1.4%). Conclusions: The Windmill-task appears to be a valid tool to assess MMs in children with UCP, with the additional advantage of greater sensitivity to detect MMs. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
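The concurrent-validity calculation reported above reduces to a Spearman rank correlation per condition, sketched below with hypothetical scores.

```python
# Minimal sketch (illustrative only): the concurrent-validity calculation,
# correlating Windmill-task mirror-movement scores with Woods and Teuber (W&T)
# ratings using Spearman's rho, run separately for the affected-hand-moving
# and less-affected-hand-moving conditions. The example values are invented.
from scipy.stats import spearmanr

def concurrent_validity(windmill_scores, wt_scores):
    """Return (rho, p) for one condition; inputs are per-child scores."""
    rho, p = spearmanr(windmill_scores, wt_scores)
    return rho, p

# Hypothetical example values for a handful of children:
print(concurrent_validity([3, 0, 5, 2, 4, 1], [2, 0, 3, 1, 3, 1]))
```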
NASA Astrophysics Data System (ADS)
Hosoki, Ai; Nishiyama, Michiko; Choi, Yongwoon; Watanabe, Kazuhiro
2011-05-01
In this paper, we propose a method for discriminating between a moving human and a moving object by means of a hetero-core fiber smart mat sensor, which produces a change in optical loss over time. In addition to the general advantages of fiber optic sensors, such as flexibility, thin size, and resistance to electromagnetic interference, a hetero-core fiber optic sensor is sensitive to bending of the sensor portion and independent of temperature fluctuations. Therefore, the hetero-core fiber thin mat sensor can use fewer sensing portions than conventional floor pressure sensors and can cover a wide area spanning the length of a stride. Experimental results for human walking tests showed that the mat sensors worked reproducibly in real time when foot placement on the mat sensor was restricted to defined locations. Focusing on the number of temporal peaks in the optical loss, human walking and the movement of a wheeled platform induced peak counts in the ranges of 1-3 and 5-7, respectively, for 10 persons (9 male, 1 female). We therefore conclude that the hetero-core fiber mat sensor is capable of discriminating between a moving human and an object such as a wheeled platform based on the number of peaks in the temporal optical loss.
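A minimal sketch of the peak-counting discrimination is given below, using the reported ranges of 1-3 peaks for walking and 5-7 for a wheeled platform; the peak-detection settings are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): counting peaks in the
# temporal optical-loss signal of the hetero-core fiber mat sensor and using
# the reported ranges (1-3 peaks for walking, 5-7 for a wheeled platform) to
# discriminate the two. The prominence threshold is an illustrative assumption.
import numpy as np
from scipy.signal import find_peaks

def classify_crossing(optical_loss_db, prominence=0.2):
    """Classify one crossing event from its optical-loss time series."""
    peaks, _ = find_peaks(np.asarray(optical_loss_db), prominence=prominence)
    n = len(peaks)
    if 1 <= n <= 3:
        return "human walking", n
    if 5 <= n <= 7:
        return "wheeled platform", n
    return "unclassified", n
```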
Human observers are biased in judging the angular approach of a projectile.
Welchman, Andrew E; Tuck, Val L; Harris, Julie M
2004-01-01
How do we decide whether an object approaching us will hit us? The optic array provides information sufficient for us to determine the approaching trajectory of a projectile. However, when using binocular information, observers report that trajectories near the mid-sagittal plane are wider than they actually are. Here we extend this work to consider stimuli containing additional depth cues. We measure observers' estimates of trajectory direction first for computer rendered, stereoscopically presented, rich-cue objects, and then for real objects moving in the world. We find that, under both rich cue conditions and with real moving objects, observers show positive bias, overestimating the angle of approach when movement is near the mid-sagittal plane. The findings question whether the visual system, using both binocular and monocular cues to depth, can make explicit estimates of the 3-D location and movement of objects in depth.
Finding Kuiper Belt Objects Below the Detection Limit
NASA Astrophysics Data System (ADS)
Whidden, Peter; Kalmbach, Bryce; Bektesevic, Dino; Connolly, Andrew; Jones, Lynne; Smotherman, Hayden; Becker, Andrew
2018-01-01
We demonstrate a novel approach for uncovering the signatures of moving objects (e.g. Kuiper Belt Objects) below the detection thresholds of single astronomical images. To do so, we employ a matched filter moving at specific rates of proposed orbits through a time-domain dataset. This is analogous to the better-known "shift-and-stack" method; however, it uses neither direct shifting nor stacking of the image pixels. Instead of resampling the raw pixels to create an image stack, we integrate the object detection probabilities across multiple single-epoch images to accrue support for a proposed orbit. The filtering kernel provides a measure of the probability that an object is present along a given orbit, and enables the user to make principled decisions about when the search has been successful, and when it may be terminated. The results we present here utilize GPUs to speed up the search by two orders of magnitude over CPU implementations.
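The idea of accruing support for a proposed orbit without shifting or stacking pixels can be sketched as follows; the function and array names are hypothetical, and the sketch ignores the GPU implementation.

```python
# Minimal sketch (illustrative, not the survey pipeline): accumulating a
# matched-filter score along a proposed constant-rate trajectory across
# single-epoch likelihood images, rather than shifting and stacking pixels.
import numpy as np

def trajectory_score(likelihood_images, times, x0, y0, vx, vy):
    """Sum per-epoch detection scores along a proposed orbit.

    likelihood_images -- list of 2-D arrays (e.g. PSF-matched S/N images)
    times             -- observation times relative to the first epoch
    (x0, y0)          -- starting pixel position of the candidate
    (vx, vy)          -- proposed rate of motion in pixels per unit time
    """
    score = 0.0
    for img, t in zip(likelihood_images, times):
        x = int(round(x0 + vx * t))
        y = int(round(y0 + vy * t))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            score += img[y, x]
    return score

# A grid search over (x0, y0, vx, vy) then keeps candidates whose summed
# score exceeds a threshold chosen from the noise distribution.
```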
Humanoid Mobile Manipulation Using Controller Refinement
NASA Technical Reports Server (NTRS)
Platt, Robert; Burridge, Robert; Diftler, Myron; Graf, Jodi; Goza, Mike; Huber, Eric; Brock, Oliver
2006-01-01
An important class of mobile manipulation problems are move-to-grasp problems where a mobile robot must navigate to and pick up an object. One of the distinguishing features of this class of tasks is its coarse-to-fine structure. Near the beginning of the task, the robot can only sense the target object coarsely or indirectly and make gross motion toward the object. However, after the robot has located and approached the object, the robot must finely control its grasping contacts using precise visual and haptic feedback. This paper proposes that move-to-grasp problems are naturally solved by a sequence of controllers that iteratively refines what ultimately becomes the final solution. This paper introduces the notion of a refining sequence of controllers and characterizes this type of solution. The approach is demonstrated in a move-to-grasp task where Robonaut, the NASA/JSC dexterous humanoid, is mounted on a mobile base and navigates to and picks up a geological sample box. In a series of tests, it is shown that a refining sequence of controllers decreases variance in robot configuration relative to the sample box until a successful grasp has been achieved.
Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging.
Deng, Junjing; Nashed, Youssef S G; Chen, Si; Phillips, Nicholas W; Peterka, Tom; Ross, Rob; Vogt, Stefan; Jacobsen, Chris; Vine, David J
2015-03-09
Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.
The Deep Lens Survey : Real--time Optical Transient and Moving Object Detection
NASA Astrophysics Data System (ADS)
Becker, Andy; Wittman, David; Stubbs, Chris; Dell'Antonio, Ian; Loomba, Dinesh; Schommer, Robert; Tyson, J. Anthony; Margoniner, Vera; DLS Collaboration
2001-12-01
We report on the real-time optical transient program of the Deep Lens Survey (DLS). Meeting the DLS core science weak-lensing objective requires repeated visits to the same part of the sky, 20 visits for 63 sub-fields in 4 filters, on a 4-m telescope. These data are reduced in real-time, and differenced against each other on all available timescales. Our observing strategy is optimized to allow sensitivity to transients on several minute, one day, one month, and one year timescales. The depth of the survey allows us to detect and classify both moving and stationary transients down to ~ 25th magnitude, a relatively unconstrained region of astronomical variability space. All transients and moving objects, including asteroids, Kuiper belt (or trans-Neptunian) objects, variable stars, supernovae, 'unknown' bursts with no apparent host, orphan gamma-ray burst afterglows, as well as airplanes, are posted on the web in real-time for use by the community. We emphasize our sensitivity to detect and respond in real-time to orphan afterglows of gamma-ray bursts, and present one candidate orphan in the field of Abell 1836. See http://dls.bell-labs.com/transients.html.
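A minimal sketch of the image-differencing step that underlies this kind of real-time transient detection is shown below; it assumes registration and PSF matching have already been done and is not the DLS pipeline.

```python
# Minimal sketch (illustrative only): detecting transient and moving sources by
# differencing two registered, PSF-matched epochs and flagging pixels that
# deviate by more than k sigma. Registration and PSF matching are assumed to
# have been done upstream.
import numpy as np

def find_transients(image_new, image_ref, k=5.0):
    diff = image_new - image_ref
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust scatter
    mask = np.abs(diff) > k * sigma
    ys, xs = np.nonzero(mask)
    return list(zip(xs.tolist(), ys.tolist()))   # candidate pixel positions
```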
Geometric Theory of Moving Grid Wavefront Sensor
1977-06-30
Keywords: adaptive optics; wavefront sensor; geometric optics analysis; moving Ronchi grid. A geometric optics analysis is made for a wavefront sensor that uses a moving Ronchi grid. It is shown that by simple data... optical systems being considered or being developed for imaging an object through a turbulent atmosphere. Some of these use a wavefront sensor to...
Some recent developments of the immersed interface method for flow simulation
NASA Astrophysics Data System (ADS)
Xu, Sheng
2017-11-01
The immersed interface method is a general methodology for solving PDEs subject to interfaces. In this talk, I will give an overview of some recent developments of the method toward the enhancement of its robustness for flow simulation. In particular, I will present with numerical results how to capture boundary conditions on immersed rigid objects, how to adopt interface triangulation in the method, and how to parallelize the method for flow with moving objects. With these developments, the immersed interface method can achieve accurate and efficient simulation of a flow involving multiple moving complex objects. Thanks to NSF for the support of this work under Grant NSF DMS 1320317.
Detecting multiple moving objects in crowded environments with coherent motion regions
Cheriyadat, Anil M.; Radke, Richard J.
2013-06-11
Coherent motion regions extend in time as well as space, enforcing consistency in detected objects over long time periods and making the algorithm robust to noisy or short point tracks. The algorithm enforces the constraint that selected coherent motion regions contain disjoint sets of tracks defined in a three-dimensional space that includes a time dimension. It operates directly on raw, unconditioned low-level feature point tracks, and minimizes a global measure of the coherent motion regions. At least one discrete moving object is identified in a time series of video images based on trajectory similarity factors, each of which is a measure of the maximum distance between a pair of feature point tracks.
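The trajectory similarity factor and the grouping of tracks into disjoint coherent motion regions can be illustrated as follows; this is a simplified sketch of the idea, not the patented algorithm.

```python
# Minimal sketch (an illustration of the idea, not the patented algorithm):
# a trajectory similarity factor defined as the maximum distance between two
# feature-point tracks over their overlapping frames, and a greedy grouping of
# tracks whose pairwise maxima stay below a threshold.
import numpy as np

def max_track_distance(track_a, track_b):
    """track_* maps frame index -> (x, y); returns max distance on shared frames."""
    shared = sorted(set(track_a) & set(track_b))
    if not shared:
        return np.inf
    a = np.array([track_a[f] for f in shared], dtype=float)
    b = np.array([track_b[f] for f in shared], dtype=float)
    return float(np.max(np.linalg.norm(a - b, axis=1)))

def group_tracks(tracks, threshold=15.0):
    """Greedily assign tracks to coherent motion regions (disjoint groups)."""
    groups = []
    for tid, track in tracks.items():
        for group in groups:
            if all(max_track_distance(track, tracks[other]) < threshold
                   for other in group):
                group.append(tid)
                break
        else:
            groups.append([tid])
    return groups
```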
Orbital Evolution of Jupiter-family Comets
NASA Astrophysics Data System (ADS)
Ipatov, S. I.; Mather, J. C.
2004-05-01
The orbital evolution of more than 25,000 Jupiter-family comets (JFCs) under the gravitational influence of the planets was studied. After 40 Myr, one of the considered objects (with an initial orbit close to that of Comet 88P) reached aphelion distance Q<3.5 AU, and it moved in orbits with semi-major axis a=2.60-2.61 AU, perihelion distance 1.7-1.4 AU, Q<2.6 AU, e=0.2-0.3, and i=9-33 deg for 8 Myr (and it had Q<3 AU for 100 Myr). So JFCs can rarely reach typical asteroid orbits and move in them for Myrs. In our opinion, it is possible that Comet 133P (Elst-Pizarro), which moves in a typical asteroidal orbit, was earlier a JFC and circularized its orbit partly due to non-gravitational forces. JFCs reached near-Earth object (NEO) orbits more often than typical asteroidal orbits. A few JFCs reached Earth-crossing orbits with a<2 AU and Q<4.2 AU and moved in such orbits for more than 1 Myr (up to tens or even hundreds of Myrs). Three of the considered former JFCs even reached inner-Earth orbits (with Q<0.983 AU) or Aten orbits for Myrs. The probability of a collision with a terrestrial planet for one such object, which moves for millions of years inside Jupiter's orbit, can be greater than the analogous total probability for thousands of other objects. Results obtained by the Bulirsch-Stoer method and by a symplectic method were mainly similar (except for the probabilities of close encounters with the Sun when those were high). Our results show that the trans-Neptunian belt can provide a significant portion of NEOs, or the number of trans-Neptunian objects migrating into the inner solar system could be smaller than earlier considered, or most 1-km former trans-Neptunian objects that had reached NEO orbits disintegrated into mini-comets and dust during a small part of their dynamical lifetimes if these lifetimes are not small. The obtained results show that during the accumulation of the giant planets the total mass of icy bodies delivered to the Earth could be about the mass of water in Earth's oceans. Several of our papers on this problem are available at http://arXiv.org/format/astro-ph/ (e.g., 0305519, 0308448). This work was supported by NASA (NAG5-10776) and INTAS (00-240).
Movements of florida apple snails in relation to water levels and drying events
Darby, P.C.; Bennetts, R.E.; Miller, S.J.; Percival, H.F.
2002-01-01
Florida apple snails (Pomacea paludosa) apparently have only a limited tolerance to wetland drying events (although little direct evidence exists), but their populations routinely face dry downs under natural and managed water regimes. In this paper, we address speculation that apple snails respond to decreasing water levels and potential drying events by moving toward refugia that remain inundated. We monitored the movements of apple snails in central Florida, USA during drying events at the Blue Cypress Marsh (BC) and at Lake Kissimmee (LK). We monitored the weekly movements of 47 BC snails and 31 LK snails using radio-telemetry. Snails tended to stop moving when water depths were below 10 cm. Snails moved along the greatest positive depth gradient (i.e., towards deeper water) when they encountered water depths between 10 and 20 cm. Snails tended to move toward shallower water in water depths ≥50 cm, suggesting that snails were avoiding deep water areas such as canals and sloughs. Of the 11 BC snails originally located in the area that eventually went dry, three (27%) were found in deep water refugia by the end of the study. Only one of the 31 LK snails escaped the drying event by moving to deeper water. Our results indicate that some snails may opportunistically escape drying events through movement. The tendency to move toward deeper water was statistically significant and indicates that this behavioral trait might enhance survival when the spatial extent of a dry down is limited. However, as water level falls below 10 cm, snails stop moving and become stranded. As the spatial extent of a dry down increases, we predict that the number of snails stranded would increase proportionally. Stranded Pomacea paludosa must contend with dry marsh conditions, possibly by aestivation. Little more than anecdotal information has been published on P. paludosa aestivation, but it is a common adaptation among other apple snails (Caenogastropoda: Ampullaridae). © 2002, The Society of Wetland Scientists.
Moving spray-plate center-pivot sprinkler rating index for assessing runoff potential
USDA-ARS?s Scientific Manuscript database
Numerous moving spray-plate center-pivot sprinklers are commercially available providing a range of drop size distributions and wetted diameters. A means to quantitatively compare sprinkler choices in regards to maximizing infiltration and minimizing runoff is currently lacking. The objective of thi...
Dynamical friction for supersonic motion in a homogeneous gaseous medium
NASA Astrophysics Data System (ADS)
Thun, Daniel; Kuiper, Rolf; Schmidt, Franziska; Kley, Wilhelm
2016-05-01
Context. The supersonic motion of gravitating objects through a gaseous ambient medium constitutes a classical problem in theoretical astrophysics. Its application covers a broad range of objects and scales from planetesimals, planets, and all kind of stars up to galaxies and black holes. In particular, the dynamical friction caused by the wake that forms behind the object plays an important role for the dynamics of the system. To calculate the dynamical friction for a particular system, standard formulae based on linear theory are often used. Aims: It is our goal to check the general validity of these formulae and provide suitable expressions for the dynamical friction acting on the moving object, based on the basic physical parameters of the problem: first, the mass, radius, and velocity of the perturber; second, the gas mass density, soundspeed, and adiabatic index of the gaseous medium; and finally, the size of the forming wake. Methods: We perform dedicated sequences of high-resolution numerical studies of rigid bodies moving supersonically through a homogeneous ambient medium and calculate the total drag acting on the object, which is the sum of gravitational and hydrodynamical drag. We study cases without gravity with purely hydrodynamical drag, as well as gravitating objects. In various numerical experiments, we determine the drag force acting on the moving body and its dependence on the basic physical parameters of the problem, as given above. From the final equilibrium state of the simulations, for gravitating objects we compute the dynamical friction by direct numerical integration of the gravitational pull acting on the embedded object. Results: The numerical experiments confirm the known scaling laws for the dependence of the dynamical friction on the basic physical parameters as derived in earlier semi-analytical studies. As a new important result we find that the shock's stand-off distance is revealed as the minimum spatial interaction scale of dynamical friction. Below this radius, the gas settles into a hydrostatic state, which - owing to its spherical symmetry - causes no net gravitational pull onto the moving body. Finally, we derive an analytic estimate for the stand-off distance that can easily be used when calculating the dynamical friction force.
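For reference, the linear-theory "standard formula" alluded to above is usually quoted in the form attributed to Ostriker (1999); in the supersonic case it reads as below, where r_min is an inner cutoff of order the size of the perturber and V t measures the extent of the wake, which is exactly the kind of scale the stand-off-distance result speaks to. The expression is given here as commonly cited, not as reproduced from this paper.

```latex
% Linear-theory dynamical friction on a perturber of mass M moving at speed V
% through gas of density \rho_0 and sound speed c_s, with Mach number
% \mathcal{M} = V / c_s > 1 (form usually attributed to Ostriker 1999):
F_{\mathrm{df}} \;=\; \frac{4\pi G^{2} M^{2} \rho_{0}}{V^{2}}
   \left[\, \tfrac{1}{2}\ln\!\left(1-\frac{1}{\mathcal{M}^{2}}\right)
   + \ln\!\left(\frac{V t}{r_{\min}}\right) \right].
```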
CROSS-SHORE TRANSPORT OF BIMODAL SANDS.
Richmond, Bruce M.; Sallenger, Asbury H.; Edge, Billy L.
1985-01-01
Foreshore sediment level and sediment size were monitored as part of an extensive nearshore processes experiment - DUCK 82. Changes in foreshore texture were compared with computed values of onshore transported material based on current measurements from the surf zone and sediment transport theory. Preliminary results indicate reasonable agreement between predicted size of sediment transported onshore and beach texture changes. It is also demonstrated that coarse sediment may move onshore while finer material may simultaneously move offshore. Refs.
The response of Rana muscosa, the mountain yellow-legged frog, to short distance translocations.
K. R. Matthews
2003-01-01
ABSTRACT.—To determine the response of Mountain Yellow-Legged Frogs to short distance translocations, I placed transmitters on 20 adult frogs and moved them short distances of 144–630 m and monitored their responses for up to 30 days. Of the 20 translocated frogs, seven frogs returned to their original capture site, four frogs moved in the direction of their capture...
Debris-flow initiation from large, slow-moving landslides
Reid, M.E.; Brien, D.L.; LaHusen, R.G.; Roering, J.J.; de la Fuente, J.; Ellen, S.D.
2003-01-01
In some mountainous terrain, debris flows preferentially initiate from the toes and margins of larger, deeper, slower-moving landslides. During the wet winter of 1997, we began real-time monitoring of the large, active Cleveland Corral landslide complex in California, USA. When the main slide is actively moving, small, shallow, first-time slides on the toe and margins mobilize into debris flows and travel down adjacent gullies. We monitored the acceleration of one such failure; changes in velocity provided precursory indications of rapid failure. Three factors appear to aid the initiation of debris flows at this site: 1) locally steepened ground created by dynamic landslide movement, 2) elevated pore-water pressures and abundant soil moisture, and 3) locally cracked and dilated materials. This association between debris flows and large landslides can be widespread in some terrain. Detailed photographic mapping in two watersheds of northwestern California illustrates that the areal density of debris-flow source landsliding is about 3 to 7 times greater in steep geomorphically fresher landslide deposits than in steep ground outside landslide deposits. © 2003 Millpress.
Assessment of SRS ambient air monitoring network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, K.; Jannik, T.
Three methodologies have been used to assess the effectiveness of the existing ambient air monitoring system in place at the Savannah River Site in Aiken, SC. Effectiveness was measured using two metrics that have been utilized in previous quantification of air-monitoring network performance; frequency of detection (a measurement of how frequently a minimum number of samplers within the network detect an event), and network intensity (a measurement of how consistent each sampler within the network is at detecting events). In addition to determining the effectiveness of the current system, the objective of performing this assessment was to determine what, if any, changes could make the system more effective. Methodologies included 1) the Waite method of determining sampler distribution, 2) the CAP88-PC annual dose model, and 3) a puff/plume transport model used to predict air concentrations at sampler locations. Data collected from air samplers at SRS in 2015 compared with predicted data resulting from the methodologies determined that the frequency of detection for the current system is 79.2% with sampler efficiencies ranging from 5% to 45%, and a mean network intensity of 21.5%. One of the air monitoring stations had an efficiency of less than 10%, and detected releases during just one sampling period of the entire year, adding little to the overall network intensity. By moving or removing this sampler, the mean network intensity increased to about 23%. Further work in increasing the network intensity and simulating accident scenarios to further test the ambient air system at SRS is planned.
Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J
2017-04-01
Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average-ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models and Higuchi fractal dimension achieved 52%; however, no statistical difference was observed. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
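As an illustration of the sliding-window approach described above, the following sketch fits an ARMA(2,1) model per EEG window and classifies windows by the distance of their parameter vectors to per-state centroids. It is not the authors' code: the sampling rate, window length, and the nearest-centroid rule (a simplification of the distribution comparison in the paper) are assumptions for illustration.

    # Minimal sketch: sliding-window ARMA(2,1) features for brain-state classification.
    # Assumes `signal` is a 1-D numpy EEG array; the nearest-centroid rule below is a
    # simplified stand-in for the distribution-comparison classifier in the abstract.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def arma_features(signal, fs=128, win_s=4.0, step_s=1.0, order=(2, 0, 1)):
        """Fit ARMA(2,1) on overlapping windows and return one parameter vector per window."""
        win, step = int(win_s * fs), int(step_s * fs)
        feats = []
        for start in range(0, len(signal) - win + 1, step):
            seg = signal[start:start + win]
            res = ARIMA(seg, order=order).fit()      # ARMA(2,1) is ARIMA(2,0,1)
            feats.append(res.params)                 # AR and MA coefficients (plus const, sigma2)
        return np.asarray(feats)

    def classify_nearest_centroid(train_feats, train_labels, test_feats):
        """Assign each test window to the class whose mean parameter vector is closest."""
        classes = np.unique(train_labels)
        centroids = np.array([train_feats[train_labels == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(test_feats[:, None, :] - centroids[None, :, :], axis=2)
        return classes[np.argmin(d, axis=1)]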
A serendipitous all sky survey for bright objects in the outer solar system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, M. E.; Drake, A. J.; Djorgovski, S. G.
2015-02-01
We use seven years' worth of observations from the Catalina Sky Survey and the Siding Spring Survey covering most of the northern and southern hemisphere at galactic latitudes higher than 20° to search for serendipitously imaged moving objects in the outer solar system. These slowly moving objects would appear as stationary transients in these fast-cadence asteroid surveys, so we develop methods to discover objects in the outer solar system using individual observations spaced by months, rather than spaced by hours, as is typically done. While we independently discover eight known bright objects in the outer solar system, the faintest having V=19.8±0.1, no new objects are discovered. We find that the survey is nearly 100% efficient at detecting objects beyond 25 AU for V≲19.1 (V≲18.6 in the southern hemisphere) and that the probability that there is one or more remaining outer solar system object of this brightness left to be discovered in the unsurveyed regions of the galactic plane is approximately 32%.
NASA Technical Reports Server (NTRS)
Redmon, Jr., John W. (Inventor); McQueen, Donald H. (Inventor); Sanders, Fred G. (Inventor)
1990-01-01
A hand hold device (A) includes a housing (10) having a hand hold (14) and clamping brackets (32,34) for grasping and handling an object. A drive includes drive lever (23), spur gear (22), and rack gears (24,26) carried on rods (24a, 26a) for moving the clamping brackets. A lock includes ratchet gear (40) and pawl (42) biased between lock and unlock positions by a cantilever spring (46,48) and moved by handle (54). Compliant grip pads (32b, 34b) provide compliance to lock, unlock, and hold an object between the clamp brackets.
1991-02-01
lines; and edge busyness, wherein the position of the edge appears to be moving when there is a rapid signal change. Some of the most important new and changed factors are as follows: o Motion must be introduced as a most important feature. o Motion artifacts must be... nominal audio level (measured to ground). Edge busyness: the deterioration of motion video such that the outlines of moving objects are displayed with...
Parallel Flux Tensor Analysis for Efficient Moving Object Detection
2011-07-01
computing as well as parallelization to enable real-time performance in analyzing complex video [3, 4]. There are a number of challenging computer vision... We use the trace of the flux tensor matrix, referred to as Tr J_F, which is defined as $\operatorname{Tr} J_F = \int_{\Omega} W(x-y)\,\big(I_{xt}^2(y) + I_{yt}^2(y) + I_{tt}^2(y)\big)\,dy$, as...
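A minimal numerical sketch of the flux-tensor trace defined above, assuming a grayscale video stack `frames` with shape (T, H, W) and approximating the spatial window W by a box filter; this illustrates the motion measure itself rather than reproducing the report's parallel implementation.

    # Minimal sketch of the flux-tensor trace Tr(J_F) as a per-pixel motion measure.
    # Assumes `frames` is a float array of shape (T, H, W); the spatial integration over
    # the window W is approximated by a uniform (box) filter.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def flux_tensor_trace(frames, window=7):
        It = np.gradient(frames, axis=0)       # temporal derivative
        Ix = np.gradient(frames, axis=2)       # spatial derivative in x
        Iy = np.gradient(frames, axis=1)       # spatial derivative in y
        # Mixed and second temporal derivatives I_xt, I_yt, I_tt
        Ixt = np.gradient(Ix, axis=0)
        Iyt = np.gradient(Iy, axis=0)
        Itt = np.gradient(It, axis=0)
        # Integrate the squared derivatives over a local spatial window, frame by frame
        energy = Ixt**2 + Iyt**2 + Itt**2
        return np.stack([uniform_filter(e, size=window) for e in energy])

    # Usage sketch: pixels with a large trace are flagged as moving, e.g.
    # motion_mask = flux_tensor_trace(frames)[t] > threshold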
Visual control of prey-capture flight in dragonflies.
Olberg, Robert M
2012-04-01
Interacting with a moving object poses a computational problem for an animal's nervous system. This problem has been elegantly solved by the dragonfly, a formidable visual predator on flying insects. The dragonfly computes an interception flight trajectory and steers to maintain it during its prey-pursuit flight. This review summarizes current knowledge about pursuit behavior and neurons thought to control interception in the dragonfly. When understood, this system has the potential for explaining how a small group of neurons can control complex interactions with moving objects. Copyright © 2011 Elsevier Ltd. All rights reserved.
Optical polarimetry and photometry of X-ray selected BL Lacertae objects
NASA Technical Reports Server (NTRS)
Jannuzi, Buell T.; Smith, Paul S.; Elston, Richard
1993-01-01
We present the data from 3 years of monitoring the optical polarization and apparent brightness of 37 X-ray-selected BL Lacertae objects. The monitored objects include a complete sample drawn from the Einstein Extended Medium Sensitivity Survey. We confirm the BL Lac identifications for 15 of these 22 objects. We include descriptions of the objects and samples in our monitoring program and of the existing complete samples of BL Lac objects, highly polarized quasars, optically violent variable quasars, and blazars.
Relating to monitoring ion sources
Orr, Christopher Henry; Luff, Craig Janson; Dockray, Thomas; Macarthur, Duncan Whittemore; Bounds, John Alan
2002-01-01
The apparatus and method provide techniques for monitoring the position of alpha contamination in or on items or locations. The technique is particularly applicable to pipes, conduits and other locations to which access is difficult. The technique uses indirect monitoring of alpha emissions by detecting ions generated by the alpha emissions. The medium containing the ions is moved in a controlled manner from in proximity with the item or location to the detecting unit, and the signals achieved over time are used to generate alpha source position information.
9 CFR 54.8 - Requirements for flock plans and post-exposure management and monitoring plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 9 Animals and Animal Products 1 2011-01-01 2011-01-01 false Requirements for flock plans and post... and post-exposure management and monitoring plans. (a) The owner of the flock or his or her agent must...: Utilization of a live-animal screening test; restrictions on the animals that may be moved from the flock...
9 CFR 54.8 - Requirements for flock plans and post-exposure management and monitoring plans.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 9 Animals and Animal Products 1 2013-01-01 2013-01-01 false Requirements for flock plans and post... and post-exposure management and monitoring plans. (a) The owner of the flock or his or her agent must...: Utilization of a live-animal screening test; restrictions on the animals that may be moved from the flock...
9 CFR 54.8 - Requirements for flock plans and post-exposure management and monitoring plans.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 9 Animals and Animal Products 1 2010-01-01 2010-01-01 false Requirements for flock plans and post... and post-exposure management and monitoring plans. (a) The owner of the flock or his or her agent must...: Utilization of a live-animal screening test; restrictions on the animals that may be moved from the flock...
9 CFR 54.8 - Requirements for flock plans and post-exposure management and monitoring plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 9 Animals and Animal Products 1 2012-01-01 2012-01-01 false Requirements for flock plans and post... and post-exposure management and monitoring plans. (a) The owner of the flock or his or her agent must...: Utilization of a live-animal screening test; restrictions on the animals that may be moved from the flock...
9 CFR 54.8 - Requirements for flock plans and post-exposure management and monitoring plans.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 9 Animals and Animal Products 1 2014-01-01 2014-01-01 false Requirements for flock plans and post... and post-exposure management and monitoring plans. (a) The owner of the flock or his or her agent must...: Utilization of a live-animal screening test; restrictions on the animals that may be moved from the flock...
Grouping and trajectory storage in multiple object tracking: impairments due to common item motions.
Suganuma, Mutsumi; Yokosawa, Kazuhiko
2006-01-01
In our natural viewing, we notice that objects change their locations across space and time. However, there has been relatively little consideration of the role of motion information in the construction and maintenance of object representations. We investigated this question in the context of the multiple object tracking (MOT) paradigm, wherein observers must keep track of target objects as they move randomly amid featurally identical distractors. In three experiments, we observed impairments in tracking ability when the motions of the target and distractor items shared particular properties. Specifically, we observed impairments when the target and distractor items were in a chasing relationship or moved in a uniform direction. Surprisingly, tracking ability was impaired by these manipulations even when observers failed to notice them. Our results suggest that differentiable trajectory information is an important factor in successful performance of MOT tasks. More generally, these results suggest that various types of common motion can serve as cues to form more global object representations even in the absence of other grouping cues.
Murray-Moraleda, Jessica R.; Lohman, Rowena
2010-01-01
The Southern California Earthquake Center (SCEC) is a community of researchers at institutions worldwide working to improve understanding of earthquakes and mitigate earthquake risk. One of SCEC's priority objectives is to “develop a geodetic network processing system that will detect anomalous strain transients.” Given the growing number of continuously recording geodetic networks consisting of hundreds of stations, an automated means for systematically searching data for transient signals, especially in near real time, is critical for network operations, hazard monitoring, and event response. The SCEC Transient Detection Test Exercise began in 2008 to foster an active community of researchers working on this problem, explore promising methods, and combine effective approaches in novel ways. A workshop was held in California to assess what has been learned thus far and discuss areas of focus as the project moves forward.
2014-05-21
CAPE CANAVERAL, Fla. – Competition judges monitor the progress of a robot digging in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Kim Shiflett
2014-05-22
CAPE CANAVERAL, Fla. – Competition judges monitor two teams' robots digging in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from colleges and universities around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Kim Shiflett
2014-05-21
CAPE CANAVERAL, Fla. – Competition judges monitor the progress of a robot digging in the simulated Martian soil in the Caterpillar Mining Arena during NASA’s 2014 Robotic Mining Competition at the Kennedy Space Center Visitor Complex in Florida. More than 35 teams from colleges and universities around the U.S. have designed and built remote-controlled robots for the mining competition. The competition is a NASA Human Exploration and Operations Mission Directorate project designed to engage and retain students in science, technology, engineering and mathematics, or STEM, fields by expanding opportunities for student research and design. Teams use their remote-controlled robotics to maneuver and dig in a supersized sandbox filled with a crushed material that has characteristics similar to Martian soil. The objective of the challenge is to see which team’s robot can collect and move the most regolith within a specified amount of time. For more information, visit www.nasa.gov/nasarmc. Photo credit: NASA/Kim Shiflett
Emerging Infections Program Efforts to Address Health Equity
Vugia, Duc J.; Bennett, Nancy M.; Moore, Matthew R.
2015-01-01
The Emerging Infections Program (EIP), a collaboration between (currently) 10 state health departments, their academic center partners, and the Centers for Disease Control and Prevention, was established in 1995. The EIP performs active, population-based surveillance for important infectious diseases, addresses new problems as they arise, emphasizes projects that lead to prevention, and develops and evaluates public health practices. The EIP has increasingly addressed the health equity challenges posed by Healthy People 2020. These challenges include objectives to increase the proportion of Healthy People–specified conditions for which national data are available by race/ethnicity and socioeconomic status as a step toward first recognizing and subsequently eliminating health inequities. EIP has made substantial progress in moving from an initial focus on monitoring social determinants exclusively through collecting and analyzing data by race/ethnicity to identifying and piloting ways to conduct population-based surveillance by using area-based socioeconomic status measures. PMID:26291875
Holographic digital microscopy in on-line process control
NASA Astrophysics Data System (ADS)
Osanlou, Ardeshir
2011-09-01
This article investigates the feasibility of real-time three-dimensional imaging of microscopic objects within various emulsions while being produced in specialized production vessels. The study is particularly relevant to on-line process monitoring and control in chemical, pharmaceutical, food, cleaning, and personal hygiene industries. Such processes are often dynamic and the materials cannot be measured once removed from the production vessel. The technique reported here is applicable to three-dimensional characterization analyses on stirred fluids in small reaction vessels. Relatively expensive pulsed lasers have been avoided through the careful control of the speed of the moving fluid in relation to the speed of the camera exposure and the wavelength of the continuous wave laser used. The ultimate aim of the project is to introduce a fully robust and compact digital holographic microscope as a process control tool in a full size specialized production vessel.
NASA Astrophysics Data System (ADS)
Goncharenko, Igor; Rostovtseva, Vera; Konovalov, Boris
2017-04-01
For monitoring the ecological state of coastal waters it is often necessary to obtain data from aboard a moving ship or an airborne craft. We suggest using a three-channel passive optical device that makes it possible to obtain sea reflectance coefficient spectra from aboard a moving ship. The measurement data are then processed according to our original method, which is based on the intrinsic properties of the pure water absorption spectrum - the water absorption step method (WASM). This makes it possible to suppress the influence of varying weather and experimental conditions on data quality and to obtain estimates of the absorption spectra of the sea waters under exploration. The retrieved spectra, in turn, can serve as a source of information about water constituent concentrations. On this basis we developed a semiautomatic measurement complex EMMA (Ecological Monitoring of Marine Aquatories) operating from aboard a ship. It includes three hyperspectral photometers, the data from which are processed by a special algorithm based on WASM. In natural waters we can obtain estimates of phytoplankton pigment, "yellow substance" and suspended matter concentrations. EMMA is also equipped with a flow-through system for measuring temperature and salinity. The main results are the following: • The data from the new semiautomatic complex EMMA obtained during operational monitoring of coastal waters aboard a moving vessel are given for two different regions of the Black Sea: the region at a river mouth at Adler and the region where the waters of two seas mix at Feodosia. • Software specially designed for the complex, based on the original WASM spectra-calibration algorithm, is applied for the data processing; it reduces the negative impact of adverse weather conditions (wind, cloudiness, sea roughness) on the evaluation of the composition of sea water (the concentrations of particulate matter and DOM). • The complex EMMA is used for rapid determination of the distribution of the main components of the coastal waters from aboard a moving vessel. The obtained water constituent concentrations are compared to the results of measurements in water samples. The developed method of operative sea monitoring is needed for a variety of purposes, including calibration of satellite measurements.
ERIC Educational Resources Information Center
2001
This teacher's resource packet includes a number of items designed to support teachers in the classroom before and after visiting Mervyn's Moving Mission. The packet includes eight sections: (1) welcome letter in English and Spanish; (2) summary timeline of California mission events in English and Spanish; (3) objectives and curriculum links; (4)…
NASA Astrophysics Data System (ADS)
Bykov, O. P.
Any CCD frame containing images of stars, galaxies, clusters, or other objects should be searched for moving celestial objects, namely asteroids, comets, and artificial Earth satellites. At Pulkovo Astronomical Observatory, new methods and software were developed to solve this problem.
Velocity and Structure Estimation of a Moving Object Using a Moving Monocular Camera
2006-01-01
map the Euclidean position of static landmarks or visual features in the environment. Recent applications of this technique include aerial...
Microcomputer keeps watch at Emerald Mine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-04-01
This paper reviews the computerized mine monitoring system set up at the Emerald Mine, SW Pennsylvania, USA. This coal mine has pioneered the automation of many production and safety features, and this article covers their work in fire detection and conveyor belt monitoring. A central computer control room can safely watch over the whole underground mining operation using one 25 inch colour monitor. These new data-acquisition systems will lead the way, in the future, to safer, more efficient coal mining. The system provides multi-point monitoring of carbon monoxide, heat anomalies, toxic gases and the procedures in conveyor belt operation from start-up to closedown.
NASA Astrophysics Data System (ADS)
Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.
2014-09-01
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
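A minimal sketch of the ex post facto correction described above. It assumes synchronized one-dimensional pixel-displacement series from the scene camera and the rigidly attached reference camera, plus calibration intervals in which the scene object is known to be stationary; the linear pixel-space mapping and the variable names are illustrative assumptions, not the authors' implementation.

    # Minimal sketch: remove apparent scene motion caused by camera motion, using a
    # rigidly attached reference camera. `scene_px` and `ref_px` are synchronized
    # 1-D pixel-displacement time series; `calib_mask` marks samples where the scene
    # object is known to be stationary (so all apparent scene motion is camera motion).
    import numpy as np

    def correct_scene_motion(scene_px, ref_px, calib_mask):
        # Calibrate a linear map from reference-camera motion to apparent scene motion
        slope, intercept = np.polyfit(ref_px[calib_mask], scene_px[calib_mask], deg=1)
        camera_induced = slope * ref_px + intercept
        # Subtract the camera-induced component to recover the true object displacement
        return scene_px - camera_induced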
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
Seasonal and ontogenetic changes in movement patterns of sixgill sharks.
Andrews, Kelly S; Williams, Greg D; Levin, Phillip S
2010-09-08
Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems.
Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.
Lee, Donghwa; Myung, Hyun
2014-07-11
In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. The low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in the low dynamic environments, robots have difficulty recognizing the repositioning of objects unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environments then cause groups of false loop closing when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, that represent robot poses, are grouped according to the grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
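The following is a simplified 2-D sketch of the kind of consistency-based constraint pruning the abstract describes (not the authors' algorithm; the grouping of nodes by noise covariances is omitted, and the chi-square threshold is an assumed value). Constraints whose measured relative pose disagrees strongly with the current pose estimates are discarded before the pose graph is reoptimized.

    # Minimal sketch: score loop-closure constraints of a 2-D pose graph against the
    # current pose estimates and prune those whose error exceeds a chi-square-style
    # threshold. Poses are (x, y, theta); each constraint is
    # (i, j, measured_relative_pose, information_matrix).
    import numpy as np

    def relative_pose(pose_i, pose_j):
        xi, yi, ti = pose_i
        xj, yj, tj = pose_j
        c, s = np.cos(-ti), np.sin(-ti)
        dx, dy = xj - xi, yj - yi
        dth = (tj - ti + np.pi) % (2 * np.pi) - np.pi       # wrap angle to [-pi, pi)
        return np.array([c * dx - s * dy, s * dx + c * dy, dth])

    def prune_constraints(poses, constraints, chi2_threshold=7.8):
        # 7.8 is roughly the 95% quantile of chi-square with 3 degrees of freedom (assumed choice)
        kept = []
        for i, j, z, omega in constraints:
            r = z - relative_pose(poses[i], poses[j])
            r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi
            if r @ omega @ r <= chi2_threshold:             # keep only consistent constraints
                kept.append((i, j, z, omega))
        return kept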
Orbital Monitoring of the AstraLux Large M-dwarf Multiplicity Sample
NASA Astrophysics Data System (ADS)
Janson, Markus; Bergfors, Carolina; Brandner, Wolfgang; Bonnefoy, Mickaël; Schlieder, Joshua; Köhler, Rainer; Hormuth, Felix; Henning, Thomas; Hippler, Stefan
2014-10-01
Orbital monitoring of M-type binaries is essential for constraining their fundamental properties. This is particularly useful in young systems, where the extended pre-main-sequence evolution can allow for precise isochronal dating. Here, we present the continued astrometric monitoring of the more than 200 binaries of the AstraLux Large Multiplicity Survey, building both on our previous work, archival data, and new astrometric data spanning the range of 2010-2012. The sample is very young overall—all included stars have known X-ray emission, and a significant fraction (18%) of them have recently also been identified as members of young moving groups in the solar neighborhood. We identify ~30 targets that both have indications of being young and for which an orbit either has been closed or appears possible to close in a reasonable time frame (a few years to a few decades). One of these cases, GJ 4326, is, however, identified as probably being substantially older than has been implied from its apparent moving group membership, based on astrometric and isochronal arguments. With further astrometric monitoring, these targets will provide a set of empirical isochrones, against which theoretical isochrones can be calibrated, and which can be used to evaluate the precise ages of nearby young moving groups. Based on observations collected at the European Southern Observatory, Chile, under observing programs 081.C-0314(A), 082.C-0053(A), and 084.C-0812(A), and on observations collected at the Centro Astronómico Hispano Alemán (CAHA) at Calar Alto, operated jointly by the Max-Planck Institute for Astronomy and the Instituto de Astrofísica de Andalucía (CSIC).
Wireless sensor networks for heritage object deformation detection and tracking algorithm.
Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu
2014-10-31
Deformation is the direct cause of heritage object collapse. It is significant to monitor and signal the early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, but cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are more useful to detect the deformation of every small part of the heritage objects. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor the heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. Particularly, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms the existing methods in terms of network traffic and the precision of the deformation detection.
Wireless Sensor Networks for Heritage Object Deformation Detection and Tracking Algorithm
Xie, Zhijun; Huang, Guangyan; Zarei, Roozbeh; He, Jing; Zhang, Yanchun; Ye, Hongwu
2014-01-01
Deformation is the direct cause of heritage object collapse. It is significant to monitor and signal the early warnings of the deformation of heritage objects. However, traditional heritage object monitoring methods only roughly monitor a simple-shaped heritage object as a whole, but cannot monitor complicated heritage objects, which may have a large number of surfaces inside and outside. Wireless sensor networks, comprising many small-sized, low-cost, low-power intelligent sensor nodes, are more useful to detect the deformation of every small part of the heritage objects. Wireless sensor networks need an effective mechanism to reduce both the communication costs and energy consumption in order to monitor the heritage objects in real time. In this paper, we provide an effective heritage object deformation detection and tracking method using wireless sensor networks (EffeHDDT). In EffeHDDT, we discover a connected core set of sensor nodes to reduce the communication cost for transmitting and collecting the data of the sensor networks. Particularly, we propose a heritage object boundary detecting and tracking mechanism. Both theoretical analysis and experimental results demonstrate that our EffeHDDT method outperforms the existing methods in terms of network traffic and the precision of the deformation detection. PMID:25365458
Reference Directions and Reference Objects in Spatial Memory of a Briefly Viewed Layout
ERIC Educational Resources Information Center
Mou, Weimin; Xiao, Chengli; McNamara, Timothy P.
2008-01-01
Two experiments investigated participants' spatial memory of a briefly viewed layout. Participants saw an array of five objects on a table and, after a short delay, indicated whether the target object indicated by the experimenter had been moved. Experiment 1 showed that change detection was more accurate when non-target objects were stationary…
Magnetically Operated Holding Plate And Ball-Lock Pin
NASA Technical Reports Server (NTRS)
Monford, Leo G., Jr.
1992-01-01
Magnetically operated holding plate and ball-locking-pin mechanism by which one object is attached to, or detached from, a second object. Mechanism includes tubular housing inserted in hole in second object. Plunger moves inside tube forcing balls to protrude from sides. Balls prevent tube from sliding out of second object. Simpler, less expensive than motorized latches; suitable for robotics applications.
The effect of sinusoidal rolling ground motion on lifting biomechanics.
Ning, Xiaopeng; Mirka, Gary A
2010-12-01
The objective of this study was to quantify the effects of ground surface motion on the biomechanical responses of a person performing a lifting task. A boat motion simulator (BMS) was built to provide a sinusoidal ground motion (simultaneous vertical linear translation and a roll angular displacement) that simulates the deck motion on a small fishing boat. Sixteen participants performed lifting, lowering and static holding tasks under conditions of two levels of mass (5 and 10 kg) and five ground motion conditions. Each ground motion condition was specified by its ground angular displacement and instantaneous vertical acceleration: A): +6°, -0.54 m/s²; B): +3°, -0.27 m/s²; C): 0°, 0 m/s²; D): -3°, 0.27 m/s²; and E): -6°, 0.54 m/s². As they performed these tasks, trunk kinematics were captured using the lumbar motion monitor and trunk muscle activities were evaluated through surface electromyography. The results showed that peak sagittal plane angular acceleration was significantly higher in Condition A than in Conditions C, D and E (698°/s² vs. 612-617°/s²) while peak sagittal plane angular deceleration during lowering was significantly higher in moving conditions (conditions A and E) than in the stationary condition C (538-542°/s² vs. 487°/s²). The EMG results indicate that the boat motions tend to amplify the effects of the slant of the lifting surface and the external oblique musculature plays an important role in stabilizing the torso during these dynamic lifting tasks. Copyright © 2010 Elsevier Ltd. All rights reserved.
A Rapidly Moving Shell in the Orion Nebula
NASA Technical Reports Server (NTRS)
Walter, Donald K.; O'Dell, C. R.; Hu, Xihai; Dufour, Reginald J.
1995-01-01
A well-resolved elliptical shell in the inner Orion Nebula has been investigated by monochromatic imaging plus high- and low-resolution spectroscopy. We find that it is of low ionization and the two bright ends are moving at -39 and -49 km/s with respect to OMC-1. There is no central object, even in the infrared J bandpass although H2 emission indicates a possible association with the nearby very young pre-main-sequence star J&W 352, which is one of the youngest pre-main-sequence stars in the inner Orion Nebula. Many of the characteristics of this object (low ionization, blue shift) are like those of the Herbig-Haro objects, although the symmetric form would make it an unusual member of that class.
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the system of vision-localisation moves, the images are altered and decision criteria modified. A study of the image from mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon has been achieved in order to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of the robot speed.
Efficient spatial privacy preserving scheme for sensor network
NASA Astrophysics Data System (ADS)
Debnath, Ashmita; Singaravelu, Pradheepkumar; Verma, Shekhar
2013-03-01
The privacy of sensitive events observed by a wireless sensor network (WSN) needs to be protected. Adversaries with the knowledge of sensor deployment and network protocols can infer the location of a sensed event by monitoring the communication from the sensors even when the messages are encrypted. Encryption provides confidentiality; however, the context of the event can be used to breach the privacy of sensed objects. An adversary can track the trajectory of a moving object or determine the location of the occurrence of a critical event to breach its privacy. In this paper, we propose using ring signatures to obfuscate the spatial information. Firstly, the extended region of location of an event of interest, as estimated from a sensor communication, is presented. Then, the increase in this region of spatial uncertainty due to the effect of the ring signature is determined. We observe that ring signatures can effectively enlarge the region of location uncertainty of a sensed event. As the event of interest can be situated anywhere in the enlarged region of uncertainty, its privacy against a local or global adversary is ensured. Both analytical and simulation results show that the induced delay and the impact on throughput are insignificant, with negligible effect on the performance of a WSN.
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.
Stone, Scott A; Tata, Matthew S
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible.
The Two Moons of Mars As Seen from 'Husband Hill'
NASA Technical Reports Server (NTRS)
2005-01-01
Taking advantage of extra solar energy collected during the day, NASA's Mars Exploration Rover Spirit settled in for an evening of stargazing, photographing the two moons of Mars as they crossed the night sky. Spirit took this succession of images at 150-second intervals from a perch atop 'Husband Hill' in Gusev Crater on martian day, or sol, 594 (Sept. 4, 2005), as the faster-moving martian moon Phobos was passing Deimos in the night sky. Phobos is the brighter object on the left and Deimos is the dimmer object on the right. The bright star Aldebaran and some other stars in the constellation Taurus are visible as star trails. Most of the other streaks in the image are the result of cosmic rays lighting up random groups of pixels in the camera. Scientists will use images of the two moons to better map their orbital positions, learn more about their composition, and monitor the presence of nighttime clouds or haze. Spirit took the five images that make up this composite with its panoramic camera using the camera's broadband filter, which was designed specifically for acquiring images under low-light conditions.
Rendering visual events as sounds: Spatial attention capture by auditory augmented reality
Tata, Matthew S.
2017-01-01
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting visual salient events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as determine the direction of motion for a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurately encoding the direction of visual motion. Future successes are probable as neuromorphic devices are likely to become faster and smaller in the future, making this system much more feasible. PMID:28792518
NASA Astrophysics Data System (ADS)
Hall, Justin R.; Hastrup, Rolf C.
The United States Space Exploration Initiative (SEI) calls for the charting of a new and evolving manned course to the Moon, Mars, and beyond. This paper discusses key challenges in providing effective deep space telecommunications, navigation, and information management (TNIM) architectures and designs for Mars exploration support. The fundamental objectives are to provide the mission with means to monitor and control mission elements, acquire engineering, science, and navigation data, compute state vectors and navigate, and move these data efficiently and automatically between mission nodes for timely analysis and decision-making. Although these objectives do not depart, fundamentally, from those evolved over the past 30 years in supporting deep space robotic exploration, there are several new issues. This paper focuses on summarizing new requirements, identifying related issues and challenges, responding with concepts and strategies which are enabling, and, finally, describing candidate architectures, and driving technologies. The design challenges include the attainment of: 1) manageable interfaces in a large distributed system, 2) highly unattended operations for in-situ Mars telecommunications and navigation functions, 3) robust connectivity for manned and robotic links, 4) information management for efficient and reliable interchange of data between mission nodes, and 5) an adequate Mars-Earth data rate.
Swing-free transport of suspended loads. Summer research report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basher, A.M.H.
1996-02-01
Transportation of large objects using a traditional bridge crane can induce pendulum motion (swing) of the object. In environments such as a factory, the energy contained in the swinging mass can be large, and attempts to move the mass onto the target while it is still swinging can cause considerable damage. Oscillations must be damped or allowed to decay before the next process can take place. Stopping the swing can be accomplished by moving the bridge in a manner that counteracts the swing, which can sometimes be done by a skilled operator, or by waiting for the swing to damp sufficiently that the object can be moved to the target without risk of damage. One of the methods that can be utilized for oscillation suppression is input preshaping. The validity of this method depends on exact knowledge of the system dynamics. This method can be modified to provide some degree of robustness with respect to unknown dynamics, but at the cost of the speed of the transient response. This report describes investigations on the development of a controller to dampen the oscillations.
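As an illustration of input preshaping, the sketch below builds the classic zero-vibration (ZV) two-impulse shaper and convolves it with a motion command. It is a minimal example under assumed parameters (a lightly damped pendulum with known natural frequency and damping ratio), not the controller developed in this report.

    # Minimal sketch of a zero-vibration (ZV) two-impulse input shaper, one common form
    # of input preshaping. Assumes the suspended load behaves like a lightly damped
    # pendulum with known natural frequency wn (rad/s, e.g. sqrt(g/L)) and damping ratio zeta.
    import numpy as np

    def zv_shaper(wn, zeta):
        """Return impulse amplitudes and times of the ZV shaper."""
        wd = wn * np.sqrt(1.0 - zeta**2)                    # damped natural frequency
        K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
        amplitudes = np.array([1.0, K]) / (1.0 + K)         # amplitudes sum to 1
        times = np.array([0.0, np.pi / wd])                 # second impulse half a damped period later
        return amplitudes, times

    def shape_command(command, dt, wn, zeta):
        """Convolve a sampled velocity/position command with the ZV impulse sequence."""
        amps, times = zv_shaper(wn, zeta)
        kernel = np.zeros(int(round(times[-1] / dt)) + 1)
        for a, t in zip(amps, times):
            kernel[int(round(t / dt))] += a
        return np.convolve(command, kernel)[:len(command)]

The shaped command reaches the same final set point (the impulse amplitudes sum to one) while cancelling the residual oscillation of the assumed pendulum mode, at the cost of a small added delay.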
Study of moving object detecting and tracking algorithm for video surveillance system
NASA Astrophysics Data System (ADS)
Wang, Tao; Zhang, Rongfu
2010-10-01
This paper describes a specific process for detecting and tracking moving targets in video surveillance. Obtaining a high-quality background is the key to achieving difference-based target detection in video surveillance. The paper uses a block segmentation method to build a clean background and the background-difference method to detect moving targets; after a series of processing steps, a more complete object can be extracted from the original image and located with its smallest bounding rectangle. In a video surveillance system, camera delay and other factors cause tracking lag, so a Kalman filter model based on template matching is proposed: using the predictive capability of the Kalman filter, the center of the smallest bounding rectangle is taken as the predicted value of the position where the target may appear at the next moment; template matching is then performed in a region centered on this position, and the best matching center is determined by computing the cross-correlation similarity between the current image and the reference image. Because the search area is narrowed, the search time is reduced and fast tracking is achieved.
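A minimal sketch of the predict-then-match idea described above: a constant-velocity Kalman filter predicts the target centre, and a template match in a small search window around the prediction supplies the measurement. The sum-of-squared-differences matcher and all parameter values are illustrative assumptions; the paper itself uses cross-correlation similarity.

    # Minimal sketch: Kalman prediction restricts the template-matching search region.
    import numpy as np

    def ssd_match(frame, template, center, search=20):
        """Return the template centre near `center` minimizing the sum of squared differences."""
        th, tw = template.shape
        y0c, x0c = center[0] - th // 2, center[1] - tw // 2   # candidate top-left near prediction
        best, best_pos = np.inf, center
        for y in range(max(0, y0c - search), min(frame.shape[0] - th, y0c + search) + 1):
            for x in range(max(0, x0c - search), min(frame.shape[1] - tw, x0c + search) + 1):
                ssd = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
                if ssd < best:
                    best, best_pos = ssd, (y + th // 2, x + tw // 2)
        return np.array(best_pos, dtype=float)

    class KalmanCV:
        """Constant-velocity Kalman filter over the 2-D target centre, state [y, x, vy, vx]."""
        def __init__(self, init_pos, q=1e-2, r=4.0):
            self.x = np.array([init_pos[0], init_pos[1], 0.0, 0.0])
            self.P = np.eye(4) * 10.0
            self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = 1.0   # unit time step
            self.H = np.eye(2, 4)
            self.Q, self.R = np.eye(4) * q, np.eye(2) * r

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, z):
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ (z - self.H @ self.x)
            self.P = (np.eye(4) - K @ self.H) @ self.P

    # Per frame: pred = kf.predict(); z = ssd_match(frame, template, (int(pred[0]), int(pred[1]))); kf.update(z)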
Huang, W.; Zheng, Lingyun; Zhan, X.
2002-01-01
Accurate modelling of groundwater flow and transport with sharp moving fronts often involves high computational cost when a fixed/uniform mesh is used. In this paper, we investigate the modelling of groundwater problems using a particular adaptive mesh method called the moving mesh partial differential equation approach. With this approach, the mesh is dynamically relocated through a partial differential equation to capture the evolving sharp fronts with a relatively small number of grid points. The mesh movement and physical system modelling are realized by solving the mesh movement and physical partial differential equations alternately. The method is applied to the modelling of a range of groundwater problems, including advection dominated chemical transport and reaction, non-linear infiltration in soil, and the coupling of density dependent flow and transport. Numerical results demonstrate that sharp moving fronts can be accurately and efficiently captured by the moving mesh approach. Also addressed are important implementation strategies, e.g. the construction of the monitor function based on the interpolation error, control of mesh concentration, and two-layer mesh movement. Copyright © 2002 John Wiley & Sons, Ltd.
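A minimal one-dimensional sketch of the equidistribution idea that underlies moving mesh methods: grid points are relocated so that each cell carries an equal share of a monitor function (here an arc-length-type monitor built from the solution gradient). This is an illustration only, with assumed function names; it is not the MMPDE formulation used in the paper.

    # Minimal sketch of 1-D mesh equidistribution with respect to a monitor function.
    import numpy as np

    def equidistribute(x, u, alpha=1.0, npoints=None):
        """Return a new mesh on [x[0], x[-1]] equidistributing sqrt(1 + alpha*|u_x|^2)."""
        npoints = npoints or len(x)
        ux = np.gradient(u, x)
        monitor = np.sqrt(1.0 + alpha * ux**2)
        # Cumulative integral of the monitor function (trapezoidal rule)
        cumulative = np.concatenate(
            ([0.0], np.cumsum(0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))))
        targets = np.linspace(0.0, cumulative[-1], npoints)
        # Invert the cumulative map: equal monitor mass between consecutive new nodes
        return np.interp(targets, cumulative, x)

    # Usage sketch: x_new = equidistribute(x, u); interpolate u onto x_new, then
    # advance the physical PDE on the adapted mesh and repeat.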
Problems in depth perception : a method of simulating objects moving in depth.
DOT National Transportation Integrated Search
1965-12-01
Equations were developed for the simulation on a screen of the movement of an object or surface toward or away from an observer by the movement of a positive photographic transparency of the object or surface away from or toward a point source. The genera...
ERIC Educational Resources Information Center
Gogate, Lakshmi J.; Bolzani, Laura H.; Betancourt, Eugene A.
2006-01-01
We examined whether mothers' use of temporal synchrony between spoken words and moving objects, and infants' attention to object naming, predict infants' learning of word-object relations. Following 5 min of free play, 24 mothers taught their 6- to 8-month-olds the names of 2 toy objects, "Gow" and "Chi," during a 3-min play…
Physical Activity Monitoring: Gadgets and Uses. Article #6 in a 6-Part Series
ERIC Educational Resources Information Center
Mears, Derrick
2010-01-01
An early 15th century drawing by Leonardo da Vinci depicted a device that used gears and a pendulum that moved in synchronization with the wearer as he or she walked. This is believed to be the early origins of today's physical activity monitoring devices. Today's devices have vastly expanded on da Vinci's ancient concept with a myriad of options…
Supervisory Control of Remote Manipulation with Compensation for Moving Target.
1980-07-21
The aim of this project is to evaluate automatic compensation for moving targets ... slave control. Operating manipulators in this way is a tiring job and the operator gets exhausted after a short time of work. The use of the computer... Undersea tasks done by human divers are getting more and more costly and hazardous as they have to be done at...
Moving Particles Through a Finite Element Mesh
Peskin, Adele P.; Hardin, Gary R.
1998-01-01
We present a new numerical technique for modeling the flow around multiple objects moving in a fluid. The method tracks the dynamic interaction between each particle and the fluid. The movements of the fluid and the object are directly coupled. A background mesh is designed to fit the geometry of the overall domain. The mesh is designed independently of the presence of the particles except in terms of how fine it must be to track particles of a given size. Each particle is represented by a geometric figure that describes its boundary. This figure overlies the mesh. Nodes are added to the mesh where the particle boundaries intersect the background mesh, increasing the number of nodes contained in each element whose boundary is intersected. These additional nodes are then used to describe and track the particle in the numerical scheme. Appropriate element shape functions are defined to approximate the solution on the elements with extra nodes. The particles are moved through the mesh by moving only the overlying nodes defining the particles. The regular finite element grid remains unchanged. In this method, the mesh does not distort as the particles move. Instead, only the placement of particle-defining nodes changes as the particles move. Element shape functions are updated as the nodes move through the elements. This method is especially suited for models of moderate numbers of moderate-size particles, where the details of the fluid-particle coupling are important. Both the complications of creating finite element meshes around appreciable numbers of particles, and extensive remeshing upon movement of the particles are simplified in this method. PMID:28009377
Gergely, Anna; Petró, Eszter; Topál, József; Miklósi, Ádám
2013-01-01
Robots offer new possibilities for investigating animal social behaviour. This method enhances controllability and reproducibility of experimental techniques, and it allows also the experimental separation of the effects of bodily appearance (embodiment) and behaviour. In the present study we examined dogs' interactive behaviour in a problem solving task (in which the dog has no access to the food) with three different social partners, two of which were robots and the third a human behaving in a robot-like manner. The Mechanical UMO (Unidentified Moving Object) and the Mechanical Human differed only in their embodiment, but showed similar behaviour toward the dog. In contrast, the Social UMO was interactive, showed contingent responsiveness and goal-directed behaviour and moved along varied routes. The dogs showed shorter looking and touching duration, but increased gaze alternation toward the Mechanical Human than to the Mechanical UMO. This suggests that dogs' interactive behaviour may have been affected by previous experience with typical humans. We found that dogs also looked longer and showed more gaze alternations between the food and the Social UMO compared to the Mechanical UMO. These results suggest that dogs form expectations about an unfamiliar moving object within a short period of time and they recognise some social aspects of UMOs' behaviour. This is the first evidence that interactive behaviour of a robot is important for evoking dogs' social responsiveness.
Leving, Marika T; Horemans, Henricus L D; Vegter, Riemer J K; de Groot, Sonja; Bussmann, Johannes B J; van der Woude, Lucas H V
2018-01-01
A hypoactive lifestyle contributes to the development of secondary complications and lower quality of life in wheelchair users. There is a need for objective and user-friendly physical activity monitors for wheelchair-dependent individuals in order to increase physical activity through self-monitoring, goal setting, and feedback provision. The objective was to determine the validity of Activ8 Activity Monitors to 1) distinguish two classes of activities: independent wheelchair propulsion versus other non-propulsive wheelchair-related activities, and 2) distinguish five wheelchair-related classes of activities differing in movement intensity: sitting in a wheelchair (hands may be moving but the wheelchair remains stationary), maneuvering, and normal, high-speed, or assisted wheelchair propulsion. Sixteen able-bodied individuals performed sixteen standardized 60-second activities of daily living. Each participant was equipped with a set of two Activ8 Professional Activity Monitors, one at the right forearm and one at the right wheel. Task classification by the Activ8 monitors was validated against video recordings. For overall agreement, sensitivity, and positive predictive value, outcomes above 90% are considered excellent, between 70 and 90% good, and below 70% unsatisfactory. Division into two classes resulted in an overall agreement of 82.1%, a sensitivity of 77.7%, and a positive predictive value of 78.2%; 84.5% of the total duration of all tasks was classified identically by the Activ8 and the video material. Division into five classes resulted in an overall agreement of 56.6%, a sensitivity of 52.8%, and a positive predictive value of 51.9%; 59.8% of the total duration of all tasks was classified identically by the Activ8 and the video material. The Activ8 system proved suitable for distinguishing between active wheelchair propulsion and other non-propulsive wheelchair-related activities. The ability of the current system and algorithms to distinguish five wheelchair-related activity classes is unsatisfactory.
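The validation statistics quoted above are standard confusion-matrix quantities. A minimal sketch of how they can be computed from per-second activity labels follows; the label values, data, and helper name are illustrative, not taken from the study.

import numpy as np

def agreement_metrics(reference, predicted, positive_class):
    """Overall agreement, sensitivity, and positive predictive value for one
    class, from per-epoch reference (video) and predicted (monitor) labels."""
    reference = np.asarray(reference)
    predicted = np.asarray(predicted)
    agreement = np.mean(reference == predicted)
    tp = np.sum((reference == positive_class) & (predicted == positive_class))
    fn = np.sum((reference == positive_class) & (predicted != positive_class))
    fp = np.sum((reference != positive_class) & (predicted == positive_class))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    return agreement, sensitivity, ppv

# Illustrative per-second labels: 1 = propulsion, 0 = other activity
video = [1, 1, 0, 0, 1, 1, 1, 0, 0, 0]
monitor = [1, 0, 0, 0, 1, 1, 1, 1, 0, 0]
print(agreement_metrics(video, monitor, positive_class=1))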
Martin, Anne; Adams, Jacob M; Bunn, Christopher; Gill, Jason M R; Gray, Cindy M; Hunt, Kate; Maxwell, Douglas J; van der Ploeg, Hidde P; Wyke, Sally
2017-01-01
Objectives: Time spent inactive and sedentary are both associated with poor health. Self-monitoring of walking, using pedometers for real-time feedback, is effective at increasing physical activity. This study evaluated the feasibility of a new pocket-worn sedentary time and physical activity real-time self-monitoring device (SitFIT). Methods: Forty sedentary men were equally randomised into two intervention groups. For 4 weeks, one group received a SitFIT providing feedback on steps and time spent sedentary (lying/sitting); the other group received a SitFIT providing feedback on steps and time spent upright (standing/stepping). Change in sedentary time, standing time, stepping time and step count was assessed using activPAL monitors at baseline, 4-week follow-up (T1) and 12-week follow-up (T2). Semistructured interviews were conducted after 4 and 12 weeks. Results: The SitFIT was reported as acceptable and usable and was seen by both groups as a motivating tool to reduce sedentary time. On average, participants reduced their sedentary time by 7.8 minutes/day (95% CI −55.4 to 39.7) (T1) and by 8.2 minutes/day (95% CI −60.1 to 44.3) (T2). They increased standing time by 23.2 minutes/day (95% CI 4.0 to 42.5) (T1) and 16.2 minutes/day (95% CI −13.9 to 46.2) (T2). Stepping time was increased by 8.5 minutes/day (95% CI 0.9 to 16.0) (T1) and 9.0 minutes/day (95% CI 0.5 to 17.5) (T2). There were no between-group differences at either follow-up time point. Conclusion: The SitFIT was perceived as a useful tool for self-monitoring of sedentary time. It has potential as a real-time self-monitoring device to reduce sedentary time and increase upright time. PMID:29081985
Perentos, N; Nicol, A U; Martins, A Q; Stewart, J E; Taylor, P; Morton, A J
2017-03-01
Large mammals with complex central nervous systems offer new possibilities for translational research into basic brain function. Techniques for monitoring brain activity in large mammals, however, are not as well developed as they are in rodents. We have developed a method for chronic monitoring of electroencephalographic (EEG) activity in unrestrained sheep. We describe the methods for behavioural training prior to implantation, surgical procedures for implantation, a protocol for reliable anaesthesia and recovery, methods for EEG data collection, as well as data pertaining to suitability and longevity of different types of electrodes. Sheep tolerated all procedures well, and surgical complications were minimal. Electrode types used included epidural and subdural screws, intracortical needles and subdural disk electrodes, with the latter producing the best and most reliable results. The implants yielded longitudinal EEG data of consistent quality for periods of at least a year, and in some cases up to 2 years. This is the first detailed methodology to be described for chronic brain function monitoring in freely moving unrestrained sheep. The developed method will be particularly useful in chronic investigations of brain activity during normal behaviour that can include sleep, learning and memory. As well, within the context of disease, the method can be used to monitor brain pathology or the progress of therapeutic trials in transgenic or natural disease models in sheep. Copyright © 2016 Elsevier B.V. All rights reserved.
Taking the Plunge: Districts Leap into Virtualization
ERIC Educational Resources Information Center
Demski, Jennifer
2010-01-01
Moving from a traditional desktop computing environment to a virtualized solution is a daunting task. In this article, the author presents case histories of three districts that have made the conversion to virtual computing to learn about their experiences: What prompted them to make the move, and what were their objectives? Which obstacles prove…
Motor Effects from Visually Induced Disorientation in Man.
ERIC Educational Resources Information Center
Brecher, M. Herbert; Brecher, Gerhard A.
The problem of disorientation in a moving optical environment was examined. A pilot can experience egocentric disorientation if the entire visual environment moves relative to his body without a clue as to the objective position of the airplane with respect to the ground. A simple method of measuring disorientation was devised. In this method…
Object-Oriented Analysis and Design of the Saber Wargame
1991-12-01
coordinates for an aircraft package moving right when the starting x value is even. It is called by GONE, GOSE, and GONW. - MOVERIGHTODD. This... GOSE, and GONW. - MOVEUPRIGHTEVEN. This procedure determines the new x and y air hex coordinates for an aircraft package moving up and right when the
Camouflaging moving objects: crypsis and masquerade.
Hall, Joanna R; Baddeley, Roland; Scott-Samuel, Nicholas E; Shohet, Adam J; Cuthill, Innes C
2017-01-01
Motion is generally assumed to "break" camouflage. However, although camouflage cannot conceal a group of moving animals, it may impair a predator's ability to single one out for attack, even if that discrimination is not based on a color difference. Here, we use a computer-based task in which humans had to detect the odd one out among moving objects, with "oddity" based on shape. All objects were either patterned or plain, and either matched the background or not. We show that there are advantages of matching both group-mates and the background. However, when patterned objects are on a plain background (i.e., no background matching), the advantage of being among similarly patterned distractors is only realized when the group size is larger (10 compared to 5). In a second experiment, we present a paradigm for testing how coloration interferes with target-distractor discrimination, based on an adaptive staircase procedure for establishing the threshold. We show that when the predator only has a short time for decision-making, displaying a similar pattern to the distractors and the background affords protection even when the difference in shape between target and distractors is large. We conclude that, even though motion breaks camouflage, being camouflaged could help group-living animals reduce the risk of being singled out for attack by predators.
Contextual effects on smooth-pursuit eye movements.
Spering, Miriam; Gegenfurtner, Karl R
2007-02-01
Segregating a moving object from its visual context is particularly relevant for the control of smooth-pursuit eye movements. We examined the interaction between a moving object and a stationary or moving visual context to determine the role of the context motion signal in driving pursuit. Eye movements were recorded from human observers to a medium-contrast Gaussian dot that moved horizontally at constant velocity. A peripheral context consisted of two vertically oriented sinusoidal gratings, one above and one below the stimulus trajectory, that were either stationary or drifted in the same or opposite direction as the target at different velocities. We found that a stationary context impaired pursuit acceleration and velocity and prolonged pursuit latency. A drifting context enhanced pursuit performance, irrespective of its motion direction. This effect was modulated by context contrast and orientation. When a context was briefly perturbed to move faster or slower, eye velocity changed accordingly, but only when the context was drifting along with the target. Perturbing a context in the direction orthogonal to target motion evoked a deviation of the eye opposite to the perturbation direction. We therefore provide evidence for the use of absolute and relative motion cues, or motion assimilation and motion contrast, in the control of smooth-pursuit eye movements.
Small Moving Vehicle Detection in a Satellite Video of an Urban Area
Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng
2016-01-01
Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, which provides a broader field of surveillance. Existing work generally focuses on aerial video with moderately-sized objects based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false-alarm rate simultaneously. PMID:27657091
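A minimal sketch of the first stage described above, accumulating frame-to-frame motion into a scene heat map, using plain NumPy on a stack of grayscale frames. The threshold and the use of simple frame differencing (rather than the paper's full foreground segmentation and trajectory accumulation) are assumptions made for illustration.

import numpy as np

def motion_heat_map(frames, diff_thresh=10.0):
    """Accumulate thresholded frame differences into a motion heat map.

    frames: array of shape (T, H, W) holding grayscale video frames.
    Returns an (H, W) map whose hot regions mark lanes of persistent motion.
    """
    frames = frames.astype(np.float32)
    heat = np.zeros(frames.shape[1:], dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        moving = np.abs(curr - prev) > diff_thresh   # crude foreground mask
        heat += moving
    return heat / (len(frames) - 1)                  # fraction of frame pairs flagged

# Toy example: a bright 2x2 "vehicle" drifting across a static background.
T, H, W = 30, 64, 64
frames = np.full((T, H, W), 50.0, dtype=np.float32)
for t in range(T):
    frames[t, 30:32, t:t + 2] = 200.0
hot = motion_heat_map(frames)
print(hot.max(), int((hot > 0).sum()))   # hot pixels lie along the vehicle's track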
Cellini, Cristiano; Scocchia, Lisa; Drewing, Knut
2016-10-01
In the flash-lag illusion, a brief visual flash and a moving object presented at the same location appear to be offset with the flash trailing the moving object. A considerable amount of studies investigated the visual flash-lag effect, and flash-lag-like effects have also been observed in audition, and cross-modally between vision and audition. In the present study, we investigate whether a similar effect can also be observed when using only haptic stimuli. A fast vibration (or buzz, lasting less than 20 ms) was applied to the moving finger of the observers and employed as a "haptic flash." Participants performed a two-alternative forced-choice (2AFC) task where they had to judge whether the moving finger was located to the right or to the left of the stationary finger at the time of the buzz. We used two different movement velocities (Slow and Fast conditions). We found that the moving finger was systematically misperceived to be ahead of the stationary finger when the two were physically aligned. This result can be interpreted as a purely haptic analogue of the flash-lag effect, which we refer to as "buzz-lag effect." The buzz-lag effect can be well accounted for by the temporal-sampling explanation of flash-lag-like effects.
NASA Astrophysics Data System (ADS)
Pan, Chu-Dong; Yu, Ling; Liu, Huan-Lin
2017-08-01
Traffic-induced moving force identification (MFI) is a typical inverse problem in the field of bridge structural health monitoring. Many regularization-based methods have been proposed for MFI. However, the MFI accuracy obtained from the existing methods is low when the moving forces enter and exit the bridge deck, because the structural responses have low sensitivity to the forces at these zones. To overcome this shortcoming, a novel moving average Tikhonov regularization method is proposed for MFI by combining it with moving average concepts. Firstly, the bridge-vehicle interaction moving force is assumed to be a discrete finite signal with a stable average value (DFS-SAV). Secondly, this signal feature of the DFS-SAV is quantified and introduced to improve the penalty function (the squared l2-norm ||x||_2^2) used in classical Tikhonov regularization. Then, a feasible two-step strategy is proposed for selecting the regularization parameter and the balance coefficient defined in the improved penalty function. Finally, both numerical simulations on a simply-supported beam and laboratory experiments on a hollow tube beam are performed to assess the accuracy and feasibility of the proposed method. The results show that the moving forces can be accurately identified with strong robustness. Related issues, such as the selection of the moving window length, the effect of different penalty functions, and the effect of different car speeds, are discussed as well.
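For reference, here is a sketch of the classical Tikhonov step that the proposed method builds on: identify the force vector x from measured responses b and a response (influence) matrix A by minimizing ||Ax - b||^2 + lambda*||x||_2^2. The moving-average modification of the penalty and the two-step parameter-selection strategy from the abstract are not reproduced; the matrices and the synthetic force below are random stand-ins, not bridge data.

import numpy as np

def tikhonov_identify(A, b, lam):
    """Classical Tikhonov-regularized least squares:
    x = argmin ||A x - b||^2 + lam * ||x||_2^2 = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Stand-in "bridge": random influence matrix, smooth force, noisy response.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 80))
x_true = np.sin(np.linspace(0, np.pi, 80)) + 1.0   # force with a stable mean, as in DFS-SAV
b = A @ x_true + 0.05 * rng.normal(size=200)

for lam in (1e-3, 1e-1, 1e1):
    x_hat = tikhonov_identify(A, b, lam)
    print(lam, np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))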
NASA Technical Reports Server (NTRS)
Baxter, W. J., Jr.; Frant, M. S.; West, S. J.
1978-01-01
Solid-state sensing unit developed for use with NASA's Water-Quality Monitoring System can detect small velocity changes in slow moving fluid. Nonprotruding sensor is applicable to numerous other uses requiring sensitive measurement of slow flows.
Connections 2030 performance monitoring.
DOT National Transportation Integrated Search
2011-01-01
The Wisconsin Department of Transportation (WisDOT) recently updated its long-range multimodal plan. This plan, referred to as Connections 2030, provides a policy framework for moving towards a safer and more efficient transportation system that ...
Neonatal heart rate prediction.
Abdel-Rahman, Yumna; Jeremic, Aleksander; Tan, Kenneth
2009-01-01
Technological advances have caused a decrease in the number of infant deaths. Pre-term infants now have a substantially increased chance of survival. One of the mechanisms that is vital to saving the lives of these infants is continuous monitoring and early diagnosis. With continuous monitoring, huge amounts of data are collected, with much information embedded in them. Statistical analysis can extract this information and use it to aid diagnosis and to understand development. In this study we analyze a large dataset containing over 180 pre-term infants whose heart rates were recorded over the length of their stay in the Neonatal Intensive Care Unit (NICU). We test two types of models, empirical Bayesian and autoregressive moving average, and then attempt to predict future values. The autoregressive moving average model showed better results but required more computation.
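A hedged sketch of the autoregressive moving average approach mentioned above, using statsmodels on a synthetic heart-rate series; the ARMA order (2,1) and the synthetic data are illustrative assumptions, not the study's choices.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic neonatal heart-rate series (beats per minute), a stand-in for NICU data.
rng = np.random.default_rng(1)
n = 300
hr = 150 + np.cumsum(rng.normal(0, 0.3, n)) + rng.normal(0, 2.0, n)

# ARMA(2,1) is ARIMA with no differencing: order = (p, d, q) = (2, 0, 1).
model = ARIMA(hr[:-10], order=(2, 0, 1))
fit = model.fit()
forecast = fit.forecast(steps=10)        # predict the next 10 samples
print(np.round(forecast, 1))
print("MAE:", np.mean(np.abs(forecast - hr[-10:])))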
Automatic monitoring of vibration welding equipment
Spicer, John Patrick; Chakraborty, Debejyo; Wincek, Michael Anthony; Wang, Hui; Abell, Jeffrey A; Bracey, Jennifer; Cai, Wayne W
2014-10-14
A vibration welding system includes vibration welding equipment having a welding horn and anvil, a host device, a check station, and a robot. The robot moves the horn and anvil via an arm to the check station. Sensors, e.g., temperature sensors, are positioned with respect to the welding equipment. Additional sensors are positioned with respect to the check station, including a pressure-sensitive array. The host device, which monitors a condition of the welding equipment, measures signals via the sensors positioned with respect to the welding equipment when the horn is actively forming a weld. The robot moves the horn and anvil to the check station, activates the check station sensors at the check station, and determines a condition of the welding equipment by processing the received signals. Acoustic, force, temperature, displacement, amplitude, and/or attitude/gyroscopic sensors may be used.
Code of Federal Regulations, 2010 CFR
2010-04-01
... sitting, standing, walking, lifting, carrying, handling objects, hearing, speaking, and traveling; and, in..., moving about and manipulating objects, caring for yourself, and health and physical well-being. Although...
Dynamic NMDAR-mediated properties of place cells during the object place memory task.
Faust, Thomas W; Robbiati, Sergio; Huerta, Tomás S; Huerta, Patricio T
2013-01-01
N-methyl-D-aspartate receptors (NMDAR) in the hippocampus participate in encoding and recalling the location of objects in the environment, but the ensemble mechanisms by which NMDARs mediate these processes have not been completely elucidated. To address this issue, we examined the firing patterns of place cells in the dorsal CA1 area of the hippocampus of mice (n = 7) that performed an object place memory (OPM) task, consisting of familiarization (T1), sample (T2), and choice (T3) trials, after systemic injection of 3-[(±)2-carboxypiperazin-4yl]propyl-1-phosphate (CPP), a specific NMDAR antagonist. Place cell properties under CPP (CPP-PCs) were compared to those after control saline injection (SAL-PCs) in the same mice. We analyzed place cells across the OPM task to determine whether they signaled the introduction or movement of objects by NMDAR-mediated changes of their spatial coding. On T2, when two objects were first introduced to a familiar chamber, CPP-PCs and SAL-PCs showed stable, vanishing or moving place fields in addition to changes in spatial information (SI). These metrics were comparable between groups. Remarkably, previously inactive CPP-PCs (with place fields emerging de novo on T2) had significantly weaker SI increases than SAL-PCs. On T3, when one object was moved, CPP-PCs showed reduced center-of-mass (COM) shift of their place fields. Indeed, a subset of SAL-PCs with large COM shifts (>7 cm) was largely absent in the CPP condition. Notably, for SAL-PCs that exhibited COM shifts, those initially close to the moving object followed the trajectory of the object, whereas those far from the object did the opposite. Our results strongly suggest that the SI changes and COM shifts of place fields that occur during the OPM task reflect key dynamic properties that are mediated by NMDARs and might be responsible for binding object identity with location.
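A small sketch of the center-of-mass (COM) shift measure mentioned above, assuming a binned 2D firing-rate map per trial; the bin size and the toy maps are illustrative assumptions, not the study's data.

import numpy as np

def field_com(rate_map):
    """Center of mass of a place field: the firing-rate-weighted mean bin position."""
    rows, cols = np.indices(rate_map.shape)
    total = rate_map.sum()
    return np.array([(rows * rate_map).sum(), (cols * rate_map).sum()]) / total

def com_shift(map_sample, map_choice, bin_cm=2.0):
    """COM shift (in cm) of a place field between two trials."""
    return np.linalg.norm(field_com(map_choice) - field_com(map_sample)) * bin_cm

# Toy rate maps: a field that moves two bins along one axis between trials.
m1 = np.zeros((20, 20))
m1[8:11, 8:11] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
m2 = np.roll(m1, 2, axis=1)
print(com_shift(m1, m2))   # about 4 cm for 2-cm bins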
Seismic wave generation systems and methods for cased wells
Minto, James [Houston, TX; Sorrells, Martin H [Huffman, TX; Owen, Thomas E [Helotes, TX; Schroeder, Edgar C [San Antonio, TX
2011-03-29
A vibration source (10) includes an armature bar (12) having a major length dimension, and a driver (20A) positioned about the armature bar. The driver (20A) is movably coupled to the armature bar (12), and includes an electromagnet (40). During operation the electromagnet (40) is activated such that the driver (20A) moves with respect to the armature bar (12) and a vibratory signal is generated in the armature bar. A described method for generating a vibratory signal in an object includes positioning the vibration source (10) in an opening of the object, coupling the armature bar (12) to a surface of the object within the opening, and activating the electromagnet (40) of the driver (20A) such that the driver moves with respect to the armature bar (12) and a vibratory signal is generated in the armature bar and the object.
Robust object tracking based on self-adaptive search area
NASA Astrophysics Data System (ADS)
Dong, Taihang; Zhong, Sheng
2018-02-01
Discriminative correlation filter (DCF) based trackers have recently achieved excellent performance with great computational efficiency. However, DCF based trackers suffer from boundary effects, which result in unstable performance in challenging situations exhibiting fast motion. In this paper, we propose a novel method to mitigate this side-effect in DCF based trackers. We change the search area according to a prediction of the target's motion. When the object moves fast, a broad search area alleviates boundary effects and preserves the probability of locating the object. When the object moves slowly, a narrow search area prevents the influence of useless background information and improves computational efficiency, helping attain real-time performance. This strategy can substantially reduce boundary effects in situations exhibiting fast motion and motion blur, and it can be used in almost all DCF based trackers. Experiments on the OTB benchmark show that the proposed framework improves performance compared with the baseline trackers.
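A minimal sketch of the search-area adaptation idea: estimate target speed from the last few positions and widen or narrow the search window accordingly. The scale factors and speed thresholds are assumptions chosen for illustration; the correlation filter itself (training and response computation) is not shown.

import numpy as np

def adaptive_search_size(history, base_size, slow=2.0, fast=8.0):
    """Choose the search-window side length from recent target motion.

    history: list of (x, y) target centers from the last few frames.
    base_size: nominal window size used at normal target speed.
    """
    if len(history) < 2:
        return base_size
    steps = np.diff(np.asarray(history, dtype=float), axis=0)
    speed = np.linalg.norm(steps, axis=1).mean()     # mean displacement per frame
    if speed > fast:     # fast motion: enlarge window to soften boundary effects
        return int(base_size * 1.5)
    if speed < slow:     # slow motion: shrink window, less background, lower cost
        return int(base_size * 0.75)
    return base_size

print(adaptive_search_size([(10, 10), (11, 10), (12, 11)], base_size=64))   # slow target
print(adaptive_search_size([(10, 10), (22, 18), (35, 25)], base_size=64))   # fast target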
C IV absorption-line variability in X-ray-bright broad absorption-line quasi-stellar objects
NASA Astrophysics Data System (ADS)
Joshi, Ravi; Chand, Hum; Srianand, Raghunathan; Majumdar, Jhilik
2014-07-01
We report the kinematic shift and strength variability of the C IV broad absorption-line (BAL) trough in two high-ionization X-ray-bright quasi-stellar objects (QSOs): SDSS J085551+375752 (at z_em ≈ 1.936) and SDSS J091127+055054 (at z_em ≈ 2.793). Both these QSOs have shown a combination of profile shifts and the appearance and disappearance of absorption components belonging to a single BAL trough. The observed average kinematic shift of the whole BAL profile resulted in an average deceleration of about −0.7 ± 0.1 and −2.0 ± 0.1 cm s⁻² over rest-frame time-spans of 3.11 and 2.34 yr for SDSS J085551+375752 and SDSS J091127+055054, respectively. To our knowledge, these are the largest kinematic shifts known, exceeding by factors of about 2.8 and 7.8 the highest deceleration reported in the literature; this makes both objects potential candidates to investigate outflows using multiwavelength monitoring of their line and continuum variability. We explore various possible mechanisms to understand the observed profile variations. Outflow models involving many small self-shielded clouds, probably moving in a curved path, provide the simplest explanation for the C IV BAL strength and velocity variations, along with the X-ray-bright nature of these sources.
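As a rough consistency check (a back-of-the-envelope estimate derived here under the assumption that the mean deceleration acts uniformly over the quoted rest-frame baseline, not a figure quoted from the paper), the first deceleration implies a bulk velocity shift of roughly

\Delta v \simeq |\bar{a}|\,\Delta t_{\mathrm{rest}} \approx 0.7~\mathrm{cm\,s^{-2}} \times \left(3.11~\mathrm{yr} \times 3.16\times10^{7}~\mathrm{s\,yr^{-1}}\right) \approx 6.9\times10^{7}~\mathrm{cm\,s^{-1}} \approx 690~\mathrm{km\,s^{-1}},

and the same arithmetic gives roughly 1500 km s⁻¹ for SDSS J091127+055054 over its 2.34 yr baseline.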
Misra, A; Burke, JF; Ramayya, A; Jacobs, J; Sperling, MR; Moxon, KA; Kahana, MJ; Evans, JJ; Sharan, AD
2014-01-01
Objective: The authors report methods developed for the implantation of micro-wire bundles into mesial temporal lobe structures and subsequent single neuron recording in epileptic patients undergoing in-patient diagnostic monitoring. This is done with the intention of lowering the perceived barriers to routine single neuron recording from deep brain structures in the clinical setting. Approach: Over a 15-month period, 11 patients were implanted with platinum micro-wire bundles into mesial temporal structures. Protocols were developed for A) monitoring electrode integrity through impedance testing, B) ensuring continuous 24-7 recording, C) localizing micro-wire position and “splay” pattern and D) monitoring grounding and referencing to maintain the quality of recordings. Main result: Five common modes of failure were identified: 1) broken micro-wires from acute tensile force, 2) broken micro-wires from cyclic fatigue at stress points, 3) poor in-vivo micro-electrode separation, 4) motion artifact and 5) deteriorating ground connection and subsequent drop in common mode noise rejection. Single neurons have been observed up to 14 days post implantation and on 40% of micro-wires. Significance: Long-term success requires detailed review of each implant by both the clinical and research teams to identify failure modes, and appropriate refinement of techniques while moving forward. This approach leads to reliable unit recordings without prolonging operative times, which will help increase the availability and clinical viability of human single neuron data. PMID:24608589
Final Technical Report: Development of Post-Installation Monitoring Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polagye, Brian
2014-03-31
The development of approaches to harness marine and hydrokinetic energy at large scale is predicated on the compatibility of these generation technologies with the marine environment. At present, aspects of this compatibility are uncertain. Demonstration projects provide an opportunity to address these uncertainties in a way that moves the entire industry forward. However, the monitoring capabilities to realize these advances are often under-developed in comparison to the marine and hydrokinetic energy technologies being studied. Public Utility District No. 1 of Snohomish County has proposed to deploy two 6-meter diameter tidal turbines manufactured by OpenHydro in northern Admiralty Inlet, Puget Sound, Washington. The goal of this deployment is to provide information about the environmental, technical, and economic performance of such turbines that can advance the development of larger-scale tidal energy projects, both in the United States and internationally. The objective of this particular project was to develop environmental monitoring plans in collaboration with resource agencies, while simultaneously advancing the capabilities of monitoring technologies to the point that they could be realistically implemented as part of these plans. In this, the District was joined by researchers at the Northwest National Marine Renewable Energy Center at the University of Washington, Sea Mammal Research Unit, LLC, H.T. Harvey & Associates, and Pacific Northwest National Laboratory. Over a two-year period, the project team successfully developed four environmental monitoring and mitigation plans that were adopted as a condition of the operating license for the demonstration project issued by the Federal Energy Regulatory Commission in March 2014. These plans address near-turbine interactions with marine animals, the sound produced by the turbines, marine mammal behavioral changes associated with the turbines, and changes to benthic habitat associated with colonization of the subsea base support structure. In support of these plans, the project team developed and field tested a strobe-illuminated stereo-optical camera system suitable for studying near-turbine interactions with marine animals. The camera system underwent short-term field testing at the proposed turbine deployment site and a multi-month endurance test in shallower water to evaluate the effectiveness of biofouling mitigation measures for the optical ports on camera and strobe pressure housings. These tests demonstrated that the camera system is likely to meet the objectives of the near-turbine monitoring plan and operate, without maintenance, for periods of at least three months. The project team also advanced monitoring capabilities related to passive acoustic monitoring of marine mammals and monitoring of tidal currents. These capabilities will be integrated in a recoverable monitoring package that has a single interface point with the OpenHydro turbines, connects to shore power and data via a wet-mate connector, and can be recovered to the surface for maintenance and reconfiguration independent of the turbine. A logical next step would be to integrate these instruments within the package, such that one instrument can trigger the operation of another.
NASA Astrophysics Data System (ADS)
Golubovic, Leonardo; Knudsen, Steven
2017-01-01
We consider the general problem of modeling the dynamics of objects sliding on moving strings. We introduce a powerful computational algorithm that can be used to investigate the dynamics of objects sliding along non-relativistic strings. We use the algorithm to numerically explore the fundamental physics of sliding climbers on a unique class of dynamical systems, Rotating Space Elevators (RSE). Objects sliding along RSE strings do not require internal engines or propulsion to be transported from the Earth's surface into outer space. By extensive numerical simulations, we find that sliding climbers may display interesting non-linear dynamics exhibiting both quasi-periodic and chaotic states of motion. While our main interest in this study is the climber dynamics on RSEs, our results for the dynamics of sliding objects are of more general interest. In particular, we designed tools capable of dealing with strongly nonlinear phenomena involving moving strings of any kind, such as the chaotic dynamics of sliding climbers observed in our simulations.
Migration of comets to near-Earth space
NASA Astrophysics Data System (ADS)
Ipatov, S. I.
The orbital evolution of more than 21000 Jupiter-crossing objects under the gravitational influence of planets was investigated. For orbits close to that of Comet 2P, the mean collision probabilities of Jupiter-crossing objects with the terrestrial planets were greater by two orders of magnitude than for some other comets. For initial orbital elements close to those of Comets 2P, 10P, 44P, and 113P, a few objects (<0.1%) got Earth-crossing orbits with semi-major axes a < 2 AU and aphelion distances Q < 4.2 AU and moved in such orbits for more than 1 Myr (up to tens or even hundreds of Myr). Some of them even got inner-Earth orbits (Q < 0.983 AU) and Aten orbits for millions of years. Most former trans-Neptunian objects that have typical near-Earth object orbits moved in such orbits for millions of years, so during most of this time they were extinct comets or disintegrated into mini-comets.
ERIC Educational Resources Information Center
Möhring, Wenke; Frick, Andrea
2013-01-01
In this study, 6-month-olds' ability to mentally rotate objects was investigated using the violation-of-expectation paradigm. Forty infants watched an asymmetric object being moved straight down behind an occluder. When the occluder was lowered, it revealed the original object (possible) or its mirror image (impossible) in one of five…
Representational Momentum and Children's Sensori-Motor Representations of Objects
ERIC Educational Resources Information Center
Perry, Lynn K.; Smith, Linda B.; Hockema, Stephen A.
2008-01-01
Recent research has shown that 2-year-olds fail at a task that ostensibly only requires the ability to understand that solid objects cannot pass through other solid objects. Two experiments were conducted in which 2- and 3-year-olds judged the stopping point of an object as it moved at varying speeds along a path and behind an occluder, stopping…
ERIC Educational Resources Information Center
Humbert, Richard
2010-01-01
A force acting on just part of an extended object (either a solid or a volume of a liquid) can cause all of it to move. That motion is due to the transmission of the force through the object by its material. This paper discusses how the force is distributed to all of the object by a gradient of stress or pressure in it, which creates the local…
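A minimal one-dimensional version of this argument, stated as an equation (the uniform elastic rod is an assumption used only for illustration): with axial displacement u(x,t), density rho, and Young's modulus E, the momentum balance of each material element is

\rho\,\frac{\partial^{2} u}{\partial t^{2}} \;=\; \frac{\partial \sigma}{\partial x},
\qquad
\sigma \;=\; E\,\frac{\partial u}{\partial x},

so a force applied at one end of the rod is felt elsewhere only through the stress gradient it sets up, which propagates along the rod at the bar-wave speed c = \sqrt{E/\rho}; only once that gradient reaches the far end does the far end begin to move.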
A biological hierarchical model based underwater moving object detection.
Shen, Jie; Fan, Tanghuai; Tang, Min; Zhang, Qian; Sun, Zhen; Huang, Fengchen
2014-01-01
Underwater moving object detection is key to many underwater computer vision tasks, such as object recognition, locating, and tracking. Given the remarkable visual sensing abilities of underwater animals, the visual mechanisms of aquatic animals are generally regarded as cues for establishing bionic models that are more adaptive to underwater environments. However, low accuracy rates and the absence of prior knowledge learning limit their adoption in underwater applications. Aiming to solve the problems originating from inhomogeneous illumination and unstable backgrounds, the visual information sensing and processing pattern of the frog eye is imitated to produce a hierarchical background model for detecting underwater objects. Firstly, the image is segmented into several subblocks. The intensity information is extracted to establish a background model which can roughly identify the object and background regions. The texture feature of each pixel in the rough object region is further analyzed to generate the object contour precisely. Experimental results demonstrate that the proposed method gives a better performance. Compared to the traditional Gaussian background model, the completeness of the object detection is 97.92%, with only 0.94% of the background region included in the detection results.
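A rough sketch of the first, intensity-based stage described above: divide the frame into sub-blocks, keep a running per-block background intensity, and flag blocks whose current mean departs from it. The block size, learning rate, and threshold are assumptions, and the texture-based contour refinement stage is not shown.

import numpy as np

class BlockBackgroundModel:
    """Coarse block-wise intensity background model for underwater frames."""

    def __init__(self, shape, block=16, alpha=0.05, thresh=15.0):
        self.block, self.alpha, self.thresh = block, alpha, thresh
        self.bg = None                                   # per-block background mean
        self.grid = (shape[0] // block, shape[1] // block)

    def _block_means(self, frame):
        b, (gh, gw) = self.block, self.grid
        return frame[:gh * b, :gw * b].reshape(gh, b, gw, b).mean(axis=(1, 3))

    def update(self, frame):
        means = self._block_means(frame.astype(np.float32))
        if self.bg is None:
            self.bg = means.copy()
        mask = np.abs(means - self.bg) > self.thresh     # rough object blocks
        self.bg = (1 - self.alpha) * self.bg + self.alpha * means
        return mask

# Toy frames: uneven illumination plus a bright moving blob.
rng = np.random.default_rng(2)
model = BlockBackgroundModel((128, 128))
for t in range(20):
    frame = 60 + 20 * np.linspace(0, 1, 128)[None, :] + rng.normal(0, 3, (128, 128))
    frame[40:60, 5 * t:5 * t + 20] += 80                 # moving object
    mask = model.update(frame)
print(int(mask.sum()), "of", mask.size, "blocks flagged as object in the last frame")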
Franz, A; Triesch, J
2010-12-01
The perception of the unity of objects, their permanence when out of sight, and the ability to perceive continuous object trajectories even during occlusion belong to the first and most important capacities that infants have to acquire. Despite much research a unified model of the development of these abilities is still missing. Here we make an attempt to provide such a unified model. We present a recurrent artificial neural network that learns to predict the motion of stimuli occluding each other and that develops representations of occluded object parts. It represents completely occluded, moving objects for several time steps and successfully predicts their reappearance after occlusion. This framework allows us to account for a broad range of experimental data. Specifically, the model explains how the perception of object unity develops, the role of the width of the occluders, and it also accounts for differences between data for moving and stationary stimuli. We demonstrate that these abilities can be acquired by learning to predict the sensory input. The model makes specific predictions and provides a unifying framework that has the potential to be extended to other visual event categories. Copyright © 2010 Elsevier Inc. All rights reserved.
Mehrdad, M; Park, H; Ramalingam, K; Fillos, J; Beckmann, K; Deur, A; Chandran, K
2014-01-01
New York City Environmental Protection in conjunction with City College of New York assessed the application of the anammox process in the reject water treatment using a moving bed biofilm reactor (MBBR) located at the 26th Ward wastewater treatment plant, in Brooklyn, NY. The single-stage nitritation/anammox MBBR was seeded with activated sludge and consequently was enriched with its own 'homegrown' anammox bacteria (AMX). Objectives of this study included collection of additional process kinetic and operating data and assessment of the effect of nitrogen loading rates on process performance. The initial target total inorganic nitrogen removal of 70% was limited by the low alkalinity concentration available in the influent reject water. Higher removals were achieved after supplementing the alkalinity by adding sodium hydroxide. Throughout startup and process optimization, quantitative real-time polymerase chain reaction (qPCR) analyses were used for monitoring the relevant species enriched in the biofilm and in the suspension. Maximum nitrogen removal rate was achieved by stimulating the growth of a thick biofilm on the carriers, and controlling the concentration of dissolved oxygen in the bulk flow and the nitrogen loading rates per surface area; all three appear to have contributed in suppressing nitrite-oxidizing bacteria activity while enriching AMX density within the biofilm.
Electromagnetic attachment mechanism
NASA Technical Reports Server (NTRS)
Monford, Leo G., Jr. (Inventor)
1992-01-01
An electromagnetic attachment mechanism is disclosed for use as an end effector of a remote manipulator system. A pair of electromagnets, each with a U-shaped magnetic core with a pull-in coil and two holding coils, are mounted by a spring suspension system on a base plate of the mechanism housing with end pole pieces adapted to move through openings in the base plate when the attractive force of the electromagnets is exerted on a strike plate of a grapple fixture affixed to a target object. The pole pieces are spaced by an air gap from the strike plate when the mechanism first contacts the grapple fixture. An individual control circuit and power source is provided for the pull-in coil and one holding coil of each electromagnet. A back-up control circuit connected to the two power sources and a third power source is provided for the remaining holding coils. When energized, the pull-in coils overcome the suspension system and air gap and are automatically de-energized when the pole pieces move to grapple and impose a preload force across the grapple interface. A battery backup is a redundant power source for each electromagnet in each individual control circuit and is automatically connected upon failure of the primary source. A centerline mounted camera and video monitor are used in cooperation with a target pattern on the reflective surface of the strike plate to effect targeting and alignment.
Obtaining representative ground water samples is important for site assessment and remedial performance monitoring objectives. Issues which must be considered prior to initiating a ground-water monitoring program include defining monitoring goals and objectives, sampling point...
Cost considerations for long-term ecological monitoring
Caughlan, L.; Oakley, K.L.
2001-01-01
For an ecological monitoring program to be successful over the long-term, the perceived benefits of the information must justify the cost. Financial limitations will always restrict the scope of a monitoring program, hence the program’s focus must be carefully prioritized. Clearly identifying the costs and benefits of a program will assist in this prioritization process, but this is easier said than done. Frequently, the true costs of monitoring are not recognized and are, therefore, underestimated. Benefits are rarely evaluated, because they are difficult to quantify. The intent of this review is to assist the designers and managers of long-term ecological monitoring programs by providing a general framework for building and operating a cost-effective program. Previous considerations of monitoring costs have focused on sampling design optimization. We present cost considerations of monitoring in a broader context. We explore monitoring costs, including both budgetary costs, what dollars are spent on, and economic costs, which include opportunity costs. Often, the largest portion of a monitoring program budget is spent on data collection, and other, critical aspects of the program, such as scientific oversight, training, data management, quality assurance, and reporting, are neglected. Recognizing and budgeting for all program costs is therefore a key factor in a program’s longevity. The close relationship between statistical issues and cost is discussed, highlighting the importance of sampling design, replication and power, and comparing the costs of alternative designs through pilot studies and simulation modeling. A monitoring program development process that includes explicit checkpoints for considering costs is presented. The first checkpoint occurs during the setting of objectives and during sampling design optimization. The last checkpoint occurs once the basic shape of the program is known, and the costs and benefits, or alternatively the cost-effectiveness, of each program element can be evaluated. Moving into the implementation phase without careful evaluation of costs and benefits is risky because if costs are later found to exceed benefits, the program will fail. The costs of development, which can be quite high, will have been largely wasted. Realistic expectations of costs and benefits will help ensure that monitoring programs survive the early, turbulent stages of development and the challenges posed by fluctuating budgets during implementation.
Television image compression and small animal remote monitoring
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Jackson, Robert W.
1990-01-01
It was shown that a subject can reliably discriminate a difference in video image quality (using a specific commercial product) for image compression levels ranging from 384 kbits per second to 1536 kbits per second. However, their discriminations are significantly influenced by whether or not the TV camera is stable or moving and whether or not the animals are quiescent or active, which is correlated with illumination level (daylight versus night illumination, respectively). The highest video rate used here was 1.54 megabits per second, which is about 18 percent of the so-called normal TV resolution of 8.4MHz. Since this video rate was judged to be acceptable by 27 of the 34 subjects (79 percent), for monitoring the general health and status of small animals within their illuminated (lights on) cages (regardless of whether the camera was stable or moved), it suggests that an immediate Space Station Freedom to ground bandwidth reduction of about 80 percent can be tolerated without a significant loss in general monitoring capability. Another general conclusion is that the present methodology appears to be effective in quantifying visual judgments of video image quality.
Method and apparatus for monitoring the power of a laser beam
Paris, R.D.; Hackel, R.P.
1996-02-06
A method for monitoring the power of a laser beam in real time is disclosed. At least one optical fiber is placed through the laser beam, where a portion of light from the laser beam is coupled into the optical fiber. The optical fiber may be maintained in a stationary position or moved periodically over a cross section of the laser beam to couple light from each area traversed. Light reaching both fiber ends is monitored according to frequency and processed to determine the power of the laser beam. 6 figs.